To help ensure that surface transportation—including public rights-of-way—is accessible, the ADA includes specific provisions for public entities (in Title II) and private entities (in Title III) for providing accessible transportation. Transportation-related requirements in the ADA and associated regulations vary by mode. In general, however, new vehicles that were purchased or leased after August 1990, and buildings or facilities that are constructed or altered after August 1990, must be accessible. Other requirements include the following:

- Transit authorities must provide comparable paratransit services to those individuals who are unable to use fixed-route bus or rail services because of a disability. These services are typically provided using wheelchair-accessible vans, small buses, or taxis.
- Existing intercity rail (Amtrak), commuter rail, light rail, and rapid rail systems were to have at least one accessible car per train as of July 26, 1995.
- Existing “key stations” in rapid rail, commuter rail, and light rail systems were to have been made accessible by 1993 unless certain extensions permitted by law were granted.
- Most existing stations currently served by Amtrak must be accessible by July 26, 2010.
- Commercial bus companies must provide an accessible bus with 48-hour notice, among other things.
- Entities such as hotels that generally offer transportation (shuttle service to the airport, for instance) must provide equivalent transportation services for people with disabilities.

For public rights-of-way, all projects for new construction that provide pedestrian facilities must incorporate accessible pedestrian features. Projects altering the usability of the roadway must also incorporate accessible pedestrian improvements. Figure 1 shows examples of some of these accessible transportation features. A number of federal agencies have a role in implementing, overseeing, and enforcing the ADA’s surface transportation requirements.
We will discuss them in more detail later in this report, but their general roles and responsibilities are as follows:

- The Access Board is an independent federal agency devoted to accessibility for people with disabilities. The board develops and maintains design criteria for facilities and transit vehicles (these design criteria are not enforceable until implemented in DOJ or DOT regulations). It also provides technical assistance and training on these requirements and on accessible design.
- DOJ has responsibility for publishing federal regulations governing access to public and commercial services. DOJ also has responsibility for investigating alleged ADA violations by private entities, including transportation providers, and conducting compliance reviews. DOJ refers allegations of ADA violations by public transportation entities to DOT for investigation. DOJ also may commence civil action in U.S. district court under certain circumstances.
- DOT is responsible for publishing federal regulations for carrying out the transportation provisions of the ADA. Offices within DOT have the following responsibilities:
  - Office of the Secretary of Transportation (OST) promulgated DOT’s regulations for the ADA and Section 504 of the Rehabilitation Act. OST also coordinates and approves DOT guidance and interpretation for transportation accessibility. For example, OST issued a Notice of Proposed Rulemaking (NPRM) in February 2006 in which it proposed changes to ADA regulations, including revising commuter and intercity rail station platform requirements and clarifying public transit providers’ responsibilities to modify their services when needed to ensure program accessibility. OST also sought comment on how to accommodate changes in mobility devices used by individuals with disabilities, among other things.
  - Federal Highway Administration (FHWA) is responsible for ensuring that state departments of transportation and other FHWA aid recipients provide program access to individuals with disabilities, including meeting the ADA’s pedestrian rights-of-way access requirements.
  - Federal Motor Carrier Safety Administration (FMCSA) informs commercial bus companies of their ADA responsibilities and collects data on ADA compliance.
  - Federal Railroad Administration (FRA) is responsible for overseeing federal grants to Amtrak, including ADA provisions.
  - Federal Transit Administration (FTA) is responsible for overseeing federal grants for public transportation, which includes compliance with ADA requirements for public transportation systems, including ADA-complementary paratransit.

Two other agencies—one inside DOT, the other outside—also have roles in ADA compliance. The National Highway Traffic Safety Administration (NHTSA), within DOT, establishes federal motor vehicle safety standards for platform lifts and vehicles equipped with platform lifts (including commercial buses and public transportation vehicles). NHTSA is also responsible for ADA compliance of state motor vehicle agencies that receive federal funds. In addition, the National Council on Disability (NCD), an independent federal agency, gathers information about the implementation, effectiveness, and impact of the ADA. NCD also reviews and evaluates federal policies, programs, practices, and procedures concerning people with disabilities, as well as all statutes and regulations pertaining to federal programs that assist people with disabilities, to assess their effectiveness in meeting the needs of those they serve. Several major interest groups and industry associations also play a role. For example, the Disability Rights Education and Defense Fund provides ADA-related training, technical assistance, and legal services, and advocates on behalf of people with disabilities.
Also, the American Public Transportation Association has an Access Committee designed to promote successful implementation of the transportation provisions of the ADA by facilitating information sharing and monitoring and reporting to its members on the status of pending litigation, among other activities. A number of reports by national organizations indicate that transportation accessibility has improved since Congress passed the ADA. For example, NCD reported that the ADA has resulted in a significant expansion of lift- and ramp-equipped buses, more accessible fare collection technology, and increased availability of formats for disseminating accessible information. Because of increased regulation, vehicles are of higher quality, and travel has become more efficient. However, disability advocates have said (and many federal agencies and industry associations agree) that there are still problems. In a 2002 survey conducted by DOT’s Bureau of Transportation Statistics, a greater percentage of people with disabilities reported having problems with several modes of transportation as compared with people without disabilities (see fig. 2). Complaints and lawsuits, among other sources of information, indicate accessibility problems persist. DOJ has referred more than 500 ADA-related surface transportation complaints to DOT since 2000 and is investigating 36 additional cases, as of July 2007, according to DOJ officials. Also, additional complaints go directly to DOT. Finally, private parties have filed numerous lawsuits alleging violations of surface transportation and public rights-of-way accessibility requirements. Providing accessible transportation and public rights-of-way can be expensive, especially if an entity has to modify existing structures or purchase new equipment. Congress recognized this and phased in many of the requirements over time. 
Unlike other situations in which Congress identifies transportation priorities and provides grants or other funding sources to help entities address those priorities, there are few funds that are specifically targeted for ADA compliance. The ADA is a civil rights law, not a transportation program; however, many federal transportation funding sources can be used to comply with ADA requirements. (See app. I for more information on these sources.) Other than for public transit, the extent of compliance with the ADA’s requirements for surface transportation and public rights-of-way is unknown because little reliable information is available, although there are indications that accessibility is improving. DOT collects some accessibility data from urban public transit agencies and helped fund several surveys to determine certain accessibility information for rural and specialized transportation services. Much of the data for other modes, however, are either unreliable or still being developed. For public transit, data are available on the percent of vehicles and stations that are wheelchair accessible in urban areas. DOT reports that accessibility in the urban transit vehicle fleet is increasing as new, accessible vehicles are replacing older ones, since the ADA requires that all new or refurbished transit vehicles be accessible. In 1989, before passage of the ADA, 36 percent of public transit buses in the United States were accessible. By 2005, 97 percent were lift- or ramp-equipped, according to FTA’s National Transit Database. However, accessibility varies significantly by mode of transportation. For example, only 51 percent of commuter railcars were accessible in 2005. ADA regulations do not require that transportation providers make all railcars accessible immediately; rather, that they make the fleet accessible over time as they purchase or lease new cars, and that they provide at least one accessible car per train. 
According to FTA officials, transit buses are more likely to be accessible than railcars because, on average, railcars have a longer life span. For example, buses are replaced about every 10 to 15 years, while, according to Amtrak officials, railcars are planned for replacement after 30 to 40 years but may last over 50 years under certain circumstances. The ADA also requires that new transit facilities (including stations) and alterations to existing facilities comply with federal accessibility standards, and FTA has tracked this since 2002. By 2005, transit agencies reported through FTA’s National Transit Database that 71 percent of all transit stations were ADA-compliant. Also, while limited, some dated estimates of accessibility in rural areas and for special service transportation exist. In a survey conducted by the Community Transportation Association of America in 2000, an estimated 60 percent of the transit fleet in rural areas was lift- or ramp-equipped, as compared with 40 percent in 1994. Also, in 2002, approximately 37,700 special service vehicles were used by approximately 4,800 special service providers, including religious organizations, senior centers, rehabilitation centers, and other private and nonprofit organizations, to transport seniors and persons with disabilities. The majority of these special service providers were located in rural areas. Of the special service vehicles in use in 2002, about 76 percent (approximately 28,700 vehicles) were accessible. Although available data indicate increasing accessibility of transit vehicles, requirements in ADA regulations extend beyond having lift- and ramp-equipped vehicles. Other requirements include properly maintaining the vehicle lifts and ramps and announcing transit stops. According to an FTA official, there are no national data on compliance with these two requirements, although FTA’s periodic compliance reviews provide the agency with some information about the state of compliance.
We heard from a number of federal agencies and local and national disability groups that these areas continue to be a problem for transit agencies, making it difficult for individuals with disabilities to access the public transit system. FTA also maintains data on key rail stations, which were required by the ADA to be fully accessible by 1993 (with extensions permitted through July 2020 for extraordinarily expensive structural changes). According to FTA, as of June 2007, of the 687 key rail stations identified in transit systems nationwide, 321 were found to be fully compliant with ADA requirements, 311 were functionally accessible but not fully compliant, 28 were not accessible, and 27 were proceeding under approved time extensions. While the number of ADA-compliant stations is still relatively low, this is a substantial improvement over the 52 key rail stations (8 percent) that FTA identified as fully compliant in 2000. In addition to fixed-route transit, FTA also oversees ADA-complementary paratransit, which will be discussed in the next section. For public rights-of-way and many modes of surface transportation, such as intercity passenger rail, less is known about ADA compliance because much of the information is unreliable or still being developed. DOT and DOJ data indicate that relatively few individuals file transportation-related ADA complaints with federal agencies; however, complaints are not a reliable indicator of compliance. For intercity passenger rail service, Amtrak has data on the accessibility of its railcars but is still developing information on station accessibility. Amtrak officials indicate that all new or remanufactured Amtrak equipment is accessible, in accordance with the ADA. For example, all of the cars on the Acela high-speed rail service in operation on the Northeast Corridor are accessible because the cars were manufactured and placed in service in or around 2000-2001, according to Amtrak officials. 
As of June 2007, 82 percent of Amtrak’s 1,451 passenger cars were fully accessible to people in wheelchairs. FRA officials said that Amtrak appears to be on schedule to have all of its passenger cars ADA-compliant by the end of 2008. Every train is also required to have a number of wheelchair spaces (for those who want to sit in their chairs) and accessible seats (for those who want to store their chairs and sit in a seat) equal to the number of coaches. For instance, if a train has four passenger cars it must have at least four wheelchair spaces and four accessible seats somewhere on that train (but not more than two in each car). Amtrak has policies and procedures in place to ensure that these requirements are met. The requirements are specifically explained in the station master’s guidance for each route and updated every 6 months when schedules change. Amtrak keeps an internal record of instances when it is unable to meet the accessibility requirements for each train, due to such things as mechanical failure. In addition to requirements for accessible cars, the ADA requires Amtrak to make most of the stations that it serves fully accessible by July 2010, even if Amtrak does not own the station. According to Amtrak, transportation personnel check each station for wheelchair accessibility and report that information to Amtrak every 6 months for inclusion in Amtrak’s timetable. As of June 2007, 45 percent of the 479 stations that Amtrak serves were fully accessible to people in wheelchairs. An additional 31 percent had barrier-free access between the street or parking lot, station platform, and trains, although individual facilities (such as restrooms and ticket counters) may not be accessible. Amtrak officials said that these stations serve 97 percent of passenger boardings and deboardings. ADA requirements extend beyond wheelchair accessibility, however, such as requiring accessible telephones and detectable warnings at platforms. 
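The per-train seating requirement described above is essentially arithmetic, and a short sketch can make the rule concrete. The function below is illustrative only: the name `meets_seating_rule` and the data layout are our own, and the per-car cap encodes one reading of the “not more than two in each car” proviso, not Amtrak’s actual procedures.

```python
def meets_seating_rule(cars):
    """Check a train against the per-train seating rule described above.

    cars: list of (wheelchair_spaces, accessible_seats) tuples, one per
    passenger car. Returns True if the train carries wheelchair spaces
    and accessible seats each at least equal in number to its passenger
    cars, with no more than two of each in any single car (one reading
    of the 'not more than two in each car' proviso).
    """
    n = len(cars)
    total_spaces = sum(spaces for spaces, _ in cars)
    total_seats = sum(seats for _, seats in cars)
    per_car_ok = all(spaces <= 2 and seats <= 2 for spaces, seats in cars)
    return total_spaces >= n and total_seats >= n and per_car_ok

# The four-car example from the text: four wheelchair spaces and four
# accessible seats, spread so no car carries more than two of either.
print(meets_seating_rule([(2, 2), (2, 2), (0, 0), (0, 0)]))  # True
# Concentrating all four of each in one car violates the per-car cap.
print(meets_seating_rule([(4, 4), (0, 0), (0, 0), (0, 0)]))  # False
```

As the second call shows, the rule constrains the distribution across cars as well as the train-wide totals.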
One difficulty that Amtrak cited in making existing stations accessible is that ADA regulations define “stations” to include platforms, making it difficult to determine who is responsible for making and paying for changes when one entity owns the station building (often a public entity), and another entity owns the platforms (typically a private entity such as a freight railroad). Amtrak officials said that this is impeding Amtrak’s overall progress in ensuring that stations are ADA-compliant. Although Amtrak has had 17 years to make its stations accessible, in its 2008 grant and legislative request, Amtrak said that insufficient time and funding are likely to prevent full compliance at all station stops by the required deadline. Amtrak estimated the cost of compliance for all stations to be approximately $250 million and requested $50 million in ADA funding for fiscal year 2008 above its base grant request. In addition, Amtrak asked Congress for an extension of at least 5 years after promulgation of DOT’s final regulations on station platforms (discussed in the next paragraph) to meet its statutory obligation on ADA compliance. Amtrak officials said that they requested the extension in part because DOT issued an NPRM in February 2006 that would raise the required height for certain new intercity passenger and commuter rail station platforms to eliminate the need for wheelchair lifts. DOT plans to finalize this rule by early 2008 and has received a significant number of comments on it. Amtrak officials said that, if DOT finalizes the regulation in its current form, complying with the rule would require significantly more money (potentially more than twice the cost without the proposed rule) and time. In addition, Amtrak officials expressed concern that if the cost of complying with that regulation becomes too high, Amtrak may have to eliminate service at certain smaller stations rather than make those stations fully accessible.
The officials also expect to receive complaints from freight railroads that the elevated platforms would interfere with their freight railcars that run on the same tracks as Amtrak. On the other hand, DOT officials believe the cost differential between the current requirements and the additional proposed requirements is negligible for many of the Amtrak stations to which these proposed requirements would apply. Moreover, FRA officials believe that conflicts with freight traffic are likely to be minimal and that there are well known and moderately priced techniques that can mitigate conflicts that occur. Further, the other station accessibility requirements—such as for restrooms, parking, signage, and curb ramps—have not changed since 1991, and DOT officials said that their proposed regulations should not be at fault for any delay or other problems Amtrak may face in addressing ADA requirements. In part due to Amtrak’s slow progress in implementing the ADA, FRA’s grant agreements for fiscal years 2006 and 2007 required Amtrak to assess the accessibility of the stations that it serves, identify the steps needed to make them accessible, and report to FRA by September 2006 and May 2007 on its status. Amtrak is in the process of surveying intercity passenger rail stations to determine their accessibility and has hired a contractor to help in this effort, but the study has not been completed to date, limiting the available information on ADA compliance. Of the 479 stations that are required to be fully accessible, Amtrak had assessed 371 (77 percent) by June 2007 and expects to have the remaining assessments completed by December 2008, according to an Amtrak official. Amtrak reported some preliminary findings in a briefing to its board of directors in June 2007; however, the briefing did not include specific, station-by-station information on accessibility, the estimated cost to bring each station into compliance, or a schedule for achieving full compliance. 
An Amtrak official said that Amtrak cannot determine the cost or time frame for achieving compliance without knowing whether DOT’s proposed requirements for rail platforms will be finalized or whether Congress will appropriate additional funds. Additionally, ADA regulations require that DOT periodically conduct reviews of Amtrak’s compliance with ADA requirements. FRA does not conduct such reviews, further limiting the availability of data on Amtrak’s ADA compliance. There are limited data available on ADA compliance among commercial bus companies. ADA regulations require all commercial bus companies to provide accessible service within 48 hours of a request—either using the company’s own buses or contracting services from another company. Large, fixed-route companies must also purchase lift-equipped buses when acquiring new vehicles and were to have 50 percent of their vehicle fleets accessible by 2006 and 100 percent accessible by 2012. DOT regulations require commercial bus companies to report to FMCSA annually on the number of their accessible vehicles, requests for accessible service, and their ability to meet those requests. FMCSA includes information about this reporting requirement on its Web site and, starting in 2004, sent letters and e-mails to all registered companies reminding them of their obligations. The two major industry associations also urged their members to respond to FMCSA’s data request. However, only 13 percent of companies reported these required data in 2006, compared with 21 percent in 2005 and 16 percent in 2004, and FMCSA does not have the authority to fine companies for failure to comply with these reporting requirements. FMCSA also does not verify the reliability of the commercial bus companies’ self-reported data before forwarding the data to DOJ.
Furthermore, DOT’s regulations stated that DOT would analyze data on demand-response commercial bus companies by October 2006 to determine the extent of ADA compliance and evaluate whether the agency’s regulations should be revised. DOT was also to conduct a similar study for fixed-route commercial buses by October 2007. Neither study has been completed to date. After an internal disagreement within DOT about which agency is responsible for conducting these studies, officials from DOT’s OST and FMCSA recently decided—in response to our preliminary findings—to work jointly to produce these reports, with participation from FTA and other DOT modal administrations. According to DOT, its General Counsel’s office will meet with key officials from FMCSA and other concerned DOT organizations to finalize plans for completing the study. DOT expects to issue its results during the first quarter of calendar year 2008. In the meantime, however, the status of compliance of commercial bus companies is unknown. According to agency officials, FMCSA has received two ADA-related complaints regarding commercial bus passenger service since 2001, and DOJ has received relatively few complaints about the accessibility of commercial buses. However, FMCSA identified several possible ADA violations among small commercial bus companies during compliance reviews and forwarded that information to DOJ for possible investigation. Furthermore, despite the small number of complaints to federal agencies, media reports and several recent and ongoing court cases indicate that there may be compliance issues among some commercial bus companies. For example, in November 2006, Peter Pan Bus Lines brought suit against FMCSA, alleging that the agency had not ensured that another commercial bus company was complying with the ADA.
Little is known about compliance by small charter-tour companies, but according to DOT officials, limited anecdotal evidence suggests that many such companies are unaware of ADA rules or do not comply with them. There are no national data on the accessibility of public rights-of-way, in part because there are no requirements for either FHWA or DOJ to collect such information, although individual localities may collect it. The ADA does not require localities to retrofit existing public rights-of-way (such as curb ramps) to make them accessible, unless doing so is deemed necessary to ensure public access to programs or services—including state and local government offices, places of public accommodation, places of employment, and transportation, among other things. However, after January 26, 1992, any new construction, alteration, or renovation (including road resurfacing) must comply with DOJ regulations. Many localities are also required to inventory the accessibility of public rights-of-way under their jurisdiction as part of developing an ADA-required transition plan for improving that accessibility. Many of the national and local disability advocacy groups we spoke with, however, said that access to public rights-of-way is still a major barrier to the mobility of people with disabilities. For example, a local disability advocacy group cited several recent instances in which a locality undertook a major construction project in its downtown area and the renovated sidewalks and medians did not include curb ramps and were inaccessible (see fig. 3). Some groups added that inaccessible routes to bus stops also hinder access to public transit. Also, we heard from local officials that, in some instances, curb ramps have been installed but are not fully compliant with federal regulations. For example, officials from one major urban area said that although the locality installed curb ramps, the ramps are too steep and are not well maintained.
One difficulty is determining who is responsible for making rights-of-way accessible. For example, providing access to bus stops can require coordination among the public transit provider, the local government office that oversees the street, and the local government office that oversees the sidewalk. FHWA officials agreed that no data are available on the status of compliance for public rights-of-way. However, FHWA has started visiting states to determine, using DOJ guidance as a tool, whether they have transition plans or other plans to meet accessibility obligations. While this will not provide data on actual accessibility, it should provide information on whether a state has a plan to meet accessibility requirements. There are also no national data on the accessibility of private transportation—including taxi and limousine service—because there are no requirements to collect this information. Available anecdotal information suggests some successes in improving access to private transportation, including rental car shuttles and hotel shuttles, but the lack of national data precludes determining the extent of accessibility among various private transportation providers. The ADA does not impose any fleet accessibility requirements for private providers and does not require that most individual vehicles (e.g., taxis) be accessible. Under ADA regulations, however, private providers must accommodate service animals (such as guide dogs) and may not discriminate against people with disabilities or charge them a premium for accessible service. Several private companies and trade associations told us that providers may choose not to purchase accessible vehicles because the economic benefits do not outweigh the additional overhead and maintenance expenses.
According to an official from FTA, there are no data at the national level to accurately measure how well entities are complying with the ADA’s requirement to provide complementary paratransit service to individuals with disabilities who are unable to use the fixed-route system. Individual transportation providers collect information on the number of paratransit rides provided and report these data to FTA, but the number of rides is not a good measure for determining ADA compliance because the data do not indicate, for example, whether transportation providers are granting rides in all eligible circumstances or whether response times are comparable to fixed-route service. Likewise, FTA collects data on the number of demand-response trips—that is, trips in which vehicles respond to passenger requests for service. While ADA-complementary paratransit trips constitute the majority of such trips, the FTA official said the two types of data are not interchangeable and cannot be used to determine the extent of compliance with paratransit requirements under the ADA. FTA officials noted that, while they do not have nationwide data on compliance with the requirement for ADA-complementary paratransit service, FTA does have standards that systems are expected to meet. FTA also has knowledge about the compliance of individual systems that it has reviewed or investigated. According to FTA officials, ADA compliance rates are generally high among the paratransit systems that they have reviewed. Paratransit ridership has increased since passage of the ADA, and although more individuals with disabilities are being served, anecdotal evidence suggests compliance with some ADA regulations is still a problem.
For example, according to Easter Seals Project ACTION and a 2005 National Council on Disability report, some paratransit providers deny rides to people who may be eligible under the law or fail to provide rides to eligible individuals in response to requests made the previous day, as required by federal regulation. Transit agencies also struggle to balance providing complementary paratransit service with the increased cost of accommodating a growing ridership. DOT and DOJ data indicate that relatively few individuals file transportation-related ADA complaints with federal agencies. Examples of these data are as follows:

- In 2005, the most recent year for which complete data were available, FTA received 124 ADA-related complaints, FRA received 22, and FHWA received 22.
- DOJ forwarded 112 transportation-related ADA complaints to DOT in 2005. According to DOT officials, many of these are included in the totals listed above.
- FMCSA has received at least two ADA-related complaints regarding commercial bus passenger service since 2001.

A relatively low number of federal complaints may not indicate a high level of compliance with regulations. For example, in another civil rights area, fair housing, the Department of Housing and Urban Development conducted several studies of discrimination against individuals looking for housing. Its findings indicated that discrimination occurred at higher rates than the number of complaints would suggest; one study showed that only 1 percent of individuals who believed they had experienced housing discrimination reported it to a government agency. We heard from a number of local and national disability groups that most transportation users are not aware they can file a complaint at the federal level. DOJ and DOT, which share responsibility for ADA oversight and enforcement, face three main difficulties in ensuring compliance with the ADA.
First, there are uneven levels of oversight and enforcement among the DOT modal administrations, leading to gaps for some transportation modes. Second, the same lack of data that precludes a clear understanding of the extent of compliance also prevents agencies from targeting oversight and enforcement activities and evaluating the effectiveness of these efforts. Third, DOT officials indicate their enforcement options are of limited use, which suggests a need for additional options. In a number of instances, compliance has not come through federal agency enforcement but through private citizens filing lawsuits and negotiating settlements. The ADA divides oversight and enforcement authority between DOJ and DOT, but there are differences depending on the type of transportation. Although some agencies have a framework in place that allows comprehensive oversight, the lack of such a framework in other agencies and the manner in which responsibility is shared result in gaps in oversight and enforcement for intercity passenger rail and commercial bus service and in possible duplication of effort for public rights-of-way. For public transit, DOJ and FTA have used formal means to clarify responsibilities and ensure coordinated and consistent oversight and enforcement. Under the ADA, responsibility for oversight and enforcement rests partly with DOT and partly with DOJ. In general, DOJ issues regulations that govern public rights-of-way, oversees and enforces compliance with those regulations, and has enforcement authority over public and private transportation providers. DOT issues regulations that govern both public and private transportation providers and oversees public providers’ compliance with those regulations. Under the regulations issued by both agencies, DOJ and DOT have authority to receive and investigate complaints of discrimination and to perform compliance reviews.
In addition, DOT’s modal authorities—primarily FTA, FHWA, and FRA—distribute federal grant money to many of the entities they oversee. Any recipient of federal financial assistance from DOT must certify that it is in compliance with applicable federal laws, including the ADA. DOJ, FTA, and FHWA have established an oversight and enforcement framework that includes investigating complaints and performing various types of reviews to identify noncompliance with regulations. For example, in response to a complaint, DOJ investigated a taxi company for refusing to provide a ride to a person who is blind and uses a guide dog. DOJ entered into a settlement agreement with the taxi company, which agreed to provide ADA training to all its current and future drivers and dispatchers. In another example, DOJ negotiated settlement agreements with six taxi service providers to eliminate surcharges or bans on travelers with service animals or wheelchairs. DOJ officials told us that because they receive few transportation-related complaints regarding private entities, and they consider transportation to be a high-priority area, DOJ investigates almost all transportation-related complaints that appear to state a violation. FTA and FHWA also have a record of receiving and investigating complaints. In one instance, complaints in one state regarding the installation of accessible pedestrian signals triggered FHWA to work with the state highway office to draft a plan to address pedestrian accessibility issues. Similarly, these agencies conduct reviews to determine compliance with their respective regulations. Following are examples of some of these reviews: In one such effort, DOJ initiated a program called Project Civic Access that, as of June 2007, had included reviews of 143 localities’ compliance with accessibility requirements, in some cases including public rights-of- way. 
DOJ selects the entity to be reviewed based on a number of criteria, including complaints, relative population of people with disabilities, and geographic diversity. These reviews usually result in a formal agreement between DOJ and the entity, which includes specific steps to be taken to come into compliance and a time line for completion. For example, DOJ conducted a review of the City of Omaha, Nebraska, and, based on the results of the review, entered into an agreement whereby the city agreed to provide, over a 9-year period, curb ramps at all intersections that had been built or modified since the effective date of the ADA.

FTA conducts at least two different types of oversight reviews of recipients of its grant programs and, in cases where it identifies noncompliance, works with the audited entities to ensure they comply. These oversight reviews include periodic comprehensive reviews of all grant recipients (such as statutorily required triennial reviews and state management reviews) and discretionary targeted ADA compliance reviews. The latter category is usually focused on one of the following discrete areas: ADA-complementary paratransit service; fixed-route bus lift or ramp maintenance and reliability; fixed-route bus stop announcements and route identification; rail stop announcements and route identification; or key, new, or renovated rail station compliance. For example, FTA found in the course of a compliance review that one local agency was improperly denying ADA-complementary paratransit service to some individuals who should be eligible under the ADA. The agency made several changes to its eligibility determination process in response to FTA's recommendations.

FHWA conducts three types of reviews of state transportation agencies—process reviews, program reviews, and compliance reviews—each of which can focus on ADA-related issues.
For example, when FHWA receives a complaint or other indication that a state may not be in compliance with the ADA, it conducts a compliance review to determine whether the state transportation agency is properly fulfilling its legal or regulatory responsibilities. Such a review determines whether the state is installing curb ramps in pedestrian facilities that are constructed with federal funds or when roads with pedestrian crossings are newly constructed or altered.

The two other modal administrations, FRA and FMCSA, have taken much more limited roles and do not have a framework for conducting ADA oversight. FRA does not have authority over Amtrak's day-to-day customer service, but Amtrak is defined by law as a public entity for ADA purposes and is, therefore, subject to DOT's regulatory enforcement provisions. ADA regulations require DOT to conduct investigations and initiate compliance procedures. FRA does not conduct any reviews that assess Amtrak's compliance with ADA regulations, although FRA is monitoring Amtrak's progress in assessing station accessibility. FRA officials also told us that they plan to conduct reviews of Amtrak's service delivery to riders with disabilities in the future. FRA officials said that when they receive ADA-related complaints about Amtrak, the first step in the investigation is to forward the complaint to Amtrak for its review, investigation, and possible settlement. FRA officials said they do not have sufficient resources to investigate all complaints themselves. They said that they review Amtrak's proposed resolution, in many cases contacting the complainant to determine if he or she is satisfied with the outcome. In a few instances, FRA did not agree with Amtrak's proposed resolution or determined that a complaint reflected an area of broad significance and intervened.
In those instances, FRA further investigated the complaint and had Amtrak sign agreements with FRA describing steps Amtrak would take to prevent future discrimination. FRA officials described other ways in which the agency provides ADA-related oversight of Amtrak besides reviewing complaints or conducting compliance reviews. For example, FRA provides oversight through administration of Amtrak's grant agreements, as previously discussed. In addition, FRA reviews and approves the plans or designs for certain new passenger cars and station platforms, upon referral by Amtrak. FRA officials have physically inspected new or soon-to-be renovated stations to give technical advice on how to ensure compliance, according to FRA. Nevertheless, without FRA conducting direct oversight, Amtrak is largely responsible for ensuring its own compliance with the ADA.

FMCSA's role is also limited: FMCSA officials told us that they have the authority to conduct oversight of ADA compliance by commercial buses but do not do so because of competing priorities for their oversight resources, such as safety issues. In addition, FMCSA has asserted that it does not have the authority to withhold or revoke a bus company's operating authority on the basis of noncompliance with the ADA, although a court disputed this position, reversing FMCSA's decision and directing the agency to reexamine the statute. FMCSA officials told us that they forward any complaints to DOJ because they do not have enforcement authority for the ADA. In addition, officials said that if they become aware of possible violations of ADA regulations, they will forward that information to DOJ for resolution. For example, as part of a concerted effort to inspect commercial buses for safety violations in 2005, FMCSA identified 10 possible instances of ADA violations and provided the information to DOJ for further review.
FMCSA officials also said that they are considering developing a checklist that would include some component of ADA compliance for use in some or all of their safety inspections, but this idea is in the very early stages of development.

FTA and DOJ have taken a formal step to clarify and strengthen their respective roles and ensure coordinated and consistent enforcement. In 2005, these two agencies signed a memorandum of understanding addressing each agency's role in ADA oversight and enforcement. The memorandum provides that FTA will, with assistance from DOJ, investigate suspected violations of the ADA, seek informal resolution in instances of noncompliance, and refer cases to DOJ or withhold federal funding if it is unable to resolve compliance issues. For its part, DOJ will, once FTA refers a case, pursue further enforcement action with coordination and assistance from FTA. Although the agreement has not resulted in any referrals from FTA to DOJ, officials from both agencies told us that simply having a formal relationship and a requirement to meet periodically has been helpful.

FRA and FMCSA do not have formal working relationships with DOJ or a memorandum of understanding to clarify their respective responsibilities in overseeing ADA compliance. Gaps appear in ADA oversight for Amtrak and commercial buses because responsibility is not clearly defined, as follows:
- Amtrak—FRA provides limited oversight of Amtrak but has not referred any suspected instances of noncompliance with ADA regulations to DOJ for further enforcement action.
- Commercial buses—FMCSA does not conduct oversight of commercial buses for compliance with ADA regulations. FMCSA conducts oversight of commercial buses for compliance with safety regulations, however, and, therefore, appears to be in an ideal position to conduct ADA oversight.

DOJ officials said they have responded to information provided by FMCSA and initiated reviews of some commercial bus operators.
DOJ officials also commended FMCSA for being proactive in sharing information and said that the informal relationship they have developed over the last 3 years has been mutually beneficial. However, neither FMCSA nor DOJ has a program in place to conduct ADA oversight reviews on an ongoing basis. While there does not appear to be a similar gap in oversight of public rights-of-way, DOJ and FHWA could also benefit from better coordination. DOJ and FHWA officials said they work closely on ADA issues, but they do not do so formally. Both agencies provide compliance assistance and conduct similar oversight of public rights-of-way, efforts that could overlap if the agencies are not aware of each other's activities. DOJ and FHWA officials told us that the agencies could benefit from better coordination by sharing data and expertise and by eliminating possible duplication of effort.

Most agencies lack the information needed to target their ADA enforcement efforts and to determine the effectiveness of their oversight activities. The exception is FTA, which collects data on accessibility and compliance through its triennial, state management, and ADA compliance reviews and uses this information to evaluate each grantee annually to determine the appropriate level of oversight required. FTA also focuses its ADA compliance efforts on areas that it has identified through experience and data analysis as problematic: paratransit operations, bus lift maintenance and usage, and stop announcements. By contrast, FRA, FMCSA, and FHWA lack reliable data to determine the extent of compliance with the ADA requirements for which they are responsible. Without this information, agencies cannot target their oversight activities, establish performance goals and measures, or monitor progress to gauge the effectiveness of their oversight efforts.
The general lack of data about ADA compliance at FMCSA, FRA, and FHWA is in marked contrast to those agencies' use of data to target oversight activities in other areas. For example, in reporting on FMCSA's motor carrier truck enforcement efforts in 2005, we noted that FMCSA's enforcement approach uses major risk factors identified as contributing to crashes and that FMCSA targets its enforcement resources at the motor carriers that it assesses as having the greatest crash risk. The agency uses information that it collects and maintains about carriers' safety performance (including crash history and results of roadside inspections and compliance reviews) to identify these unsafe carriers to be targeted. In addition, FMCSA has several information systems and a program to help it identify high-risk carriers and drivers and to assist it in enforcing safety regulations. FRA and, to a lesser extent, FHWA have similar programs to target oversight or enforcement based on collected information. For example, many of FHWA's division offices conduct risk assessments and use this information to target their oversight efforts for highway projects.

DOJ might have difficulty collecting information similar to that gathered by the DOT modal administrations because there are no ADA reporting requirements for most of the public and private entities over which DOJ has enforcement authority. One example, introduced earlier, is that many municipalities are required to develop transition plans about improving rights-of-way access but are not required to report this information. DOJ officials said that, based on their experience with Project Civic Access reviews conducted so far, most municipalities did not have a transition plan in place. However, this information is not specific enough to help DOJ target future entities to review.
In general, DOT’s modal administrations attempt to resolve instances of noncompliance informally by working with the offending entity to achieve a mutually satisfactory result. If these efforts are not successful, there are two enforcement options available: withholding federal funds or referring cases to DOJ for investigation and further enforcement action. DOT has rarely used these options, however. DOT regulations encourage resolving complaints and compliance issues informally before initiating stronger methods. We found informal ADA resolution processes in use at most DOT modal administrations, but not all, as follows: FTA and FHWA officials told us that they are generally successful in working with grantees to achieve compliance, usually by developing a list of problems and providing technical assistance. For example, if FTA identifies a deficiency in the course of a triennial or compliance review, FTA requires the entity to take steps to correct the deficiency and monitors its progress. FTA keeps reviews open until problems are resolved, which could occur quickly or take years. For example, entities sometimes refuse to comply due to competing priorities for funds, lack of expertise, or other reasons. In those instances, FTA continues to try to work with the entity. In the case of one transit agency, for example, FTA completed a compliance review in January 2001 and has been monitoring the agency on a quarterly basis since that time. For public rights-of-way, FHWA seeks ADA compliance through the investigation and resolution of complaints through a settlement agreement. FHWA also approves state standards and reviews projects constructed or programs funded with FHWA funding, training, and technical assistance. For Amtrak, FRA has entered into voluntary compliance agreements in some instances. 
For example, Amtrak and FRA signed a compliance agreement in which Amtrak agreed to develop ADA-related training after FRA had investigated a complaint from a customer who alleged poor treatment on the basis of his disability. However, FRA investigates few complaints about Amtrak because most complaints are forwarded to Amtrak for resolution. Although FMCSA uses informal resolution methods for its safety oversight activities, it does not do so for the ADA. FMCSA recently introduced a proposal to add ADA items to its safety audit of new commercial bus companies, but this would be for educational purposes and would not affect the outcome of the safety audit.

At all modal administrations, DOT officials said they have rarely used the following two available enforcement mechanisms:

Withholding funds—DOT agencies we spoke with had never used this enforcement option because, in most cases, withholding all or a portion of grant funds for noncompliance with ADA regulations is a lengthy and administratively complex process. DOT agencies are required to hold a hearing in front of, and gain approval from, the Secretary of Transportation prior to withholding funding. According to FRA and FTA, the process to withdraw any funding would not be taken lightly given its effect and the need for the Secretary to weigh all the factors involved. In addition, withholding all or a portion of a transportation provider's funding could affect the entire transit system and the mobility of all riders, including those with disabilities. For example, for issues other than the ADA, we have previously reported that FRA has not withheld funds from Amtrak for noncompliance with grant agreements—despite the legal authority to do so—because withholding grant funds would involve large sums and could have a severe impact on Amtrak's continued operations and the mobility of riders who depend on the service.
Finally, FHWA officials said that they have never withheld federal funding because they have been able to resolve compliance violations voluntarily.

Referral to DOJ—DOT modal administrations have the option of referring a case on ADA noncompliance to DOJ for enforcement action. However, to date, FHWA and FTA have each formally referred only one case to DOJ. FMCSA has not formally referred any cases, although it has provided information to DOJ on possible ADA violations, as previously mentioned. An FTA official said that, prior to implementing the memorandum of understanding, FTA did not have the formal working relationship necessary to provide an avenue for regular communication about ongoing cases. FTA officials also indicated that DOJ investigations can be lengthy and said there are a number of steps that FTA has to pursue internally before referring a case. In several instances, however, FTA collected sufficient proof of persistent noncompliance and indicated to the grantee its intent to refer the case to DOJ, according to FTA officials. In each instance, according to FTA, grantees have then indicated willingness to make additional improvements, obviating the need for a referral at that time.

DOJ's enforcement options are also somewhat limited, unless the transportation entity is privately owned. For public transportation entities, DOJ can pursue enforcement action if DOT refers the entity and, in such cases, DOJ can initiate a lawsuit, seek mediation, or negotiate a consent agreement. As mentioned previously, DOT has referred two cases formally to DOJ for investigation. DOJ can also intervene in existing private suits. For example, DOJ joined a private suit against a large city and reached a consent agreement in which the city agreed to address alleged ADA violations involving its fixed-route public bus systems.
For private transportation entities, DOJ can initiate its own lawsuits, join existing private lawsuits, use mediation, sign settlement agreements, and seek civil penalties, and it has used each of these options. For example, DOJ reached a consent decree resolving its allegation that a private entity providing fixed-route service between Memphis and the Little Rock airport had failed to provide accessible transportation. In another example, DOJ reached a settlement agreement with a large, door-to-door airport shuttle company in which the company agreed to add accessible vehicles to its fleet, train its employees on providing equivalent service, and pay a civil penalty. DOJ officials said that they may increase their use of civil penalties for ADA violations in the future because the ADA has been in effect for 17 years and entities should be familiar with their responsibilities.

In contrast to surface transportation cases involving the ADA, DOT has at least one other enforcement option available in similar situations: the ability to levy monetary penalties. For example, DOT has the ability to levy monetary penalties against airlines that violate the Air Carrier Access Act of 1986, which largely governs accessibility issues in air transportation. DOT has levied penalties against commercial air carriers for violations of this law and has allowed carriers to use a portion of the penalties to improve their compliance. In 2002, for instance, DOT found that Northwest Airlines had violated the Air Carrier Access Act and assessed civil penalties of $700,000 with certain provisions that allowed the airline to offset a portion of the penalties. In this case, Northwest could offset up to $550,000 by taking steps such as increasing the number of wheelchair assistance personnel at airports, purchasing and installing grab bars in airplane lavatories, and establishing an Air Carrier Access Act Quality Assurance Program.
Between 2000 and 2006, DOT imposed approximately $8.4 million in such penalties. Such penalties are also an option for many safety violations: FRA and FMCSA impose civil penalties against freight rail and commercial motor carriers, respectively, for safety violations. FTA and OST officials said that extending this type of enforcement tool to FTA for use against transit agencies would be very useful and would help their ADA compliance efforts. Agency officials indicated that the threat of a fine would encourage compliance generally and would also be useful in addressing relatively minor acts of noncompliance. For example, FTA officials said that during the course of investigating a complaint against a transit agency, the agency agreed there was a problem but refused to correct it. The transit agency understood that the problem was a small one and that it was unlikely that FTA would pursue one of the more extreme enforcement options available. However, if FTA were able to levy a fine in this particular instance, the transit agency would be much more likely to comply.

In a number of instances, compliance has come not through agency enforcement but through private citizens filing lawsuits and negotiating settlements. The ADA authorizes private citizens or their representatives to file suit in cases of discrimination, providing another avenue of oversight for both public and private entities where federal oversight has not resolved problems. In addition, citizens are not required to pursue resolution through complaints prior to filing suit. Lawsuits are not without limitations, however. For example, the ADA does not provide for punitive damages. Also, although the ADA does allow for recovery of legal fees, recent court decisions have made these fees more difficult to obtain.
Lawsuits and settlement agreements reached by people with disabilities have done more than require transportation providers and state and local governments to conform to the requirements of the ADA. For example, a group of passengers in Boston brought suit against the Massachusetts Bay Transportation Authority in 2002 alleging discrimination based on disability. The passengers and the transit agency eventually reached a settlement agreement that includes a commitment by the agency to ensure bus lifts are properly maintained and functional, as required by ADA regulations, as well as a pledge to purchase new low-floor (rather than high-floor) buses that employ ramps instead of lifts, which are often deemed less reliable. Notably, FTA has been monitoring the Massachusetts Bay Transportation Authority for compliance with ADA requirements to announce transit stops and maintain bus lifts since July 2000.

The ADA requires DOT, DOJ, and the Access Board to provide technical assistance that will help transportation providers, businesses, and state and local governments comply with ADA requirements. The agencies have provided this assistance both in regulations and in various types of nonregulatory guidance. Our discussions with officials from state and local transportation agencies indicated, however, that current assistance has several key gaps and that, in some instances, proposed regulations and guidance still leave questions about what they need to do to comply.

DOJ and DOT each issue regulations covering those aspects of the ADA for which they are responsible. These regulations, discussed below, have the force and effect of law. DOJ's regulations incorporate the Access Board's guidelines as standards for accessible design. The regulations provide minimum design standards for the construction and alteration of places of public accommodation, commercial facilities, and state and local government facilities.
Included in these standards are basic design criteria for sidewalks and curb ramps. DOJ’s regulatory standards must, at a minimum, meet the Access Board’s accessible design guidelines. DOJ also issues regulations on nondiscrimination on the basis of disability by public accommodations and in commercial facilities, as well as nondiscrimination on the basis of disability in state and local government services. DOT’s regulations focus on the provision of transportation services by public and private entities and include accessibility requirements as they pertain to vehicles (such as public transit, intercity passenger trains, and commercial buses) and stations. Under the ADA, DOT’s regulatory standards for accessible facilities and vehicles cannot be less stringent than the Access Board’s guidelines. DOT’s regulations also cover nondiscrimination (for example, an entity cannot require that a qualified individual with a disability be accompanied by an attendant) and requirements for complementary paratransit service, such as processes for determining eligibility. DOJ, DOT, and the Access Board also issue official guidance. This guidance does not have the force and effect of law and is intended to provide clarification to assist entities in complying with regulations. For example, DOJ guidance includes information for businesses on accommodating service animals and restriping parking lots, among other things. FTA has issued guidance to assist public transportation agencies in their responsibility to transport passengers who use common wheelchairs. The Access Board has provided guidance to clarify technical requirements for buses, commuter and intercity railcars, and over-the-road bus systems. To coordinate DOT’s disability-related interpretations, guidance, and policies, the Secretary of Transportation established in 2003 a working group known as the Disability Law Coordinating Council. DOT recently proposed codifying the council in regulation. 
For more information about the council, see appendix III. DOJ, DOT, and the Access Board all provide technical assistance through a variety of other sources, such as Web sites, conferences, and outreach through nongovernmental entities (see table 1 for examples). These other informational sources provide state, local, and industry officials with information ranging from the regulations themselves to one-on-one assistance with specific questions. On FMCSA's Web site, for example, commercial bus companies can obtain a summary of DOT's ADA regulations and information about their annual reporting requirements.

Finally, other federal and nongovernmental organizations not specifically named under the ADA also provide technical assistance. For example:
- The Department of Education funds Disability and Business Technical Assistance Centers, which provide training related to the ADA.
- The Department of Health and Human Services supports a nationwide system of state-level organizations that advocate for the rights of individuals with disabilities.
- The American Bus Association, an industry organization, provides a newsletter to its members addressing ADA-related topics and requirements.
- Advocacy organizations such as the Paralyzed Veterans of America and the National Disability Rights Network inform transportation providers and individuals with disabilities about ADA rights and responsibilities.

While a number of public transportation providers and state and local officials with whom we spoke found federal technical assistance sufficient for many of their needs, they identified two key areas in which confusion existed about complying with ADA requirements. These areas were (1) uncertainty about how ADA requirements pertain to emerging issues in public transportation, such as mobility devices that do not fit the definition of a common wheelchair, and (2) lack of clarity about planning for and designing accessible public rights-of-way.
According to some state and local government officials, this uncertainty has made them apprehensive about going forward with efforts to implement accessible rights-of-way, particularly efforts that go beyond the current ADA regulations, such as installing accessible pedestrian signals. DOT is in the process of updating guidance on the emerging issues in public transportation. For public rights-of-way, however, federal agencies are not as far along in addressing areas of confusion.

DOT has identified emerging areas in public transportation that it is addressing through an NPRM and anticipates finalizing the rule by the beginning of 2008. These issues include the increasing use of larger, heavier mobility devices on public transportation and the potential effect on DOT's current definition of a common wheelchair; requirements for public transit agencies providing paratransit services; and platform requirements for intercity and commuter rail stations. Prior to issuing the NPRM, DOT promulgated guidance on these issues in 2005; however, a number of public transportation providers and national industry groups with whom we spoke noted that the industry was unsure about how to implement some of the guidance. For example, as more people use larger wheelchairs, scooters, and similar devices, public transportation providers with whom we spoke are unclear about how to accommodate these devices because current regulations on wheelchairs and mobility devices do not address devices that fall outside the definition of a common wheelchair. Further, a number of transportation providers considered DOT's 2005 guidance on how transit vehicles should transport two-wheeled, self-balancing Segway® personal transportation devices to be unclear.
Specifically, DOT guidance states that a transportation provider is not required to permit anyone to bring onto a vehicle a device that is too big or that is determined to pose a direct threat to the safety of others; however, the guidance also directs transportation providers to accommodate Segways when used as a mobility device by a person with a disability, subject to these same limitations. Thus, to address these and other concerns, DOT issued an NPRM soliciting public comment on this topic, as well as on paratransit services and level boarding for rail station platforms.

Advocacy and industry groups and state and local governments told us that current federal regulations and guidance have gaps or are unclear on (1) ADA-required transition plans for assessing the accessibility of state and local governments' structures, including sidewalks and curb ramps, and (2) technical requirements for installing accessible public rights-of-way.

Many Jurisdictions Lack Information about Transition Plans for Correcting Public Rights-of-Way Deficiencies or Are Unaware They Have to Develop a Plan

ADA regulations require state and local governments to assess local accessibility and draft a transition plan for upgrading the public rights-of-way within their jurisdictions. Current regulations require any public entity that employs 50 or more persons to develop such a plan. If a public entity has responsibility or authority over streets, roads, or walkways, its transition plan must include a schedule for providing curb ramps, or other sloped areas, where pedestrian walks cross curbs, giving priority to walkways serving state and local government offices and facilities, transportation, and places of public accommodation.
At a minimum, the plan must identify physical obstacles that might limit the accessibility of programs or activities, describe in detail the methods that will be used to make facilities accessible, specify the schedule for taking identified steps, and indicate the official responsible for implementing the plan. However, gaps exist in the current federal regulations and guidance because they do not specify how to include that information in the plans or when a jurisdiction that has a plan should update it. The American Association of State Highway and Transportation Officials surveyed state departments of transportation and concluded that considerable confusion exists among states about when and how to update transition plans. In addition, several members of an industry association (representing different states and localities) told us that jurisdictions are confused about what is supposed to be included in a transition plan and indicated that more specific federal guidance would be helpful. For example, one state transportation official mentioned that federal guidance was unclear on what data should be collected for ADA transition plans and did not address field-level implementation of ADA requirements for transition plans. Without proper regulations and accompanying guidance from the federal government, states and localities face challenges creating these plans, or may not create them at all. DOJ's Project Civic Access reviews most commonly reveal that the responsible government has not established an ADA transition plan and the accompanying policies and procedures necessary to ensure the installation of curb ramps at public rights-of-way. Absent such plans, states and localities may neither assess the status of the accessibility of their public rights-of-way nor develop a schedule for updating curb ramps and ensuring access to public services and programs, leaving themselves vulnerable to private lawsuits or federal compliance actions.
Furthermore, without transition plans, it is difficult or impossible for the federal government to assess compliance and collect information or data from state and local governments with regard to the accessibility of their public rights-of-way. FHWA has recognized the lack of information on ADA-required transition plans and other aspects of civil rights requirements and plans to complete civil rights program assessments of all state departments of transportation by the end of fiscal year 2008. This project should, among other things, enable FHWA to gauge the number of states that have developed and implemented a transition plan. The program assessments are designed to assess how state departments of transportation implement ADA requirements and ascertain the extent to which they are involved with local governments’ ADA implementation on projects and programs that are jointly funded by FHWA and a state department of transportation. While these program assessments are a first step, FHWA will not assess the content of state transition plans or determine whether the state transportation agencies are in compliance with the ADA. The assessments will also not address whether local governments throughout the country have created transition plans. FHWA has also drafted a tool kit for its division offices and state departments of transportation. The tool kit will assist staff tasked with compliance and oversight activities for ADA requirements, including oversight of transition plans for state departments of transportation. According to FHWA, this tool kit is under review by FHWA’s Office of Chief Counsel and is not yet available publicly. In addition, FHWA is involved in a federally funded research project by the National Cooperative Highway Research Program focusing on the development of a guide for updating ADA transition plans for state departments of transportation. 
This project is aimed at helping states translate applicable laws and guidance into field-level implementation of ADA requirements for transition plans and related requirements and is anticipated to be completed in May 2008. Technical Standards for Installing Public Rights-of-Way Are Not Finalized In addition to the transition plans required by the ADA, the Access Board developed ADA Accessibility Guidelines (ADAAG) for installing accessible structures and devices such as curb ramps for sidewalks. These guidelines serve as the basis for DOJ and DOT's current ADA regulations, originally published in 1991. However, ADA accessibility requirements in current regulations focus primarily on accessibility standards for building facilities, not public rights-of-way. In June 1994, the Access Board published an interim rule containing more information on public rights-of-way, among other accessibility topics, to supplement the ADA accessibility requirements. As the transportation community and others reviewed these guidelines, however, they were concerned about the magnitude of the work that would be needed to meet the public rights-of-way guidance. As a result, the Access Board withdrew the sections of the rule pertaining to public rights-of-way and began conducting education and outreach activities to inform the transportation industry about accessibility of public rights-of-way. Current ADA accessibility requirements, as codified in regulation, do not contain the Access Board supplement on public rights-of-way. In 1999, the Access Board resumed its efforts to develop final guidelines for public rights-of-way and, nearly a decade later, work continues on these draft guidelines. After soliciting input from a wide variety of stakeholders, the Access Board released another draft of its public rights-of-way guidelines in 2002 for public comment and received an extensive public response.
The board considered these comments and, in 2005, published revised draft guidelines for purposes of gathering additional information for an economic impact analysis, which is still under way by the Access Board. The new guidelines are expected to cover such subjects as pedestrian access to sidewalks and streets, including crosswalks, curb ramps, street furnishings, pedestrian signals, parking, and other parts of the public rights-of-way. They will likely also address issues such as access at street crossings for pedestrians who are blind or have low vision, wheelchair access to on-street parking, and constraints posed by space limitations, roadway design practices, slope, and terrain. According to Access Board and DOJ officials, the draft guidelines are more consistent with industry standards. The draft guidelines remain a work in progress. The Access Board is still working on the economic analysis, and, once it is complete, the draft guidelines will go out for public comment. As of July 2007, however, the Access Board was not able to provide an estimate for when the guidelines might be finalized. If codified into federal regulations and standards by DOJ, the Access Board draft guidelines would supplement the current ADA accessibility requirements and provide a comprehensive set of regulations for public rights-of-way. Various studies and advocacy and industry groups, as well as officials with whom we spoke, cited the lack of final, specialized standards for public rights-of-way as a problem. Some of their comments and findings are as follows: According to a report by the National Academies of Sciences, improvements to pedestrian accessibility have lagged behind improvements to the rest of the transportation network, in part because no enforceable regulations for making public rights-of-way accessible have been issued. 
Officials with the National Council on Disability said that, absent such enforceable standards, localities continue to erect barriers, such as inaccessible bus stops, intersections without curb ramps or with improperly constructed curb ramps, and barriers blocking sidewalks. Officials with a national industry association with whom we spoke said that localities are uncertain about requirements for and definitions of accessible pedestrian signals. The officials said that localities have a strong incentive to delay adding pedestrian signals until they know what the final guidelines will require. For example, one city is conducting a major construction project downtown to add light rail. In the course of this construction, 60 pedestrian signals will be modified, but the city is unsure how to proceed since accessible pedestrian signals are not defined or covered in current ADA requirements. Industry groups with whom we spoke noted that states and localities may not make an investment in accessibility improvements for public rights-of-way that go beyond current regulations for curb ramps, since the draft guidelines will likely change. Furthermore, officials with whom we spoke identified aspects of current accessibility requirements that are not clear, such as detectable warning requirements for curb ramps. Additionally, industry and advocacy groups and state and local governments said that differences between the draft guidelines, current ADA accessibility requirements, other federal guidelines, and national and state building codes create challenges for state and local governments that are trying to comply with applicable accessibility requirements for public rights-of-way. State and local government officials, as well as officials from advocacy and industry groups, pointed to the lack of finalized comprehensive standards for public rights-of-way as an obstacle to ensuring access to transportation for individuals with disabilities.
FHWA, which implements ADA pedestrian access requirements for federal, state, and local government agencies that build and maintain highways, has provided some guidance, but FHWA officials acknowledge that the effectiveness of the guidance is limited. Furthermore, FHWA directs states and localities to use the Access Board’s draft guidelines as best practices. In the absence of finalized comprehensive standards for public rights-of-way, DOJ and the Access Board have developed guidance on these issues. For example, DOJ has developed an online tool kit for state and local governments to use in identifying and fixing problems in public rights-of-way accessibility. However, according to federal officials, it is difficult to provide effective training and technical assistance for states and localities while Access Board draft guidelines are not final and codified in regulation. Federal officials have acknowledged that the draft guidelines will likely change as a result of the rulemaking process. Congress passed the ADA in part to help people with disabilities have access to transportation, but 17 years later the federal government cannot determine the extent of its success for many transportation modes due to a lack of reliable data. While some improvements have been made in surface transportation accessibility, further advances are also hindered, in part, by confusion among transportation providers and local governments about some of the more complex and emerging aspects of accessibility requirements and among federal agencies about their respective roles and responsibilities. For state and local governments, a major source of confusion is the ADA’s requirement to develop and update transition plans that inventory the accessibility of public rights-of-way and identify steps and time frames for addressing deficiencies. 
Industry associations and state and local transportation agencies that we interviewed were unsure what should be included in the plan, what a successful plan would look like, and how often to update the plan. The problem is persistent enough that the National Cooperative Highway Research Program, which is funded by FHWA and state transportation agencies, is conducting a study to develop a tool to help state transportation agencies with these plans. FHWA is also conducting program assessments of state transportation agencies to determine whether they have completed transition plans. There is also confusion among DOT's modal administrations about what steps DOT is able to take to enforce the ADA. DOT established a Disability Law Coordinating Council to coordinate the agency's disability-related guidance and policies, but this mission does not include coordination of oversight and enforcement efforts. FTA and DOJ crafted a memorandum of understanding that set out their respective responsibilities for shared enforcement of the ADA, and this was successful in that it helped develop working relationships that have furthered oversight and enforcement of accessibility requirements in public transportation. However, FMCSA does not conduct ADA compliance reviews or investigate complaints for commercial buses and has indicated that it cannot withhold or revoke a company's operating authority for noncompliance with the ADA. A federal court directed FMCSA to reexamine the statute and give the issue further consideration. In addition, although FMCSA and DOT's Office of the Secretary have not gathered and reviewed information on the accessibility of demand-response and fixed-route commercial bus service and determined whether to retain or modify the ADA regulations governing such buses, as required, they recently developed a preliminary strategy for doing so in response to our preliminary findings.
FRA also has had limited involvement in ADA enforcement and has not conducted periodic compliance reviews of Amtrak, as required by regulation, but FRA officials indicated that they may do so in the future. Amtrak’s delay in conducting station assessments, including providing information on the steps necessary to bring them into compliance with the ADA by July 2010, hinders FRA’s ability to adequately oversee intercity passenger rail accessibility. When DOT does identify ADA violations—whether by local transit agencies, Amtrak, or other entities—DOT primarily relies on informal negotiations and reminders to attempt to obtain compliance with the ADA. In many cases, these informal methods are sufficient to correct the problem. Sometimes, however, an entity refuses to comply due to competing priorities for funds, lack of expertise, or other reasons. The ADA has been in effect for more than 17 years, and federal officials are less sympathetic to such reasons than they used to be. Other than the informal methods, DOT’s other enforcement options are withholding grant funds or pursuing litigation through DOJ. However, DOT has rarely used these options because they are too drastic or lengthy to effectively address the problem in many instances. There is very little middle ground available. Civil penalties are a tool that DOT uses to achieve other goals, but it does not have authority to use them for ADA violations. DOT’s Office of the Secretary already has experience in administering civil penalties against air carriers for violations of the Air Carrier Access Act. Likewise, FRA and FMCSA impose civil penalties against freight rail and commercial motor carriers, respectively, for safety violations. Similar authority for ADA violations would give DOT’s oversight and enforcement efforts more weight and help ensure that accessibility is a higher priority for public and private surface transportation providers and local governments. 
To improve the availability of data on ADA compliance and improve FRA's ability to oversee Amtrak's progress in implementing the ADA, we recommend that the President of Amtrak continue to report to FRA on the status of Amtrak's review of the accessibility of its stations. As required by Amtrak's fiscal year 2006 and 2007 grant agreements, this report should include data for each station and actions required to bring it into compliance, as well as an overall schedule for bringing all Amtrak stations into compliance. Given gaps in data on the status of ADA compliance of commercial buses, we recommend that the Secretary of Transportation direct the Administrator, FMCSA, and DOT's Office of the Secretary to implement their plan to gather, review, and verify information on demand-response and fixed-route commercial bus service and determine whether to retain or modify the existing regulations, as required by DOT's regulations. To reduce confusion among state and local entities regarding ADA-required transition plans, we recommend that the Secretary of Transportation direct the Administrator, FHWA, to work with DOJ to use the results of both FHWA's program assessments and the National Cooperative Highway Research Program's study to develop and disseminate guidance for creating and updating transition plans. To enhance DOT's oversight of ADA compliance, we recommend that the Secretary of Transportation take the following two actions: develop criteria for determining circumstances under which DOT would withhold all or part of a grantee's federal funds for instances of ADA noncompliance, which could streamline the process, and direct the Administrator, FRA, to conduct the periodic reviews of Amtrak's ADA compliance that are required by regulation.
To increase coordination and communication among DOT’s modal administrations and with DOJ, thereby improving DOT’s ability to oversee and enforce the ADA, we recommend that the Secretary of Transportation direct the Administrators of FHWA, FMCSA, and FRA to enter into formal agreements with DOJ to clearly delineate responsibility for enforcing the provisions of the ADA pertaining to surface transportation and public rights-of-way. Furthermore, we recommend that the Secretary of Transportation, through the Office of the Secretary, establish or designate a formal working group or other coordinating body (such as the Disability Law Coordinating Council) to ensure a coordinated effort within DOT for overseeing and enforcing the ADA, including identifying ways to improve data for measuring compliance. To expand the range of options available to DOT modal administrations for enforcing the ADA for surface transportation and public rights-of-way, we recommend that the Secretary of Transportation develop a legislative proposal that would give DOT the authority to impose civil penalties for ADA violations. We provided a draft of this report to DOT, DOJ, the Access Board, and Amtrak for their review and comment. DOT and DOJ provided oral comments and agreed with our findings and conclusions. Further, DOJ agreed with our recommendations, and DOT agreed to consider them. The Access Board provided oral comments and agreed with the report’s findings. Amtrak provided written comments (see app. IV) and stated that our recommendations regarding enhancing DOT’s oversight and enforcement options would not be effective in cases where federal guidance was unclear and funding is not available to meet the technical requirements. DOT, DOJ, and Amtrak also provided technical comments via e-mail, which we incorporated throughout the report as appropriate. Specific comments on the report as well as our responses follow. 
DOT officials stated that the mission of an existing body, the Disability Law Coordinating Council, which coordinates department regulations, could potentially be expanded to coordinate oversight and enforcement of the ADA. We included the council in the recommendations. DOJ officials asked that we clarify DOJ and DOT's statutory and regulatory authority, and they provided additional examples of DOJ's activities in ADA enforcement. We made changes to reflect these comments. Finally, Amtrak stated its commitment to making its railcars and stations accessible to passengers with disabilities and compliant with the ADA. It also delineated three concerns impeding and increasing the cost of Amtrak's progress in constructing and renovating stations. First, Amtrak officials indicated DOT's notice of proposed rulemaking on platform heights could require considerable changes to platform design, but they are uncertain of when these rules will become final and, if they do, how the entities affected—including freight railroads—will be able to address these requirements. Second, they indicated the proposed rules are unclear regarding who is responsible for ADA compliance in areas where different public and private entities own stations. Finally, they stated these potential requirements are expensive, especially in the face of Amtrak's funding difficulties. They concluded that many technical, ownership, and funding issues are involved in addressing ADA compliance and that, as a result, our recommendations that DOT clarify situations under which it can withhold grant funds and consider asking for the ability to assess civil penalties are likely to be ineffective for Amtrak without more funding and clearer federal requirements. We added further information clarifying Amtrak's difficulties in the report. We did not revise our recommendations since they apply to many situations beyond this one, such as commercial buses and public transit.
Also, we believe that additional data and federal oversight of all modes of surface transportation, including Amtrak, would be beneficial in ensuring continued progress in meeting the accessibility goals of the ADA. We are sending copies of this report to interested congressional committees, the Secretary of Transportation, the Attorney General, and other interested parties. We also will make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff has any questions about this report, please contact me at (202) 512-2834 or siggerudk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. The state and local transportation providers and government agencies that we interviewed said that they used a variety of federal, state, and local funding sources—as well as farebox revenues—to help them comply with the surface transportation provisions of the Americans with Disabilities Act of 1990 (ADA). The federal funding sources are listed in table 2. In addition, DOT recently implemented the New Freedom Program, which is a formula grant program designed to support new public transportation services and public transportation alternatives beyond those required by the ADA. Congress apportioned $81 million for this program for fiscal year 2007. This is a new program, and we reported in July 2007 that few governors had designated entities to receive the funds, and FTA had awarded few grants to date. 
This report addresses the following three objectives: (1) what is known about the extent of Americans with Disabilities Act of 1990 (ADA) compliance for surface transportation and public rights-of-way, (2) what difficulties, if any, the federal government faces in overseeing and enforcing compliance with the ADA, and (3) the sources of federal technical assistance that are available to help public transportation providers, businesses, and state and local governments comply with ADA requirements and what gaps, if any, exist. Surface transportation, for the purposes of this report, includes public transportation (such as buses, subways, trolleys, and commuter rail), ADA-complementary paratransit (provided within 3/4 of a mile of a bus route or rail station, at the same hours and days as fixed-route transit, for no more than twice the regular fixed-route fare), intercity passenger rail (National Railroad Passenger Corporation, known as Amtrak), intercity buses, and privately operated transportation that is open to the public (such as taxis and airport shuttles). Maritime and aviation are excluded from our scope, as are school transportation and the Alaska Railroad. To describe what is known about the extent of ADA compliance for surface transportation and public rights-of-way, we reviewed and analyzed relevant portions of the ADA, as well as related federal regulations and guidance. We also reviewed the literature on transportation accessibility, such as the National Council on Disability’s reports on the status of compliance with the ADA, and interviewed federal officials from the U.S. Architectural and Transportation Barriers Compliance Board (Access Board); the U.S. Department of Justice’s (DOJ) Civil Rights Division; and the U.S. 
Department of Transportation’s (DOT) Office of Civil Rights and modal administrations, including the Federal Highway Administration, Federal Motor Carrier Safety Administration, Federal Railroad Administration, and Federal Transit Administration. In addition, we interviewed officials from the National Council on Disability and Amtrak. We obtained data from Amtrak and the Federal Transit Administration’s National Transit Database on the number of accessible vehicles and stations. To assess the reliability of these data, we spoke with agency officials about data quality control procedures and reviewed relevant documentation. We determined the data were sufficiently reliable for the purposes of this report. We also obtained accessibility data from reports by DOT’s Bureau of Transportation Statistics and the National Council on Disability, as well as from the National Organization on Disability’s 2004 Harris Survey. Given that these data were used for background purposes, we did not assess their reliability. To identify any difficulties the federal government faces in overseeing and enforcing compliance with the ADA, we interviewed Access Board, DOJ, and DOT officials (including officials from one of the Federal Transit Administration’s regional offices) and analyzed documentation regarding oversight requirements and activities, including information on the type and frequency of activity, processes by which entities are selected for review or investigation, and resulting enforcement activities, if applicable, as well as the processes for receiving, processing, and responding to complaints. We also obtained and analyzed DOJ and DOT’s ADA-related complaint data. In addition, we reviewed DOJ and DOT’s annual reports, strategic and performance plans, and other related documents to identify agency and program goals, performance targets, and data collected for performance indicators related to improving ADA compliance. 
To describe the sources of available federal technical assistance and determine whether any gaps exist, we interviewed and obtained documentation from Access Board, DOJ, and DOT officials and key technical assistance providers (such as Easter Seals Project ACTION). We also obtained and analyzed information on the processes by which federal agencies determine how to target this assistance. To address all three of the objectives, we also interviewed 14 national industry associations and disability organizations (see table 3) to obtain their perspective on what is known about ADA compliance; federal technical assistance, including any potential gaps in such assistance; and federal ADA-related oversight and enforcement activities. To illustrate experiences that transportation providers and state and local governments have had with federal ADA-related technical assistance and oversight and enforcement activities, we supplemented the information from our federal interviews and documentation with interviews with officials in eight cities. The interviews included officials from 2 state departments of transportation, 11 local transportation agencies, 6 private transportation providers, 4 local governments, 4 centers for independent living, 2 technical assistance centers, and 2 local disability advocacy groups. We selected the eight cities to obtain diversity in the following criteria: Experience with federal ADA oversight and enforcement processes—We identified cities in which public transportation providers or government entities had been subject to federal oversight and enforcement processes, including FTA compliance reviews and DOJ Project Civic Access reviews. We also included transportation providers (public and private) or government entities listed in DOJ’s complaint database, those with whom DOJ had negotiated a consent decree or settlement agreement, or those whom FTA had investigated in response to a complaint and issued a letter of finding. 
Population—We selected a mixture of urbanized areas with very large populations (greater than 1 million), large populations (200,000-1 million), and small populations (50,000-199,000), as defined by FTA. Geographic diversity—We selected cities from around the United States. Other criteria—We also selected cities involved in additional transportation accessibility areas, including both National Organization on Disability Accessible America Award winners or runners-up in 2005 and 2006, and parties to private lawsuits identified through Internet searches, ADA-related literature, and our federal and national interviews. Table 4 lists the eight cities that we selected on the basis of these criteria and the agencies that we interviewed. The results of these interviews cannot be used to make inferences about the entire population because the cities were selected from a nongeneralizable sample. However, we determined that the selection of these cities was appropriate for our design and objectives and that the selection would generate valid and reliable evidence to support our work. We conducted this performance audit from November 2006 through July 2007 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In March 2003, the Secretary of Transportation established a working group known as the Disability Law Coordinating Council to coordinate the Department of Transportation’s (DOT) disability-related interpretations, guidance, and policies. 
The council is led by the Office of General Counsel and includes representatives from the Federal Highway Administration, Federal Motor Carrier Safety Administration, Federal Railroad Administration, Federal Transit Administration, and National Highway Traffic Safety Administration. Its purpose, according to DOT officials, is to coordinate DOT's disability-related regulations and ensure that guidance and interpretations are consistent among DOT offices and consistent with DOT regulations that implement the Americans with Disabilities Act of 1990 (ADA), among other acts. It meets once a month for members to discuss what each modal administration is doing, uncertainties or questions that have arisen, and areas where additional guidance would be useful. The council conducts its business informally, without formal agendas, minutes, or notes from its meetings. DOT proposed to codify the council's role in its February 2006 notice of proposed rulemaking. DOT states that the proposed regulatory change would codify DOT's procedure with regard to the council and provide better notice to the public regarding the council's actions. The proposal has generated some controversy, however. For example, one major industry association has expressed concern that DOT's proposal does not discuss what authority the council would have to interpret the ADA and implement regulations and what balance would be struck between the council's and FTA's authority. Appendix IV: Comments from the National Railroad Passenger Corporation (Amtrak) In addition to the individual named above, other key contributors to this report were Catherine Colwell, Assistant Director; Ashley Alley; Jean Cook; Catherine Kim; Jessica Lucas-Judy; Stan Stenersen; and Travis Thomson.
The Americans with Disabilities Act of 1990 (ADA) provides people with disabilities the legal right to access transportation and public rights-of-way, including sidewalks and street crossings. The Department of Transportation (DOT) and Department of Justice (DOJ) share responsibility for overseeing ADA compliance. GAO was asked to review federal oversight and enforcement of ADA compliance, including (1) what is known about compliance, (2) difficulties the federal government faces in overseeing and enforcing compliance, and (3) the sources of federal help and any gaps in that help. GAO's work encompassed a wide range of federal agencies and other entities, such as industry associations, transportation providers, and disability advocacy groups, as well as detailed reviews in eight cities across the country. While data indicate accessibility is improving for public transit, the extent of ADA compliance for other modes of transportation and public rights-of-way is unknown due to the lack of reliable data. For example, there are no national data on compliance with requirements for ADA paratransit--transit service that complements bus or rail transit. The Federal Motor Carrier Safety Administration (FMCSA) solicits compliance data from registered commercial bus companies, but the response rate is low (13 percent in 2006), and the agency has not verified or analyzed the data. In other instances, such as the accessibility of Amtrak's train stations, data are still being developed. Federal agencies face three main difficulties overseeing and enforcing compliance. First, they differ greatly in the degree to which they have an oversight framework in place. For example, the Federal Transit Administration (FTA) has a memorandum of understanding in place with DOJ specifying each agency's responsibilities for public transit, while the Federal Railroad Administration (FRA) and Federal Motor Carrier Safety Administration have no formal mechanism for coordinating with DOJ. 
Second, federal agencies' lack of data about compliance limits DOT's ability to target its oversight and enforcement efforts. Only the Federal Transit Administration uses data in this manner. Third, DOT officials regard their enforcement options, such as withholding grant money, as lengthy and complex processes that would not be undertaken lightly. DOT officials said the authority to impose fines--an option they lack--would be more useful. Federal agencies provide a variety of technical assistance to help entities comply with the ADA, but gaps in regulations and guidance exist. For example, one gap involves a requirement for local governments to develop plans for identifying and correcting accessibility problems with public rights-of-way. As a result, GAO found confusion about which entities needed to develop the plans and how to use and update plans once they were developed. DOJ officials said most localities had not developed such plans, leaving themselves open to private lawsuits and federal enforcement action.
Given the scope and variety of federal activities, the federal budget is inevitably complex. This is particularly seen in the federal budgetary treatment of receipts. The 1967 President’s Commission on Budget Concepts recommended a dual system of accounting for federal receipts. The Commission recommended that receipts from activities which were essentially governmental in nature, including regulation and general taxation, be reported as receipts, and that receipts from business-type activities “offset to the expenditures to which they relate.” The Commission recommended this system so that budget totals could present a clear picture of the extent of governmental activity. In practice, however, the distinction was never sharp, as evidenced by the fact that revenues from business-type transactions, termed “offsetting collections,” are not all made available to agencies in the same way. Collections credited to appropriation or fund accounts go directly to expenditure accounts. Here, legislation requires that collections be credited to an appropriation or fund account and offset spending in the account without further legislative action. Collections are typically credited to revolving funds when they are the main source of financing and are permanently appropriated to fund business-like activities, such as the Postal Service. However, offsetting collections to appropriation or fund accounts are not limited to business-like or self-supporting activities. Offsetting receipts are required by law to be deposited into receipt accounts. Additional congressional action is necessary to move these into, most often, special or trust fund expenditure accounts. Offsetting receipts offset budget authority and outlays, but at a level other than the expenditure account. The U.S. Fish and Wildlife Service’s (USFWS) migratory bird conservation activities are funded through appropriated offsetting receipts.
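The two receipt tracks just described differ chiefly in when the money becomes available for obligation. The sketch below models that difference; all names and amounts are hypothetical and not drawn from the report's data.

```python
# Illustrative model of the dual treatment of fee income described above.
# All names and amounts are hypothetical.
from dataclasses import dataclass

@dataclass
class FeeIncome:
    amount: float                  # $ millions collected during the year
    credited_to_account: bool      # True: offsetting collection credited to an
                                   # appropriation or fund account
    appropriated: float = 0.0      # amount the Congress has made available
                                   # from a receipt account, if applicable

def available_for_obligation(fee: FeeIncome) -> float:
    """Amount the agency may obligate without further congressional action."""
    if fee.credited_to_account:
        # e.g., a revolving fund such as the Postal Service's
        return fee.amount
    # Offsetting receipts sit in a receipt account until appropriated.
    return min(fee.amount, fee.appropriated)

postal_like = FeeIncome(amount=50.0, credited_to_account=True)
receipt_like = FeeIncome(amount=50.0, credited_to_account=False, appropriated=30.0)
```

Under this sketch, the first agency may obligate all $50 million it collects, while the second may obligate only the $30 million the Congress has appropriated from its receipt account.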
Whatever the budget accounting, the Congress grants agencies authority to spend offsetting collections either through permanent or current appropriations. For example, the Congress permits some agencies to obligate fees credited directly to appropriation or fund accounts without further congressional action. In this report, we refer to this type of permanent authority as spending authority. In other cases, the Congress requires some agencies to obtain budget authority through current appropriations before spending offsetting receipts. In this report, we refer to this budget authority as an appropriation. To better understand differences in how offsetting collections for business-type activities are treated in the budget process and how they have fared recently, we identified a universe of 27 agencies to review that rely on fees as a source of funds. We defined fee-reliant agencies using the following criteria: (1) fees from the public must be used to support the agency that generated the fee, (2) services, goods, or benefits must be provided in exchange for fees and the exchange should be closely linked in time, and (3) new fees from the public must represent 20 percent or more of the agency’s gross outlays less offsetting collections from federal sources averaged over fiscal years 1991 through 1996. To identify changes in agency reliance on user fees since the passage of BEA, we used OMB actual year data to construct a series of analyses that described trends in budget authority and collections for fee-reliant agencies. Data used in this report cover fiscal years 1991 through 1996. OMB codes also allowed us to track changes in discretionary versus mandatory classifications of agency funding and shifts from current to permanent budget or spending authority. To review the classification and treatment of fees in budget accounts, we used OMB codes created for the Administration’s annual budget request.
These codes allowed us to identify (1) fees from the public and their classification as an offsetting receipt or an offsetting collection credited to an appropriation or fund account, (2) the type of expenditure account the fee is credited to, and (3) the fee’s availability for obligation in a given fiscal year. To identify issues for consideration in the future design and management of such fees, we conducted interviews with budget officials at CBO, OMB, and 6 of the 27 agencies we identified as fee-reliant. This report discusses user fees in a budget context and not from a financial management perspective. Issues related to reporting of fees in financial statements or compliance with standards, such as OMB Circular A-25, User Charges, and the Statements of Federal Financial Accounting Concepts and Standards No. 4, Managerial Cost Accounting Standards, are not addressed in this report. Details of our scope and methodology are contained in appendix I and related GAO products are listed at the end of the report. Our work was performed in Washington, D.C., between September 1996 and September 1997 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Director of the Office of Management and Budget or his designee. On December 12, 1997, the Deputy Associate Director for Budget Analysis and Systems in the Budget Review Division provided us with comments, which are discussed in the “Agency Comments and Our Evaluation” section. The agencies we reviewed increased the overall amount of fees they collected and these fees constituted a larger proportion of their budgets in fiscal year 1996 than in fiscal year 1991. Fee collections among the 27 agencies we surveyed totaled $74.9 billion in fiscal year 1996. As noted earlier, federal user fees provided the United States government with $196.4 billion in revenues in fiscal year 1996.
Of this total, the Congress earmarked $154.3 billion for the agencies that generated the fees, while $42.1 billion was not earmarked to specific agencies but was credited to the general fund of the Treasury. Fees collected by the 27 agencies in our review represented 49 percent of the $154.3 billion in earmarked user fees from the public in fiscal year 1996. In fiscal year 1991, these 27 agencies collected $58 billion from the public in user fees to support their activities. By fiscal year 1996, this amount had grown to $74.9 billion. Although these figures are dominated by the Postal Service, which accounted for $13 billion of the $17 billion increase, collections increased 27 percent in real terms between these years. Between fiscal years 1991 and 1996, all agencies we studied either increased or roughly maintained the percent of their budgets funded through user fees. During this time period, the Congress substantially increased the fee-reliance of some regulatory agencies, such as the Federal Trade Commission, the Federal Communications Commission, and the Securities and Exchange Commission. Additional user fee collections appear to have replaced appropriated funds or to have reduced the size of decreases in appropriated funds. Replacement of general fund appropriations can also be seen in that most increases in user fees enacted between fiscal years 1991 and 1996 were designated as discretionary spending for BEA purposes. The classification of these fees and increased use of fee collections to offset discretionary spending has lessened the impact of BEA spending limits on agencies that collect fees. As shown in table 1, 8 out of 27 agencies in our survey showed increased reliance on user fees from fiscal year 1991 to fiscal year 1996. Of the 27 agencies in our review, 15 were fully funded, or nearly so, by fees from the public in fiscal year 1991 and remained so through fiscal year 1996 according to budgetary data.
Of the remaining 12 agencies, 6 substantially increased their reliance on fees from fiscal years 1991 to 1996. The Congress increased the fee reliance of two additional agencies, though not to the extent of the agencies noted above. Four agencies saw the percentage of their budgets funded through user fees remain stable between fiscal years 1991 and 1996. Many of the 15 agencies were fully funded, or nearly so, by fees from the public from fiscal years 1991 to 1996. The Congress authorized substantial fee increases for two of these agencies, the Patent and Trademark Office (PTO) and the Nuclear Regulatory Commission (NRC), in the Omnibus Budget Reconciliation Act of 1990 (OBRA 90). While many of these 15 agencies are primarily regulatory in nature, they also include service agencies, such as the Department of Commerce’s National Technical Information Service (NTIS) and the Postal Service. Several of these 15 agencies are involved in banking or credit regulation, such as the Office of Thrift Supervision and the National Credit Union Administration, among others. These banking and credit regulatory agencies are usually supported through examination or assessment fees on their members. Six agencies in our survey substantially increased their reliance on fees from fiscal years 1991 to 1996. Again, these agencies are primarily regulatory. The increased reliance on user charges among these agencies resulted mainly from legislative changes requiring increased collections for activities such as licenses, filings, and applications. In fiscal year 1991, FCC received less than 1 percent of its new budget authority from user fees. However, the Omnibus Budget Reconciliation Act of 1993 (OBRA 93) increased the fees that FCC charges to cover the cost of the application and licensing of radio stations, telecommunications equipment, and radio operators, so that by fiscal year 1996 user fees made up 71 percent of the agency’s new budget authority.
The fiscal year 1993 Commerce, Justice, State and the Judiciary Appropriation Act increased filing fees charged jointly by the Department of Justice and the Federal Trade Commission (FTC) to review proposed mergers. These fees had originally gone into effect in fiscal year 1990 and covered all costs associated with reviewing proposed mergers that might reduce competition. In fiscal year 1991, FTC received 18 percent of its new budget authority from user charges. With the 1993 fee increases, this grew to 69 percent by fiscal year 1996. In fiscal year 1991, the Securities and Exchange Commission (SEC) received 19 percent of its new budget authority from user charges. Beginning that year, user fees became an increasingly important component of SEC appropriations so that by fiscal year 1996 these fees made up 70 percent of the agency’s new budget authority. The U.S. Customs Service doubled its reliance on fees, as new budget authority from fees grew from 41 percent in fiscal year 1991 to 71 percent in fiscal year 1996. This increase was largely a function of the North American Free Trade Agreement Implementation Act of 1993, which extended the collection of Customs Service user fees through September 2003, increased air and sea passenger collections, and lifted air and sea passenger country exemptions through September 1997. In fiscal year 1991, the Animal and Plant Health Inspection Service (APHIS) received less than 9 percent of its new budget authority from user fees. APHIS’ revenues increased primarily because four programs previously funded with appropriations were converted to user fee funding between fiscal years 1991 and 1993. As a result, by fiscal year 1996 APHIS received 31 percent of new budget authority from user fees.
In addition, the Bureau of Reclamation increased its reliance on user fees between fiscal years 1991 and 1996 due to, among other activities, increased offsetting receipts appropriated to carry out provisions of the Central Valley Project Improvement Act. The Congress increased the reliance of the U.S. Mint and INS on user fees, though not to the extent of the agencies noted above. Although the U.S. Mint’s collections increased, the most significant change was structural. In fiscal year 1996, the U.S. Mint was restructured to operate with a single revolving fund. Although INS fee collections from the public more than doubled between fiscal years 1991 and 1996, from $411 million to $922 million, fees as a portion of INS’s new budget authority increased only slightly due to large increases in general fund appropriations. For four agencies—the U.S. Fish and Wildlife Service, the Minerals Management Service, the Agricultural Marketing Service (AMS), and the Grain Inspection, Packers and Stockyards Administration (GIPSA)—the percentage of their budgets funded through user fees remained stable between fiscal years 1991 and 1996. Most new user fees enacted between fiscal years 1991 and 1996 were designated as offsets to discretionary spending for BEA purposes. In fiscal year 1991, 6 of the 27 agencies, the Postal Service, APHIS, SEC, NRC, INS, and FCC, had 90 percent or more of their total spending classified as discretionary spending. By fiscal year 1996, four additional agencies, PTO, NTIS, FTC, and the Panama Canal Commission, had more than 90 percent of their total spending classified as discretionary spending. Spending for two of these additional agencies, NTIS and the Panama Canal Commission, went from 100 percent mandatory spending in fiscal year 1991 to 100 percent discretionary spending by fiscal year 1996. For NTIS, this occurred when its operations were converted from a trust fund to a self-supporting revolving fund.
For the Panama Canal Commission this change is attributable to the decision made by OMB in fiscal year 1993 not to include fees that offset spending in discretionary accounts in the PAYGO baseline. The Bureau of Reclamation also had a significant increase in the percent of total spending classified as discretionary. In fiscal year 1991, 60 percent of Reclamation’s total spending was classified as discretionary spending, but by fiscal year 1996 this percentage had increased to 86 percent. Of the 27 agencies in our survey, 14 saw an increase in the percent of agency spending classified as discretionary between fiscal years 1991 and 1996, 5 agencies saw a decrease, while 8 agencies showed no change. See table III.1 in appendix III for detailed information on the BEA classifications for each agency in our survey. Table 2 compares growth from current appropriations versus growth from all sources, including fees from the public, from fiscal years 1991 through 1996. See table IV.1 in appendix IV for detailed information on current and permanent appropriations for the 27 agencies in our survey. Twelve fee-reliant agencies that received current appropriations, either from general revenue or special fund appropriations, showed negative growth in appropriated funds. However, once user fees and other permanent budget authority were included, 15 of the 17 agencies shown in table 2 either increased their budgets or had decreases less than the decrease in current appropriations. For example, current appropriations for SEC declined by 8.3 percent between fiscal years 1991 and 1996. However, after fees were included in SEC’s budget totals, the agency’s budget grew 12 percent over this period. Current appropriations for the Bureau of Reclamation declined by 3.6 percent between fiscal years 1991 and 1996. However, user fee revenues helped reduce the effect of this decline on the Bureau’s budget so that it experienced only about a 1.5 percent reduction during this period. 
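The pattern in table 2 above can be illustrated with a short calculation. The base amounts below are hypothetical; only the resulting percentage changes mirror the SEC example cited in the text (an 8.3 percent decline in current appropriations alongside 12 percent total budget growth once fees are included).

```python
# Hypothetical illustration of how fee income can offset a decline in
# current appropriations. Base amounts are invented; the resulting rates
# match the SEC example in the text (-8.3 percent and +12 percent).
approp_1991, approp_1996 = 100.0, 91.7    # current appropriations ($ millions)
fees_1991, fees_1996 = 25.0, 48.3         # user fee collections (hypothetical)

# Growth in current appropriations alone: negative.
approp_growth = (approp_1996 - approp_1991) / approp_1991

# Growth once fees are included in the budget totals: positive.
total_growth = ((approp_1996 + fees_1996) - (approp_1991 + fees_1991)) / (
    approp_1991 + fees_1991
)
```

The same arithmetic explains the Bureau of Reclamation example: a decline in appropriations is partly or wholly made up by growth in fee collections.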
The 27 fee-reliant agencies in our review varied in how their user fees were classified, what kind of account they were deposited into, the legislative controls on the amount or use of these fees, and how they were treated under BEA. As a result, user fees for similar programs were often treated quite differently in the federal budget process. For example, some agricultural inspection fees were netted against their accounts’ budget authority and outlays, which reduced spending counted against BEA discretionary spending limits. Other agricultural fees were appropriated as new budget authority and were counted as discretionary spending. While these fees offset spending, they do so at the department and subfunction levels. In this case, the offset can be used to provide room under the spending caps elsewhere and not necessarily for the program generating the fee. Table 3 lists these 27 agencies and identifies whether, in fiscal year 1996, they received user fees from the public primarily as collections credited to appropriation or fund accounts or as offsetting receipts. As shown in table 3, 18 of the 27 agencies we identified as fee-reliant received fees from the public primarily as collections credited to appropriation or fund accounts. Eight other agencies received fees mainly as offsetting receipts, with two of those being authorized to use fees received during the year to reduce their appropriations. In practice, this treatment is similar to that for agencies that have collections credited to their appropriation or fund accounts. One agency collected fees that were not predominantly one type or the other. The following sections provide more detail on the different budgetary treatment of these user fees. See appendix II for a detailed listing of budgetary characteristics for each of the 27 agencies in our review. As shown in table 3, we identified 18 fee-reliant agencies that received user fees as collections credited to appropriation or fund accounts.
This treatment was typical for agencies that conduct business-type operations and are largely self-supporting through the exchange of fees for goods or services. Most of these agencies are authorized to use these collections without further congressional action. In some cases, this reflects the belief that an agency—such as the Postal Service—might not be able to respond quickly to its customers if it were required to go through the appropriations process. In other cases, the amount collected is too insignificant or unpredictable to be separately appropriated and, instead, is included in a general operating account. Of these 18 agencies, 12 were funded entirely, or mostly so, through public enterprise funds. Fees for two agencies, the Comptroller of the Currency and commissary sales in the DOD Trust Funds, were deposited in trust revolving funds. Collections for four agencies—FCC, FTC, PTO, and SEC—were deposited in general fund expenditure accounts. Collections credited to a general fund expenditure account may be broad-based and cover the agency’s operations as well as specific fee activities. Where the definition of costs charged to users has been broadened to include indirect costs, the agency may be entirely supported by fees. For instance, PTO is fully funded through fees. Of PTO’s $631 million in new budget authority for fiscal year 1996, $82 million was appropriated from a special fund and $549 million from spending authority from offsetting collections. In other cases, fees may supplement an agency’s general fund appropriation. In fiscal year 1996, the Southwestern Power Administration’s operations and maintenance account was funded primarily through $30 million in current appropriations from general revenues, but received an additional $3 million in collections.
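The PTO and Southwestern Power Administration figures above make a convenient arithmetic contrast, sketched here using the fiscal year 1996 amounts cited in the text (in millions of dollars):

```python
# PTO, FY 1996: fully fee-funded, but through two routes.
pto_special_fund = 82.0       # appropriated from a special fund ($ millions)
pto_collections = 549.0       # spending authority from offsetting collections
pto_total = pto_special_fund + pto_collections   # 631.0, all fee-financed

# Southwestern Power Administration, FY 1996: fees merely supplement
# a general fund appropriation.
swpa_appropriation = 30.0     # current appropriations from general revenues
swpa_collections = 3.0        # additional collections
swpa_fee_share = swpa_collections / (swpa_appropriation + swpa_collections)
```

PTO's $631 million thus arrives entirely from fees even though part of it passes through the appropriations process, while fees are only about 9 percent of the Southwestern Power Administration account.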
Although there are exceptions, figure 1 outlines the ways collections are generally credited to appropriation or fund accounts in the budget process, differences in BEA spending categories among these collections, and subsequent BEA scoring treatment. As shown in figure 1, collections credited to appropriation or fund accounts may either (1) offset discretionary spending at the account level or (2) be treated as negative direct spending, which has no effect on discretionary spending limits. As noted above, collections credited to discretionary appropriation or fund accounts are netted against account budget authority and outlays. For example, of the $100 million in new budget authority available to FTC in fiscal year 1996, only $35 million in its Salaries and Expenses account counted towards BEA budget authority limits. The remaining $65 million from offsetting collections was not scored. In a number of instances, the Congress has authorized agencies to obligate collections for program purposes without further congressional action, a form of permanent budget authority. Although fees credited directly to revolving funds are by definition available for the agency’s use, this does not make program size solely a function of fee collections. The Congress can limit the amount available to an agency, typically through provisions in appropriation acts. Nearly a third of the agencies in our study with one or more revolving funds had a limitation on obligations between fiscal years 1991 and 1996. An agency’s use of revolving funds may also be limited through the apportionment process, which limits the amount of obligations an agency or program can incur within a particular time period, program, activity, or project. As shown in table 3, eight of the agencies we identified as fee-reliant received most of their fees from the public through offsetting receipts. 
Among the agencies in our review, most of those that receive all or most of their fees as offsetting receipts are not entirely fee-reliant. Offsetting receipts are generally appropriated to a special, nonrevolving trust or general fund account to support the agency or activity that generated the fee. Agencies derive these fees from many of the same types of transactions as those credited as collections to appropriation or fund accounts. Although there are exceptions, figure 2 outlines the ways offsetting receipts are generally treated in the budget process, differences in BEA spending categories among these receipts, and BEA scoring treatments. As is the case for collections credited to appropriation or fund accounts, offsetting receipts also may be classified as either discretionary or mandatory spending under BEA. If offsetting receipts are classified as discretionary, they may (1) offset discretionary spending at the agency (executive department or independent agency) and subfunction level or (2) be used, in the case of some regulatory agencies, to reimburse general or special fund appropriations similar to collections credited to appropriation or fund accounts. If offsetting receipts are classified as mandatory spending, they are treated as negative direct spending and used to meet PAYGO requirements on the mandatory side of the budget. In those instances where offsetting receipts offset discretionary spending at the agency or subfunction level, they provide room under the caps for additional spending, but not necessarily for the account or agency that generated the fees. For example, fees generated by APHIS are treated as offsets to spending for the Department of Agriculture as a whole. The Congress may—as is true for funding from general revenues—make offsetting receipts available in a permanent appropriation.
Unlike current authority, in which the Congress annually sets a program level that cannot be exceeded without further congressional action, permanent authority is available in the first year as well as in succeeding years. The Congress can also decide to cap the amount of fee receipts available in a given fiscal year by making the budget authority definite—that is, for a fixed amount. If the amount of fees collected exceeds the amount appropriated, then the excess fees are held in special or trust receipt accounts to be made available in subsequent years or deposited in the general fund of the Treasury. Typically, fees that are permanently appropriated are not for a fixed or definite amount; instead, the program retains for obligation whatever fees it generates. In contrast, fees that are currently appropriated are more likely to be for a specific amount. Increasingly, fees generated by some regulatory agencies are used to reimburse or reduce amounts appropriated to an agency. Agencies with this authority fund their activities either through (1) general or special fund appropriations, which their legal authority directs be reduced as fees are collected or (2) fees appropriated to special fund expenditure accounts that are then used to offset spending in another account, such as the agency’s main operating account. Two agencies in our survey—NRC and INS—are required by law to reimburse their appropriations with receipts collected during the fiscal year. This budgetary treatment is similar to the treatment of user fees collected by FTC and FCC, as both agencies’ appropriations are reduced dollar for dollar as collections are credited to their appropriation accounts. Many of these regulatory receipts, including some of those for NRC, were previously classified as governmental and deposited in the general fund of the Treasury.
However, in OBRA 90, the Congress authorized NRC to charge fees to its licensees to cover all its appropriation except for that amount appropriated from the Nuclear Waste Fund. As a result, nearly all of NRC’s budget is financed by fees. For instance, in fiscal year 1996, NRC was appropriated $472.6 million, which was reduced during the fiscal year by $461.6 million in offsetting collections. The result was a net fiscal year 1996 appropriation of $11 million. Although NRC had budget authority equal to $472.6 million, only the $11 million was subject to BEA discretionary spending limits. Although fees for similar activities are treated differently, authority increasingly is being provided to allow fees to offset discretionary spending no matter what the source or purpose of the fee. To illustrate, table 4 shows five accounts that receive fees from the public for their inspection services. Although there are no economic differences in these transactions, there were significant differences in budgetary treatment. For purposes of BEA scoring, fees in all five accounts either reduce budget authority and outlays subject to discretionary limitations or do not affect discretionary spending because they are classified as mandatory. APHIS’ fees for agricultural quarantine inspection are a dedicated source of revenue that are earmarked for the purposes for which they were collected. Since proprietary receipts are offsetting, any fees provided in appropriations language to APHIS are offset against discretionary spending limits. However, because APHIS is part of a larger agency, the benefits of this offset accrue to the Department of Agriculture as a whole and for the agricultural research and services subfunction. NRC receipts are called offsetting governmental because they are governmental receipts by nature but are required by law to offset spending. Originally, some of NRC’s receipts were governmental and not earmarked for NRC’s use.
However, beginning with OBRA 90, NRC and several other regulatory agencies were authorized to recover total agency costs or, in some cases, amounts in excess of agency costs. The appropriations language for these agencies directed that the collections be treated as offsetting. OMB subsequently created the offsetting governmental receipt classification to distinguish these receipts from other types of receipts. AMS’s fees for inspection and cotton and tobacco grading and GIPSA’s grain inspection fees are both authorized to be credited directly to appropriation or fund accounts. GIPSA was classified as a mandatory account at the time of our review, and therefore it and its collections were treated as direct or mandatory spending subject to PAYGO requirements. Recently, OMB reclassified this GIPSA account, applying the BEA scoring rule that, in most cases, an account for which the Congress has provided permanent authority but imposed an obligation limitation will be treated as discretionary. The last example, AMS’s grading of agricultural commodities, is classified as mandatory spending because the fees for these activities are authorized as receipts to a trust fund. These fees and their activities are not subject to discretionary spending limits. Increased reliance on fee collections as an agency’s primary source of funding has implications for federal budgeting and management that may call for a reexamination of the basic principles as well as the actual practices underlying the treatment of fees. Offsetting can inhibit congressional tradeoffs based on the relative merit of programs and can obscure the amount of spending for fee-reliant agencies. The current trend to net fees against spending at the account or agency level offers agencies some stability, even potential growth, not available to most agencies dependent on current appropriations. However, fee-reliant agencies are faced with some unique challenges that make management of these agencies more complex.
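The FTC and NRC examples discussed above, though handled through different account mechanics, reduce to the same scoring arithmetic: only the portion of spending not covered by fees counts against BEA discretionary limits. A minimal sketch, using the fiscal year 1996 figures cited in the text (in millions of dollars):

```python
def scored_budget_authority(gross_ba: float, fee_offset: float) -> float:
    """Budget authority counted against BEA discretionary spending limits
    after fees are netted or reimbursed (amounts in $ millions)."""
    return gross_ba - fee_offset

# FTC: collections credited to its Salaries and Expenses account are
# netted at the account level before scoring ($100M gross, $65M in fees).
ftc_scored = scored_budget_authority(100.0, 65.0)

# NRC: the appropriation is reimbursed dollar for dollar by licensee
# fees collected during the year ($472.6M gross, $461.6M in fees).
nrc_scored = scored_budget_authority(472.6, 461.6)
```

In both cases the agency operates on the gross amount while only the small net figure ($35 million for FTC, about $11 million for NRC) is scored against the caps.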
How user fees are structured reflects competing considerations and the sometimes differing interests of the Congress, OMB, agencies, and the fee payers. Some believe that agencies will have less motivation to collect and users to pay if the fees are not credited to the activity that generated the fee. Others have cautioned that earmarking fees reduces congressional flexibility in making resource decisions and can complicate agency oversight. Still others maintain that the merits of a program—not its ability to generate fees—should influence funding decisions and program size, particularly in the context of continuing reductions in overall discretionary spending. In considering these trade-offs, it is important that budgetary treatment of fees influence, but not drive, resource and management decisions. As scoring differences become more important and distinctions in fee classifications more ambiguous, fee structure is likely to be driven by budget rules that make certain designs most advantageous. The 1967 Commission on Budget Concepts could not have anticipated how discretionary caps would serve to erode the criteria it proposed to distinguish the budgetary treatment of fees. The obvious advantage of netting fees against program spending and the pressures to earmark fees for certain uses make it more likely in today’s budget environment that fees from the public will be treated as offsets to appropriations under BEA caps, regardless of whether the underlying federal activity is business or governmental in nature. An agency is likely to consider offsetting collections that are credited directly to its appropriation or fund account more advantageous than a receipt that offsets at the department and subfunction level. Moreover, inconsistencies have emerged in the budgetary treatment of fees with similar characteristics and purposes, and these differences have important implications for budgetary decisions among these programs. 
Any further examination of fees might include a broader range of user charges not discussed in this report, including possibly excise taxes and those fees collected under authority provided by the Independent Offices Appropriation Act of 1952. Questions could be asked, such as the following: What rules make sense for comparable types of activities? What are the best ways of presenting user fees in a unified budget that will be inclusive and consistent? Finally, recognizing that not all activities are alike, what treatment will provide the most appropriate oversight and control for a particular fee-reliant activity? Fee-reliant agencies face management issues not faced by those that depend primarily on general fund appropriations. Agencies’ reliance on fees may raise expectations that these agencies will be self-supporting, thereby prompting questions about the applicability of market or business-like principles to their funding and operations. For example, dependence on fees may cause these programs to become more vulnerable to cyclical swings in demand and fee income. This in turn raises questions about how to respond to such downturns in income, such as whether general appropriations should be used to subsidize operations if fees decline. If these agencies are expected to operate in a market environment—especially without an appropriations “safety net”—pressures to provide exemptions from government rules and regulations on procurement and personnel may arise. Balancing these with other issues, such as accountability to the Congress and the general taxpayer, will be a continuing challenge. Increasingly, agencies are being asked to provide greater accountability. Where the Congress and fee payers agree on priorities, there may be no conflict between oversight and accountability to the Congress and accountability to fee payers.
However, where congressional and fee payer priorities differ, the agency may be under greater pressure to satisfy the demands of fee payers, particularly if the exchange of fee for service is voluntary. Even where there may be agreement in principle that fees should be charged for an activity, there is the possibility of increased conflict among different payers about the allocation of fees among them. Moreover, few agencies provide purely business-type services. To the extent that fee-reliant agencies also provide services to the general public and do not receive general fund appropriations, fees may have to be set to subsidize non-fee-related costs and activities, which can prompt further conflict between the fee payers and those receiving these broader benefits. In addition, agencies with some fee-funded activities will have to redefine relationships between fee-funded and appropriated activities. Agencies will be faced with the inequities, real or perceived, that different funding sources may create. In addition to any perceived funding imbalances between fee- and appropriations-supported programs, management challenges can arise from differences in their funding status. For example, during government shutdowns, programs with authority to obligate fees without congressional action were among those able to continue operating, while programs and staff funded solely through current appropriations were shut down and furloughed. OMB officials agreed that different budgetary treatments have occurred as agencies have sought and the Congress has enacted laws that allow agencies to use the fees they generate to offset spending. Several comments by OMB officials suggested that offsetting correctly applied, that is, offsetting that results from business-like activities, does not inhibit trade-offs between programs or limit congressional flexibility in decision-making because this type of spending is self-controlling.
These comments assume that it is possible to make clear distinctions between business-like and governmental activities. Although the 1967 President’s Commission on Budget Concepts recommended a dual system of accounting based on these distinctions, such distinctions have been difficult to make in practice. A clear line between governmental and business-type activities is even less likely to be applied in the future given the overwhelming benefits of offsetting under BEA discretionary spending limits. OMB officials also provided a number of technical and clarifying comments, which we incorporated in the report where appropriate. We are sending copies of this report to other interested Members of the Congress and the Director of the Office of Management and Budget. We will make copies available to others on request. Please call me at (202) 512-9142 if you or your staff have any questions. This report was prepared under the direction of Barbara Bovbjerg. Major contributors were Denise Fantone, Tim Minelli, Carlos Diz (Attorney-Advisor), John Mingus, and Paul Yoon (Intern). To better understand the roles fees have had as a funding source since BEA was enacted, we reviewed all agencies in which fees for business or regulatory services to the public provided a significant and continuing source of funding. Twenty-seven agencies met our criteria. We defined our universe of fee-reliant agencies using the following criteria: (1) fees from the public must be used to support the agency that generated the fee, (2) services, goods, or benefits must be provided in exchange for fees and the exchange should be closely linked in time, and (3) budget authority from fees from the public must represent 20 percent or more of the agency’s gross outlays from federal sources averaged over fiscal years 1991 through 1996. This excludes offsetting collections and associated outlays from federal sources. 
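As a rough illustration of the 20-percent screening criterion described above, the sketch below filters hypothetical agencies by their average fee reliance over fiscal years 1991 through 1996. The agency names and dollar figures are invented for illustration and are not the actual OMB data used in this review.

```python
# Hypothetical illustration of the 20-percent screening criterion; all
# agency names and figures are invented, not actual OMB data.

def fee_reliance(fee_budget_authority, gross_outlays):
    """Average ratio of fee-derived budget authority to gross outlays
    across the fiscal years provided (here, FY1991 through FY1996)."""
    ratios = [fees / outlays for fees, outlays in
              zip(fee_budget_authority, gross_outlays)]
    return sum(ratios) / len(ratios)

# (agency, fee budget authority by year, gross outlays by year), $ millions
candidates = [
    ("Agency A", [50, 55, 60, 62, 65, 70], [100, 105, 110, 112, 115, 120]),
    ("Agency B", [5, 6, 5, 7, 6, 6],       [100, 100, 105, 108, 110, 112]),
]

# Keep only agencies whose average reliance meets the 20-percent threshold.
fee_reliant = [name for name, fees, outlays in candidates
               if fee_reliance(fees, outlays) >= 0.20]
print(fee_reliant)  # ['Agency A']
```

In this invented example, Agency A averages roughly half of its outlays from fees and clears the threshold, while Agency B, at about 5 percent, does not.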
To meet these criteria, we excluded (1) agencies that collected fees that were deposited in the general fund of the Treasury instead of being designated for the agency's use; (2) insurance and retirement programs, because of the delay between when a fee is paid and when there is a payout, although many of these programs, such as the Federal Deposit Insurance Corporation, are totally self-supporting; (3) credit programs, because they involve subsidies; (4) agencies that receive fees from the public that are then transferred to another federal agency or a state; (5) agencies that receive fees from the public intermittently; (6) government-sponsored enterprises, such as the Federal Reserve, which receive funding from the public but are classified as private and not included in the federal budget; and (7) agencies that receive most of their fees from other federal agencies, although many of the agencies included have some funding from this source. Certain accounts of the agencies we identified as fee-reliant were excluded from subsequent analyses. Insurance and credit accounts were excluded from Bureau of Reclamation and National Credit Union Administration totals, except for the examination and regulatory fees that are deposited in the Credit Union Share Insurance Fund and transferred to the Operating Fund account. Other DOD Trust Funds includes only those accounts that received offsetting collections from the public for commissary sales. Accounts for gift funds and separation pay were excluded from Other DOD Trust Funds totals. The Funds for Strengthening Markets, Income, and Supply account was excluded from Agricultural Marketing Service totals because most of its funding is transferred to other programs, principally child nutrition. Also, the Universal Service Fund and the Spectrum Auction Program account were excluded from Federal Communications Commission totals.
The 27 agencies span 7 of the 13 appropriations subcommittees: (1) Agriculture and Rural Development; (2) Commerce, Justice, State, and the Judiciary; (3) Defense; (4) Energy and Water Development; (5) Interior and Related Agencies; (6) Treasury, Postal, and General Government; and (7) Veterans Administration, Housing and Urban Development, and Independent Agencies. As used in this report, the term "agency" refers to the grouping of activities shown as "bureaus" or listed as "other independent agencies" in the President's budget request to the Congress. The bureau designation generally corresponds to a subordinate organization in an executive department. Although this structure includes both fee and non-fee programs and activities, we selected this level of aggregation because it is organizationally comprehensive and more readily understood than either of the alternatives, appropriation account or program activity. For example, the U.S. Fish and Wildlife Service is a more recognizable entity than the various accounts, such as Resource Management, Migratory Bird Conservation, and Sport Fish Restoration, that make up the agency. In two cases the bureau designation does not correspond to a single entity: Other DOD Trust Funds, described above, and the Power Marketing Administrations, which include five separate organizational entities. To review the classification and treatment of fees in budget accounts, we used OMB codes created for the President's annual budget request. These codes allowed us to identify (1) fees from the public and their classification as an offsetting receipt or a collection credited to an appropriation or fund account, (2) the type of expenditure account the fee is credited to, and (3) the fee's availability for obligation in a given fiscal year. Using OMB actual-year data, we constructed a series of analyses describing trends in budget authority and collections for these agencies.
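As a minimal sketch of the kind of trend tabulation just described, the snippet below sums new budget authority by agency and fiscal year from coded records. The record layout, classification labels, and amounts are hypothetical stand-ins, not OMB's actual coding scheme or data.

```python
from collections import defaultdict

# Hypothetical coded records: (agency, fiscal year, fee classification,
# new budget authority in $ millions). OMB's actual codes differ.
records = [
    ("Patent and Trademark Office", 1991, "offsetting collection", 350),
    ("Patent and Trademark Office", 1996, "offsetting collection", 520),
    ("Securities and Exchange Commission", 1991, "offsetting receipt", 190),
    ("Securities and Exchange Commission", 1996, "offsetting receipt", 300),
]

# Sum new budget authority by (agency, fiscal year); unobligated balances
# are deliberately left out, as in the trend data described above.
totals = defaultdict(int)
for agency, year, _classification, amount in records:
    totals[(agency, year)] += amount

for (agency, year), amount in sorted(totals.items()):
    print(f"{agency} FY{year}: {amount}")
```

Comparing the FY1991 and FY1996 totals for each agency would then show the direction and size of the trend, which is essentially what the report's agency-level tables summarize.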
Our trend data include only new appropriations and spending authority available to an agency, not funding from unobligated balances. We did not include unobligated balances because the data coding did not distinguish between unobligated balances from fees and those from other sources. Data in this report cover fiscal years 1991 through 1996. OMB's codes also allowed us to track changes in discretionary versus mandatory classifications of agency funding and shifts from current to permanent budget or spending authority. Our observations are based on 6 years of OMB data for those agencies selected. This work describes overall trends in 27 fee-financed agencies but is not generalizable to all agencies with fees. We also conducted interviews in six agencies: Agricultural Marketing Service, National Technical Information Service, Nuclear Regulatory Commission, Patent and Trademark Office, Securities and Exchange Commission, and U.S. Fish and Wildlife Service. [Table: fee-reliant agencies, showing fees from the public as a percent of outlays.] This table includes only those agency accounts that have fees from the public. An agency may have additional accounts that receive only general fund appropriations or that were excluded as noted in appendix I. Revolving funds include public enterprise and trust revolving funds. The Federal Communications Commission's Salaries and Expenses account had a limitation on spending authority from offsetting collections for fiscal year 1996. The Federal Trade Commission's Salaries and Expenses account had a limitation on spending authority from offsetting collections for fiscal year 1996. The Budget Enforcement Act of 1990, as amended, divided spending at the budget account level into two broad categories: discretionary and mandatory. BEA classification is assigned to expenditure accounts within agencies. Some accounts may have both mandatory and discretionary funds, and these are identified separately for BEA scoring purposes.
Legislative changes to mandatory spending enacted in a given fiscal year are required to be deficit neutral in the aggregate. Discretionary spending is held to fixed annual limits. Table III.1 shows the change in classification from fiscal year 1991 to fiscal year 1996 for accounts in the 27 agencies we reviewed. During this time, there was an increase in the percentage of spending classified as discretionary for 14 of the 27 agencies. Four agencies showed decreases in spending classified as discretionary, while spending classifications for nine agencies in our survey did not change. [Table III.1: change in BEA classification, as a percent of funding, grouping agencies by whether the shift from mandatory to discretionary was 25 percent or more, less than 25 percent, or no change; agencies listed include the Animal and Plant Health Inspection Service and the Grain Inspection, Packers and Stockyards Administration.] Most of the change in classification is attributable to a decision made by OMB in fiscal year 1993 not to include fees that offset spending in discretionary accounts in the PAYGO baseline. A technical revision by OMB for the fiscal year 1995 budget (OMB Circular A-11, Preparation and Submission of the Budget Estimates, Sec. 21.2, p. 62 (August 4, 1993)) clarified that collections credited to discretionary appropriation or fund accounts would be classified as discretionary. According to OMB, prior to this change double counting had occurred in preparing federal budgets after enactment of BEA. Collections credited to appropriation or fund accounts were counted as mandatory receipts (because they had permanent spending authority) and, at the same time, were netted against discretionary spending according to budget concept rules.
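The netting just described can be sketched with simple arithmetic. The figures below are hypothetical, loosely patterned on the magnitudes discussed in this report, and the 5 percent sequester rate is an invented assumption, not an actual BEA parameter.

```python
# Hypothetical discretionary account credited with offsetting collections.
gross_budget_authority = 200.0   # total resources in the account, $ millions
offsetting_collections = 140.0   # fees credited directly to the account

# Collections are netted against spending, so only the net amount is
# scored against the BEA discretionary caps.
net_budget_authority = gross_budget_authority - offsetting_collections
print(net_budget_authority)      # 60.0 scored against the caps

# Under a sequester, however, the uniform reduction applies to all
# discretionary resources, including the offsetting collections.
sequester_rate = 0.05            # invented uniform percentage
reduction = gross_budget_authority * sequester_rate
print(reduction)                 # 10.0: computed on the full gross amount,
                                 # not just the 60.0 counted under the caps
```

The asymmetry is the point: the account scores as 60 against the caps, yet a sequester would cut it as if it were 200.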
Although this correction did not change the impact of collections credited to appropriation or fund accounts on discretionary spending, it means that if discretionary spending were ever to exceed the annual caps, all discretionary resources, including these offsetting collections, would be subject to sequestration. For example, if a sequestration had occurred in fiscal year 1996, all of FCC's budget authority in its Salaries and Expenses account would have been subject to sequestration. For FCC this would have been $202 million. However, without the need for sequestration, only $59 million, the net budget authority, is counted as discretionary spending. Current and permanent appropriations refer to the timing of legislative action in making budget authority available to an agency. When budget authority is enacted permanently, it is available until spent. Such authority can be the result of substantive legislation or appropriations acts. When budget authority is enacted as current authority, the appropriations language specifies how long the funds will be available. In general, current appropriations are classified as discretionary and are under the jurisdiction of the appropriations committees and their subcommittees. While there are exceptions, permanent appropriations are more likely to be classified as direct, or mandatory, spending and to be under the jurisdiction of authorizing committees. Table IV.1 shows that from fiscal years 1991 through 1996, spending authorized as permanent increased for 13 of the 27 agencies in our survey. Twelve other agencies did not see changes in the percentage of their funding classified as either permanent or current. The remaining two agencies, GIPSA and APHIS, had small declines in permanently appropriated funding.
[Table IV.1: change in current and permanent budget authority, as a percent of funding, by agency and appropriations subcommittee, with subtotals for all other subcommittee agencies; agencies listed include the Animal and Plant Health Inspection Service and the Grain Inspection, Packers and Stockyards Administration.] NTIS received small appropriations in fiscal years 1993 and 1995. Agency: Under the broadest definition of the term, a department, agency, or instrumentality of the U.S. government (31 U.S.C. 101). However, statutes and regulations often include specific definitions of the term "agency" (or related terms like "executive agency" or "federal agency"). Budget Accounts: Accounts used by the federal government to record outlays (expenditure accounts) and income (receipt accounts), primarily for budgeting or management information purposes but also for accounting purposes. There are six types of federal budget accounts, and all can receive user fees from the public. General Fund Accounts: These accounts are composed of all federal funds not allocated to any other account and are generally credited with collections not earmarked by law for a specific purpose. Some general fund accounts receive earmarked offsetting collections that are credited directly to the appropriation or fund account and are available for use, often without further legislative action. One such account, the Bureau of Reclamation's Water and Related Resources, is credited with offsetting collections from federal and nonfederal sources. Special Fund Accounts: These accounts record receipts collected from a specific source and earmarked by law for a specific purpose. They are essentially trust funds, except not so designated by law. The Fish and Wildlife Service's Land and Water Conservation Fund is an example of a special fund.
Nonrevolving Trust Fund Accounts: These accounts record revenues collected for a specific purpose or for a program designated in law as a trust fund. Nonrevolving trust fund accounts finance programs such as Social Security, Medicare, and Superfund. Public Enterprise Revolving Fund Accounts: These accounts receive funding generated in a continuing cycle of business-type operations, primarily from nonfederal sources. Examples include the Postal Service and the United States Enrichment Corporation. Revolving Trust Fund Accounts: These accounts receive revenues generated in business-type operations and are designated as trust funds by statute. A revolving trust fund finances the bank regulatory activities of the Comptroller of the Currency. Intragovernmental Fund Accounts: These accounts receive primarily federal funding, either from organizations within a department or from other federal agencies, such as working capital. The general fund, special fund, and nonrevolving trust fund accounts each have both a receipt account, which is credited with collections, and an expenditure account, to which appropriations are made and outlays recorded. The three revolving accounts—public enterprise, trust revolving, and intragovernmental fund—are appropriation or fund accounts that are credited directly with collections and do not require a separate receipt account. Budget Authority: Authority provided by law to enter into financial obligations that will result in immediate or future outlays involving federal government funds. The basic forms of budget authority include the following: Appropriations: An act of the Congress that permits federal agencies to incur obligations and to make payments out of the Treasury for specified purposes. An appropriations act is the most common means of providing budget authority. Borrowing Authority: Statutory authority that permits a federal agency to incur obligations and to make payments for specified purposes out of money borrowed from the Treasury or the public.
Contract Authority: Statutory authority that permits a federal agency to enter into contracts in advance of appropriations. Offsetting Collections and Receipts: Authority to obligate and expend the proceeds of offsetting receipts and collections. Budget authority provided in laws other than appropriation acts is termed spending authority. Spending authority includes contract authority, authority to borrow, and entitlement authority for which the budget authority is not provided in advance by appropriation acts. Budget authority may be classified by its duration (1-year, multiple-year, or no-year), by the timing of the legislation providing the authority (current or permanent), by the manner of determining the amount available (definite or indefinite), or by its availability for new obligations. Budget Enforcement Act (BEA): BEA divides spending into two types—discretionary spending and direct or mandatory spending. Discretionary spending is controlled through annual appropriations acts. Direct or mandatory spending is controlled by permanent laws. BEA constrains discretionary spending differently from mandatory spending and receipts. During the period of our review, discretionary spending was constrained by dollar limits ("caps") on total budget authority and outlays for this category for each fiscal year through 1998. In fiscal year 1997, BEA was extended through 2002. Discretionary spending was subdivided further into spending limits for defense, nondefense, and violent crime reduction in fiscal years 1998 and 1999, and discretionary and violent crime reduction in fiscal year 2000. If the amount of budget authority provided in appropriations acts for the year exceeds the discretionary cap on budget authority, or the amount of outlays estimated to result from this budget authority is estimated to exceed the discretionary cap on outlays, BEA specifies a procedure, called sequestration, for reducing discretionary spending.
Under a sequester, spending for most discretionary programs is reduced by a uniform percentage. Special rules apply in reducing some programs, and some programs are exempt from sequester by law. BEA constrains mandatory spending and receipts differently. Laws that would increase mandatory spending or decrease receipts are constrained through "pay-as-you-go" (PAYGO) rules. Under these rules, the cumulative effects of legislation affecting mandatory spending or receipts must not increase the deficit. For a complete description of the Budget Enforcement Act, see chapter 24 of the Analytical Perspectives volume of the Budget of the United States Government Fiscal Year 1998. Discretionary Spending Limits: Under the Budget Enforcement Act, discretionary spending limits, or spending caps, are maximum amounts of new budget authority and outlays for specific categories of discretionary appropriations. Discretionary appropriations are budgetary resources provided in appropriation acts. Fiscal Year: A fiscal year is a 12-month accounting period. The fiscal year for the federal government begins October 1 and ends September 30. The fiscal year is designated by the calendar year in which it ends; for example, fiscal year 1998 is the year beginning October 1, 1997, and ending September 30, 1998. Independent Offices Appropriation Act (IOAA): The purpose of the IOAA, also called the "User Charge Statute," was to distribute the costs of government services to those who received benefits beyond those provided to the general public. The statute gave agencies for the first time broad statutory authority to set fees through administrative regulation. Any fees collected under this authority are deposited in the general fund of the Treasury and not credited to the agency or activity generating the fees. Offsetting Collections and Receipts: All collections by government accounts from other government accounts and any collections from the public that are of a business-type or market-oriented nature.
They are classified into two major categories: (1) offsetting receipts, which are amounts deposited in receipt accounts, and (2) collections credited to appropriation or fund accounts. Offsetting Receipts: Offsetting receipts are amounts deposited in receipt accounts. Offsetting receipts cannot be used without being appropriated. However, a significant portion of such collections (for example, most trust fund offsetting receipts) is permanently appropriated and therefore can be used without subsequent appropriation legislation. The Congressional Budget Act of 1974, as amended by the Budget Enforcement Act of 1990, defines offsetting receipts and collections as negative budget authority and the reductions thereof as positive budget authority. Offsetting receipts are subdivided into three categories: Intragovernmental Transactions are payments into receipt accounts from governmental appropriation or fund accounts. They are treated as offsets to budget authority and outlays rather than as governmental receipts. Offsetting Governmental Receipts are governmental in nature but are required by law to be treated as offsetting. Proprietary Receipts From the Public are collections from outside the government, deposited in receipt accounts, that arise as a result of the government's business-type or market-oriented activities. Among these are interest received, proceeds from the sale of property and products, charges for nonregulatory services, and rents and royalties. Such collections may be credited to general fund, special fund, or trust fund receipt accounts and are offset against budget authority and outlays. In most cases, such offsets are made by agency and by subfunction, but some proprietary receipts are deducted from total budget authority and outlays for the government as a whole. Collections Credited to Appropriation or Fund Accounts: These collections include all revolving funds and some appropriation accounts.
Laws authorize collections to be credited directly to appropriation or fund accounts and may make them available for obligation to meet the account's purposes without further legislative action. However, it is not uncommon for annual appropriations acts to include limitations on the obligations to be financed by these collections. Outlays: The issuance of checks, disbursement of cash, or electronic transfer of funds made to liquidate a federal obligation. Outlays also occur when interest on the Treasury debt held by the public accrues and when the government issues bonds, notes, debentures, monetary credits, or other cash-equivalent instruments in order to liquidate obligations. Also, under credit reform, the credit subsidy cost is recorded as an outlay when a direct or guaranteed loan is disbursed. Outlays during a fiscal year may be for payment of obligations incurred in prior years (prior-year obligations) or in the same year. Outlays, therefore, flow in part from unexpended balances of prior-year budgetary resources and in part from budgetary resources provided for the year in which the money is spent. Pay-As-You-Go (PAYGO): Under the Budget Enforcement Act, the principle that all direct spending and tax legislation enacted after BEA for a fiscal year must be deficit-neutral in the aggregate. If the Congress enacts direct spending or receipts legislation that causes a net increase in the deficit, it must offset that increase by either increasing revenues or decreasing another direct spending program in the same fiscal year. This requirement is enforced by sequestration.
Federal Electricity Activities: The Federal Government's Net Cost and Potential for Future Losses, Volumes I and II (GAO/AIMD-97-110 and GAO/AIMD-97-110A, September 19, 1997).
Intellectual Property: Fees Are Not Always Commensurate With the Costs of Services (GAO/RCED-97-113, May 9, 1997).
Food-Related Services: Opportunities Exist to Recover Costs by Charging Beneficiaries (GAO/RCED-97-57, March 20, 1997).
Airport and Airway Trust Fund: Issues Related to Determining How Best to Finance FAA (GAO/T-RCED-97-59, February 5, 1997).
U.S. Forest Service: Fees for Recreation Special-Use Permits Do Not Reflect Fair Market Value (GAO/RCED-97-16, December 20, 1996).
Airport and Airway Trust Fund: Issues Raised by Proposal to Replace the Airline Ticket Tax (GAO/RCED-97-23, December 9, 1996).
Power Marketing Administrations: Cost Recovery, Financing, and Comparison to Nonfederal Utilities (GAO/AIMD-96-145, September 19, 1996).
Analysis of USDA's Budgets, Fiscal Years 1996-97 (GAO/RCED-96-182R, June 7, 1996).
U.S. Forest Service: Fee System for Right-of-Way Program Needs Revision (GAO/RCED-96-84, April 22, 1996).
Federal Research: Information on Fees for Selected Federally Funded Research and Development Centers (GAO/RCED-96-31FS, December 8, 1995).
Budget Issues: Earmarking in the Federal Government (GAO/AIMD-95-216FS, August 1, 1995).
Federal Lands: Views on Reform of Recreation Concessioners (GAO/T-RCED-95-250, July 25, 1995).
USDA User Fees (GAO/RCED-95-229R, June 23, 1995).
Customs Service: Passenger User Fee Needs to Be Reevaluated (GAO/GGD-95-138, May 22, 1995).
USDA License Fees: Analysis of the Solvency and Users of the Perishable Agriculture Commodities Act Program (GAO/T-RCED-95-135, March 16, 1995).
ATFI User Fees (GAO/AIMD-95-93R, March 10, 1995).
IRS User Fees (GAO/GGD-95-58R, December 15, 1994).
FDA User Fees: Current Measures Not Sufficient for Evaluating Effect on Public Health (GAO/PEMD-94-26, July 22, 1994).
Federal Lands: Fees for Communications Sites Are Below Fair Market Value (GAO/RCED-94-248, July 12, 1994).
Customs Service: Information on User Fees (GAO/GGD-94-165FS, June 17, 1994).
Highway User Fees: Updated Data Needed to Determine Whether All Users Pay Their Fair Share (GAO/RCED-94-181, June 7, 1994).
INS User Fees: INS Working to Improve Management of User Fee Accounts (GAO/GGD-94-101, April 12, 1994).
User Fees for Firearms Licenses (GAO/GGD-94-111R, March 14, 1994).
Futures Markets: A Futures Transaction Fee Is Administratively Feasible (GAO/GGD-94-40, October 28, 1993).
FDA's FOIA Fees (GAO/GGD-93-47R, June 10, 1993).
Forest Service: Little Assurance That Fair Market Value Fees Are Collected From Ski Areas (GAO/RCED-93-107, April 16, 1993).
USDA Revenues: A Descriptive Compendium (GAO/RCED-93-19FS, November 27, 1992).
Nuclear Waste: Status of Actions to Improve DOE User-Fee Assessments (GAO/RCED-92-165, June 10, 1992).
Child Support Enforcement: Opportunity to Defray Burgeoning Federal and State Non-AFDC Costs (GAO/HRD-92-91, June 5, 1992).
U.S. Customs Service: Limitations in Collecting Harbor Maintenance Fees (GAO/GGD-92-25, December 23, 1991).
U.S. Courts: Estimated User Fees to Pay for New Facilities (GAO/GGD-92-8BR, December 10, 1991).
Patent and Trademark Office: Impact of Higher Patent Fees on Small-Entity and Federal Agency Users (GAO/RCED-92-19BR, October 11, 1991).
Rangeland Management: Current Formula Keeps Grazing Fees Low (GAO/RCED-91-185BR, June 11, 1991).
Nuclear Waste: Changes Needed in DOE User-Fee Assessments (GAO/T-RCED-91-52, May 8, 1991).
Nuclear Waste: Changes Needed in DOE User-Fee Assessments to Avoid Funding Shortfall (GAO/RCED-90-65, June 7, 1990).
User Fees: Limited Survey of User Fees at the Departments of Commerce and the Interior (GAO/AFMD-90-53BR, March 23, 1990).
Standards and Technology: Update of Information About Fee Increases for Measurement Services (GAO/RCED-90-63BR, January 12, 1990).
The first copy of each GAO report and testimony is free; additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard are also accepted. Orders for 100 or more copies mailed to a single address are discounted 25 percent.
U.S. General Accounting Office, P.O. Box 37050, Washington, DC 20013
Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC
Orders may also be placed by calling (202) 512-6000, by fax at (202) 512-6061, or by TDD at (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touch-tone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO identified 27 agencies that rely on federal user fees for a significant portion of their budget, focusing on: (1) identifying changes in agency reliance on user fees since passage of the Budget Enforcement Act (BEA) of 1990; (2) describing the ways user fees are structured in the budget, including what budgetary controls govern the availability and use of these fees and how they are treated under BEA; and (3) identifying issues for consideration in the future design and management of user fees. GAO noted that: (1) since the 1990 enactment of BEA, the 27 agencies in its review have increased or maintained their reliance on user fees as a source of funding; (2) several of the regulatory agencies GAO surveyed were given authority to substantially increase user fee collections and to use these fees for program purposes; (3) importantly, most new user fees enacted between fiscal years 1991 and 1996 were authorized to offset discretionary spending; (4) the Congress authorized new user fees either: (a) to maintain program size by replacing general fund appropriations, which may then be used to fund other activities, or (b) to increase program size without increasing budget authority and outlay totals; (5) this growth in new and existing fees does not add to the amount of spending that is scored under BEA discretionary spending limits, but frees up discretionary resources for other purposes; (6) although federal agencies often collect user fees for similar purposes, not all user fees are treated alike in the federal budget; (7) some user charges must be deposited in the general fund of the U.S. Treasury, while others are required by law to provide funding for specific purposes; (8) yet, even when fees are dedicated to the agency or activities that generated the fee, there are differences in when and how the fees are made available to the agency and in how much flexibility agencies have in using the fee revenue; (9) the attempt to distinguish fees collected for the government's business-type activities from those derived from the government's power to tax was always problematic; (10) how fees are categorized has been made increasingly important by the fact that under BEA scoring rules some fees are netted against their accounts' budget authority and outlays and not counted against discretionary spending limits; (11) the disparate treatment of fees, particularly those associated with discretionary spending, raises issues for congressional control, agency management, and competition for limited federal resources; (12) in shifting to a more fee-reliant government, inconsistencies in budgetary treatment of fees with similar characteristics are likely to increase; and (13) unlike agencies that rely primarily on appropriated funds from general revenues, both the Congress and fee-reliant agencies face additional policy and management issues, such as how to: (a) meet the needs for accountability to both the Congress and fee payers, including agreement on priorities and the appropriate assessment of fees, and (b) define the relationship between fee-financed and appropriated activities, particularly if resource disparities between the two groups increase.
U.S. federal financial regulators have made progress in implementing provisions of the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act) and related reforms to restrict future government support and reduce the likelihood and impacts of the failure of a systemically important financial institution (SIFI). These reforms can be grouped into four general categories: (1) restrictions on regulators’ emergency authorities to provide assistance to financial institutions; (2) new tools and authorities for regulators to resolve a failing SIFI outside of bankruptcy if its failure would have serious adverse effects on the U.S. financial system; (3) enhanced regulatory standards for SIFIs related to capital, liquidity, and risk management; and (4) other reforms intended to reduce the potential disruptions to the financial system that could result from a SIFI’s failure. We found that while views varied among market participants with whom we spoke, many believed that recent regulatory reforms have reduced but not eliminated the likelihood the federal government would prevent the failure of one of the largest bank holding companies. Citing recent reforms, two of the three largest credit rating agencies reduced or eliminated “uplift”—an increase in the credit rating—they had assigned to the credit ratings of eight of the largest bank holding companies due to their assumptions of government support for these firms. Credit rating agencies and large investors cited the new Orderly Liquidation Authority, which gives the Federal Deposit Insurance Corporation new authority to resolve large financial firms outside of the bankruptcy process, as a key factor influencing their views. While several large investors viewed the resolution process as credible, others cited potential challenges, such as the risk that multiple failures of large firms could destabilize markets. 
Remaining market expectations of government support can benefit large bank holding companies to the extent that these expectations affect decisions by investors, counterparties, and customers of these firms. First, market beliefs about government support could benefit a firm by lowering its funding costs to the extent that providers of funds—such as depositors, bond investors, and stockholders—rely on credit ratings that assume government support or incorporate their own expectations of government support into their decisions to provide funds. Second, higher credit ratings from assumed government support can benefit firms through private contracts that reference credit ratings, such as derivative contracts that tie collateral requirements to a firm’s credit rating. Finally, expectations of government support can affect a firm’s ability to attract customers to varying degrees. New and higher fees imposed by the Dodd-Frank Act, stricter regulatory standards, and other reforms could increase costs for the largest bank holding companies relative to smaller competitors. Officials from the Financial Stability Oversight Council (FSOC) and its member agencies have stated that financial reforms have not completely removed too-big-to-fail perceptions but have made significant progress toward doing so. According to Department of the Treasury (Treasury) officials, key areas that require continued progress include education of market participants on reforms and international coordination on regulatory reform efforts, such as creating a viable process for resolving a failing financial institution with significant cross-border activities. We analyzed the relationship between a bank holding company’s size and its funding costs, taking into account a broad set of other factors that can influence funding costs. 
To inform this analysis and to understand the breadth of methodological approaches and results, we reviewed selected studies that estimated funding cost differences between large and small financial institutions that could be associated with the perception that some institutions are too big to fail. Studies we reviewed generally found that the largest financial institutions had lower funding costs during the 2007-2009 financial crisis but that the difference between the funding costs of the largest and smaller institutions has since declined. However, these empirical analyses contain a number of limitations that could reduce their validity or applicability to U.S. bank holding companies. For example, some studies used credit ratings, which provide only an indirect measure of funding costs. In addition, studies that pooled a large number of countries in their analysis have results that may not be applicable to U.S. bank holding companies, and studies that did not include data past 2011 have results that may not reflect recent changes in the regulatory environment. Our analysis, which addresses some limitations of these studies, suggests that large bank holding companies had lower funding costs than smaller ones during the financial crisis but provides mixed evidence of such advantages in recent years. However, most models suggest that such advantages may have declined or reversed. To conduct our analysis, we developed a series of econometric models—models that use statistical techniques to estimate the relationships between quantitative economic and financial variables—based on our assessment of relevant studies and expert views. These models estimate the relationship between bank holding companies’ bond funding costs and their size, while also controlling for other drivers of bond funding costs, such as bank holding company credit risk. Key features of our approach include the following:
• U.S. bank holding companies: To better understand the relationship between bank holding company funding costs and size in the context of the U.S. economic and regulatory environment, we analyzed only U.S. bank holding companies. In contrast, some of the literature we reviewed analyzed nonbank financial companies and foreign companies.
• 2006-2013 time period: To better understand the relationship between bank holding company funding costs and size in the context of the current economic and regulatory environment, we analyzed the period from 2006 through 2013, which includes the recent financial crisis as well as years before the crisis and following the enactment of the Dodd-Frank Act. In contrast, some of the literature we reviewed did not analyze data in the years after the financial crisis.
• Bond funding costs: We used bond yield spreads—the difference between the yield or rate of return on a bond and the yield on a Treasury bond of comparable maturity—as our measure of bank holding company funding costs because they are a direct measure of what investors charge bank holding companies to borrow money and because they are sensitive to credit risk and hence expected government support. This indicator of funding costs has distinct advantages over certain other indicators used in studies we reviewed, including credit ratings, which do not directly measure funding costs, and total interest expense, which mixes the costs of funding from multiple sources.
• Alternative measures of size: Size or systemic importance can be measured in multiple ways, as reflected in our review of the literature. Based on that review and the comments we received from external reviewers, we used four different measures of size or systemic importance: total assets, total assets and the square of total assets, whether or not a bank holding company was designated a global systemically important bank by the Financial Stability Board in November 2013, and whether or not a bank holding company had assets of $50 billion or more.
• Extensive controls for bond liquidity, credit risk, and other key factors: To account for the many factors that could influence funding costs, we controlled for credit risk, bond liquidity, and other key factors in our models. We included a number of variables that are associated with the risk of default, including measures of capital adequacy, asset quality, earnings, and volatility. We also included a number of variables that can be used to measure bond liquidity. Finally, we included variables that measure other key characteristics of bonds, such as time to maturity, and key characteristics of bank holding companies, such as operating expenses. Our models include a broader set of controls for credit risk and bond liquidity than some studies we reviewed, and we directly assess the sensitivity of our estimates of funding cost differences to the use of alternative controls.
• Multiple model specifications: To assess the sensitivity of our results to alternative measures of size, bond liquidity, and credit risk, we estimated multiple model specifications. We developed models using four alternative measures of size, two alternative sets of measures of capital adequacy, six alternative measures of volatility, and three alternative measures of bond liquidity. In contrast, some of the studies we reviewed estimated a more limited number of model specifications.
• Link between size and credit risk: To account for the possibility that investors’ beliefs about government rescues affect their responsiveness to credit risk, our models allow the relationships between bank holding company funding costs and credit risk to depend on size.
Altogether, we estimated 42 different models for each year from 2006 through 2013 and then used those models to compare bond yield spreads—our measure of bond funding costs—for bank holding companies of different sizes but with the same level of credit risk. 
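As a rough illustration of this setup, one such model can be sketched as an ordinary least squares regression of bond yield spreads on size, controls, and a size-by-credit-risk interaction. This is a greatly simplified stand-in for the 42 specifications described above: the data are simulated, the variable names are invented, and the controls are reduced to two stylized indexes.

```python
import math
import random

def ols(X, y):
    """Least squares via the normal equations (X'X) b = X'y,
    solved with Gaussian elimination and partial pivoting."""
    k, n = len(X[0]), len(X)
    A = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(k)]
         for p in range(k)]
    b = [sum(X[i][p] * y[i] for i in range(n)) for p in range(k)]
    for p in range(k):
        piv = max(range(p, k), key=lambda r: abs(A[r][p]))
        A[p], A[piv] = A[piv], A[p]
        b[p], b[piv] = b[piv], b[p]
        for r in range(p + 1, k):
            f = A[r][p] / A[p][p]
            for c in range(p, k):
                A[r][c] -= f * A[p][c]
            b[r] -= f * b[p]
    beta = [0.0] * k
    for p in range(k - 1, -1, -1):
        s = sum(A[p][c] * beta[c] for c in range(p + 1, k))
        beta[p] = (b[p] - s) / A[p][p]
    return beta

# Simulated bond-level data (illustrative only, not GAO's data): yield
# spreads generated from log total assets, a stylized credit-risk index,
# a stylized bond-liquidity measure, and a size-by-credit-risk interaction.
random.seed(0)
rows, spreads = [], []
for _ in range(400):
    la = random.uniform(22, 28)        # log of total assets, in dollars
    risk = random.gauss(0, 1)          # credit-risk index
    liq = random.gauss(0, 1)           # bond-liquidity measure
    y = (5.0 - 0.2 * la + 0.8 * risk - 0.3 * liq
         - 0.02 * la * risk + random.gauss(0, 0.2))
    rows.append([1.0, la, risk, liq, la * risk])
    spreads.append(y)

beta = ols(rows, spreads)

def predicted_spread(assets_dollars, risk=0.0, liq=0.0):
    """Model-implied yield spread for a given size and credit risk."""
    la = math.log(assets_dollars)
    x = [1.0, la, risk, liq, la * risk]
    return sum(bj * xj for bj, xj in zip(beta, x))

# Compare a $1 trillion and a $10 billion bank holding company at
# average (here, zero) credit risk and liquidity.
diff = predicted_spread(1e12) - predicted_spread(1e10)
print(f"spread difference, $1T minus $10B: {diff:.2f} percentage points")
```

Because the simulated size coefficient here is negative, the fitted model implies lower spreads for the larger institution; in the actual analysis the sign and magnitude of that difference in any given year are the empirical question, and the interaction term is what allows sensitivity to credit risk to vary with size.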
Figure 1 shows our models’ comparisons of bond funding costs for bank holding companies with $1 trillion in assets and average credit risk and bond funding costs for similar bank holding companies with $10 billion in assets, for each model and for each year. Each circle and dash in figure 1 shows the comparison for a different model. Circles show model-estimated differences that were statistically significant at the 10 percent level, while dashes represent differences that were not statistically significant at that level. Circles and dashes below zero correspond to models suggesting that bank holding companies with $1 trillion in assets have lower bond funding costs than bank holding companies with $10 billion in assets, and vice versa. For example, for 2013, a total of 18 models predicted statistically significant differences above zero and a total of eight models predicted statistically significant differences below zero. Our analysis provides evidence that the largest bank holding companies had lower funding costs during the 2007-2009 financial crisis but that these differences may have declined or reversed in recent years. However, we found that the outcomes of our econometric models varied with the various controls we used to capture size, credit risk, and bond liquidity. This variation indicates that uncertainty related to how to model funding costs has an important impact on estimated funding cost differences between large and small bank holding companies. As figure 1 shows, most models found that larger bank holding companies had lower bond funding costs than smaller bank holding companies during the 2007-2009 financial crisis, but the magnitude of the difference varied widely across models, as indicated by the range of results for each year. 
For example, for 2008, our models suggest that bond funding costs for bank holding companies with $1 trillion in assets and average credit risk were from 17 to 630 basis points lower than bond funding costs for similar bank holding companies with $10 billion in assets. Our models’ comparisons of bond funding costs for different-sized bank holding companies for 2010 through 2013 also vary widely. For bank holding companies with average credit risk, more than half of our models suggest that larger bank holding companies had higher bond funding costs than smaller bank holding companies from 2011 through 2013, but many models suggest that larger bank holding companies still had lower bond funding costs than smaller ones during this period. For example, for 2013, our models suggest that bond funding costs for average credit risk bank holding companies with $1 trillion in assets ranged from 196 basis points lower to 63 basis points higher than bond funding costs for similar bank holding companies with $10 billion in assets (see fig. 1). For 2013, 30 of our models suggest that the larger banks had higher funding costs, and 12 of our models suggest that the larger banks had lower funding costs. To assess how investors’ beliefs that the government will support failing bank holding companies have changed over time, we compared bond funding costs for bank holding companies of various sizes while holding the level of credit risk constant over time at the average for 2008—a relatively high level of credit risk that prevailed during the financial crisis. In these hypothetical scenarios, most models suggest that bond funding costs for larger bank holding companies would have been lower than bond funding costs for smaller bank holding companies in most years from 2010 to 2013. 
For example, most models for 2013 predict that bond funding costs for larger bank holding companies would be higher than for smaller bank holding companies at the average level of credit risk in that year, but would be lower at financial crisis levels of credit risk (see fig. 2). These results suggest that changes over time in funding cost differences we estimated (depicted in fig. 1) have been driven at least in part by improvements in the financial condition of bank holding companies. At the same time, more models predict lower bond funding costs for larger bank holding companies in 2008 than in 2013 when we assume that financial crisis levels of credit risk prevailed in both years, which suggests that investors’ expectations of government support have changed over time. However, it is important to note that the relationships between variables estimated by our models could be sensitive to the average level of credit risk among bank holding companies, making these estimates of the potential impact of the level of credit risk from 2008 in the current environment even more uncertain. Moreover, Dodd-Frank Act reforms discussed earlier in this statement, such as enhanced regulatory standards for capital and liquidity, could enhance the stability of the U.S. financial system and make such a credit risk scenario less likely. This analysis builds on certain aspects of prior studies, but our estimates of the relationship between the size of a bank holding company and the yield spreads on its bonds are limited by several factors and should be interpreted with caution. Our estimates of differences in funding costs reflect a combination of several factors, including investors’ beliefs about the likelihood that a bank holding company will fail, the likelihood that it will be rescued by the government if it fails, and the size of the losses that the government may impose on investors if it rescues the bank holding company. 
Like the methodologies used in the literature we reviewed, our methodology does not allow us to precisely identify the influence of each of these components. As a result, changes over time in our estimates of the relationship between bond funding costs and size may reflect changes in one or more of these components, but we cannot identify which with certainty. In addition, these estimates may reflect factors other than investors’ beliefs about the likelihood of government support and may also reflect differences in the characteristics of bank holding companies that do and do not issue bonds. If a factor that we have not taken into account is associated with size, then our results may reflect the relationship between bond funding costs and this omitted factor instead of, or in addition to, the relationship between bond funding costs and bank holding company size. Finally, our estimates are not indicative of future trends. After reviewing the draft report, Treasury provided general comments and Treasury, FDIC, the Federal Reserve Board, and OCC provided technical comments. In its written comments, Treasury commented that our draft report represents a meaningful contribution to the literature and that our results reflect increased market recognition that the Dodd-Frank Act ended “too big to fail” as a matter of law. While our results do suggest bond funding cost differences between large and smaller bank holding companies may have declined or reversed since the 2007-2009 financial crisis, we also found that a higher credit risk environment could be associated with lower bond funding costs for large bank holding companies than for small ones. 
Furthermore, as we have noted, many market participants we spoke with believe that recent regulatory reforms have reduced but not eliminated the perception of “too big to fail,” and both they and Treasury officials indicated that additional steps were required to address “too big to fail.” As discussed, changes over time in our estimates of the relationship between bond funding costs and size may reflect changes in one or more components of investors’ beliefs about government support—such as their views on the likelihood that a bank holding company will fail and the likelihood it will be rescued if it fails—but we cannot precisely identify the influence of each factor with certainty. In addition, Treasury and other agencies provided technical comments via email related to the draft report’s analysis of funding cost differences between large and small bank holding companies. We incorporated these comments into the report, as appropriate. A complete discussion of the agencies’ comments and our evaluation are provided in the report. Chairman Brown, Ranking Member Toomey, and Members of the Subcommittee, this concludes my prepared remarks. I would be happy to answer any questions that you or other Members of the Subcommittee may have. For future contacts regarding this statement, please contact Lawrance L. Evans, Jr. at (202) 512-4802 or at evansl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Other GAO staff who made significant contributions to this statement and the report it is based on include: Karen Tremba, Assistant Director; John Fisher (Analyst-in-Charge); Bethany Benitez; Michael Hoffman; Risto Laboski; Courtney LaFountain; Rob Letzler; Marc Molino; Jason Wildhagen; and Jennifer Schwartz. Other assistance was provided by Abigail Brown; Rudy Chatlos; Stephanie Cheng; and José R. Peña. This is a work of the U.S. 
government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony summarizes the information contained in GAO's Month Year report, entitled Large Bank Holding Companies: Expectations of Government Support, GAO-14-621. While views varied among market participants with whom GAO spoke, many believed that recent regulatory reforms have reduced but not eliminated the likelihood the federal government would prevent the failure of one of the largest bank holding companies. Recent reforms provide regulators with new authority to resolve a large failing bank holding company in an orderly process and require the largest bank holding companies to meet stricter capital and other standards, increasing costs and reducing risks for these firms. In response to reforms, two of three major rating agencies reduced or removed the assumed government support they incorporated into some large bank holding companies’ overall credit ratings. Credit rating agencies and large investors cited the new Orderly Liquidation Authority as a key factor influencing their views. While several large investors viewed the resolution process as credible, others cited potential challenges, such as the risk that multiple failures of large firms could destabilize markets. Remaining market expectations of government support can benefit large bank holding companies if they affect investors’ and customers’ decisions. GAO analyzed the relationship between a bank holding company’s size and its funding costs, taking into account a broad set of other factors that can influence funding costs. To inform this analysis and to understand the breadth of methodological approaches and results, GAO reviewed selected studies that estimated funding cost differences between large and small financial institutions that could be associated with the perception that some institutions are too big to fail. 
Studies GAO reviewed generally found that the largest financial institutions had lower funding costs during the 2007-2009 financial crisis but that the difference between the funding costs of the largest and smaller institutions has since declined. However, these empirical analyses contain a number of limitations that could reduce their validity or applicability to U.S. bank holding companies. For example, some studies used credit ratings, which provide only an indirect measure of funding costs. GAO’s analysis, which addresses some limitations of these studies, suggests that large bank holding companies had lower funding costs than smaller ones during the financial crisis but provides mixed evidence of such advantages in recent years. However, most models suggest that such advantages may have declined or reversed. GAO developed a series of statistical models that estimate the relationship between bank holding companies’ bond funding costs and their size or systemic importance, controlling for other drivers of bond funding costs, such as bank holding company credit risk. Key features of GAO’s approach include the following: • U.S. Bank Holding Companies: The models focused on U.S. bank holding companies to better understand the relationship between funding costs and size in the context of the U.S. economic and regulatory environment. • Bond Funding Costs: The models used bond yield spreads—the difference between the yield or rate of return on a bond and the yield on a Treasury bond of comparable maturity—to measure funding costs because they are a risk-sensitive measure of what investors charge bank holding companies to borrow.
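As a minimal arithmetic illustration of the yield spread measure in the last bullet (both yields here are hypothetical):

```python
# Yield spread as defined above: the bond's yield minus the yield on a
# Treasury bond of comparable maturity. Values are hypothetical.
bond_yield = 5.85        # percent, bank holding company bond
treasury_yield = 4.60    # percent, Treasury of comparable maturity
spread_pct = bond_yield - treasury_yield
spread_bps = round(spread_pct * 100, 1)   # expressed in basis points
print(spread_bps)
```

A wider spread means investors demand more compensation to lend to the bank holding company than to the Treasury, which is why the measure is sensitive to credit risk and, potentially, to expected government support.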
Each year, OMB and federal agencies work together to determine how much the government plans to spend for IT and how these funds are to be allocated. Federal IT spending has risen to an estimated $65 billion in fiscal year 2008. OMB plays a key role in overseeing the implementation and management of federal IT investments. To improve this oversight, Congress enacted the Clinger-Cohen Act in 1996, expanding the responsibilities delegated to OMB and agencies under the Paperwork Reduction Act. Among other things, Clinger-Cohen requires agency heads, acting through agency chief information officers, to better link their IT planning and investment decisions to program missions and goals and to implement and enforce IT management policies, procedures, standards, and guidelines. The act also requires that agencies engage in capital planning and performance and results-based management. OMB’s responsibilities under the act include establishing processes to analyze, track, and evaluate the risks and results of major capital investments in information systems made by executive agencies. OMB must also report to Congress on the net program performance benefits achieved as a result of major capital investments in information systems that are made by executive agencies. In response to the Clinger-Cohen Act and other statutes, OMB developed policy for planning, budgeting, acquisition, and management of federal capital assets. This policy is set forth in OMB Circular A-11 (section 300) and in OMB’s Capital Programming Guide (supplement to Part 7 of Circular A-11), which directs agencies to develop, implement, and use a capital programming process to build their capital asset portfolios. 
Among other things, OMB’s Capital Programming Guide directs agencies to evaluate and select capital asset investments that will support core mission functions that must be performed by the federal government and demonstrate projected returns on investment that are clearly equal to or better than alternative uses of available public resources; institute performance measures and management processes that monitor actual performance and compare to planned results; and establish oversight mechanisms that require periodic review of operational capital assets to determine how mission requirements might have changed and whether the asset continues to fulfill mission requirements and deliver intended benefits to the agency and customers. To further support the implementation of IT capital planning practices as required by statute and directed in OMB’s Capital Programming Guide, we have developed an IT investment management framework that agencies can use in developing a stable and effective capital planning process. Consistent with the statutory focus on selecting, controlling, and evaluating investments, this framework focuses on these processes in relation to IT investments specifically. It is a tool that can be used to determine both the status of an agency’s current IT investment management capabilities and the additional steps that are needed to establish more effective processes. Mature and effective management of IT investments can vastly improve government performance and accountability. Without good management, such investments can result in wasteful spending and lost opportunities for improving delivery of services to the public. Only by effectively and efficiently managing their IT resources through a robust investment management process can agencies gain opportunities to make better allocation decisions among many investment alternatives and further leverage their investments. However, the federal government faces enduring IT challenges in this area. 
For example, in January 2004 we reported on mixed results of federal agencies’ use of IT investment management practices. Specifically, we reported that although most of the agencies had IT investment boards responsible for defining and implementing the agencies’ investment management processes, agencies did not always have important mechanisms in place for these boards to effectively control investments, including decision-making rules for project oversight, early warning mechanisms, and/or requirements that corrective actions for underperforming projects be agreed upon and tracked. Executive-level oversight of project-level management activities provides organizations with increased assurance that each investment will achieve the desired cost, benefit, and schedule results. Accordingly, we made several recommendations to agencies to improve their practices. In previous work using our investment management framework, we reported that the use of IT investment management practices by agencies was mixed. For example, a few agencies that have followed the framework in implementing capital planning processes have made significant improvements. In contrast, however, we and others have continued to identify weaknesses at agencies in many areas, including immature management processes to support both the selection and oversight of major IT investments and the measurement of actual versus expected performance in meeting established performance measures. For example, we recently reported that the Department of Homeland Security and the Department of the Treasury did not have the processes in place to effectively select and oversee their major investments. To help ensure that investments of public resources are justified and that public resources are wisely invested, OMB began using its Management Watch List in the President’s fiscal year 2004 budget request, as a means to oversee the justification for and planning of agencies’ IT investments. 
This list was derived from a detailed review of the investments’ Capital Asset Plan and Business Case, also known as the exhibit 300. The exhibit 300 is a reporting mechanism intended to enable an agency to demonstrate to its own management, as well as OMB, that a major project is well planned in that it has employed the disciplines of good project management; developed a strong business case for the investment; and met other Administration priorities in defining the cost, schedule, and performance goals proposed for the investment. We reported in 2005 that OMB analysts evaluate agency exhibit 300s by assigning scores to each exhibit 300 based on guidance presented in OMB Circular A-11. As described in this circular, the scoring of a business case consists of individual scoring for 10 categories, as well as a total composite score of all the categories. The 10 categories include project (investment) management, a performance-based management system (including earned value management), life-cycle costs formulation, and support of the President’s Management Agenda. Projects are placed on the Management Watch List if they receive low scores (3 or less on a scale from 1 to 5) in the areas of performance goals, performance-based management systems, or security and privacy, or if they receive a low composite score. According to OMB, agencies with weaknesses in these three areas are to submit remediation plans addressing the weaknesses. OMB officials also stated that decisions on follow-up and monitoring of progress are typically made by staff with responsibility for reviewing individual agency budget submissions, depending on the staff’s insights into agency operations and objectives. 
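A simplified, hypothetical rendering of that placement rule can make the logic concrete. The three watched categories come from the description above; the composite threshold used here is an assumption for illustration, not OMB's published cutoff:

```python
# Hypothetical sketch of the Management Watch List placement rule:
# a project goes on the list if it scores 3 or less (on a 1-5 scale)
# in performance goals, performance-based management systems, or
# security and privacy, or if it receives a low composite score.
LOW_SCORE = 3
LOW_COMPOSITE = 3       # assumed composite cutoff, for illustration only
WATCHED_CATEGORIES = (
    "performance goals",
    "performance-based management system",
    "security and privacy",
)

def on_watch_list(scores, composite):
    """scores maps category name -> 1-5 score for one exhibit 300."""
    if composite <= LOW_COMPOSITE:
        return True
    return any(scores.get(cat, 5) <= LOW_SCORE for cat in WATCHED_CATEGORIES)

weak = {"performance goals": 4,
        "performance-based management system": 2,   # one weak area
        "security and privacy": 5}
strong = {cat: 5 for cat in WATCHED_CATEGORIES}
print(on_watch_list(weak, composite=4), on_watch_list(strong, composite=4))
```

In this sketch the first project lands on the list because a single watched category scored 3 or less, even though its composite score is acceptable, which mirrors the "low score in any of the cited areas or a low composite score" structure described above.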
According to OMB officials, those Management Watch List projects that receive specific follow-up attention receive feedback through the passback process, targeted evaluation of remediation plans designed to address weaknesses, the apportioning of funds so that the use of budgeted dollars is conditional on appropriate remediation plans being in place, and the quarterly e-Gov Scorecards. OMB removes projects from the Management Watch List as agencies remediate the weaknesses identified with these projects’ business cases. As originally defined in OMB Circular A-11 and subsequently reiterated in an August 2005 memorandum, high risk projects are those that require special attention from oversight authorities and the highest levels of agency management. These projects are not necessarily “at risk” of failure, but may be on the list for one or more of the following four reasons:
• The agency has not consistently demonstrated the ability to manage complex projects.
• The project has exceptionally high development, operating, or maintenance costs, either in absolute terms or as a percentage of the agency’s total IT portfolio.
• The project is being undertaken to correct recognized deficiencies in the adequate performance of an essential mission program or function of the agency, a component of the agency, or another organization.
• Delay or failure of the project would introduce for the first time unacceptable or inadequate performance or failure of an essential mission function of the agency, a component of the agency, or another organization.
Most agencies reported that to identify high risk projects, staff from the Office of the Chief Information Officer compare the criteria against their current portfolio to determine which projects meet OMB’s definition. They then submit the list to OMB for review. 
According to OMB and agency officials, after the submission of the initial list, examiners at OMB work with individual agencies to identify or remove projects as appropriate. According to most agencies, the final list is then approved by their Chief Information Officer. For the identified high risk projects, beginning September 15, 2005, and quarterly thereafter, Chief Information Officers are to assess, confirm, and document projects’ performance. Specifically, agencies are required to determine, for each of their high risk projects, whether the project is meeting each of four performance evaluation criteria: establishing baselines with clear cost, schedule, and performance goals; maintaining the project’s cost and schedule variances within 10 percent; assigning a qualified project manager; and avoiding duplication by leveraging inter-agency and governmentwide investments. If a high risk project fails to meet any of these four performance evaluation criteria, agencies are instructed to document this using a standard template provided by OMB and provide this template to oversight authorities (e.g., OMB, agency inspectors general, agency management, and GAO) on request. Upon submission, according to OMB staff, individual analysts review the quarterly performance reports of projects with shortfalls to determine how well the projects are progressing and whether the actions described in the planned improvement efforts are adequate, using other performance data already received on IT projects, such as the e-Gov Scorecards, earned value management data, and the exhibit 300. OMB and federal agencies have identified approximately 227 IT projects—totaling at least $10.4 billion in expenditures for fiscal year 2008—as being poorly planned, poorly performing, or both. Figure 1 shows the distribution of these projects and their associated dollar values. Each year, OMB places hundreds of projects totaling billions of dollars on the Management Watch List. 
Table 1 provides a historical perspective on the number of these projects and their associated budgets since OMB started reporting on the Management Watch List in the President's budget request for fiscal year 2004. The table shows that while the number of projects and their associated budgets have generally decreased since then, they increased by 83 projects this year and still represent a significant percentage of the total budget. As of July 2007, 136 projects, representing $8.6 billion, remained on the Management Watch List (see appendix I for the complete list). We determined that 29 of these projects were on the Management Watch List as of September 2006.

As of June 2007, when agencies last reported on their high risk projects to OMB, the 24 major agencies identified 438 IT projects as high risk, of which 124 had performance shortfalls, collectively totaling about $6.0 billion in funding requested for fiscal year 2008. Table 2 shows that the number of projects, as well as the number of projects with shortfalls, increased this year. OMB attributes this rise to increased management oversight by agencies. The majority of projects were not reported to have had performance shortfalls. In addition, five agencies—the departments of Energy, Housing and Urban Development, Labor, and State, and the National Science Foundation—reported that none of their high risk projects experienced any performance shortfalls. Figure 2 illustrates the number of high risk projects by agency as of June 2007, with and without shortfalls.

Agencies reported cost and schedule variances that exceeded 10 percent as the greatest shortfall. This is consistent with what they reported about a year ago, and the distribution of shortfall types is similar to last year's. Figure 3 illustrates the reported number and type of performance shortfalls associated with high risk projects. Appendix II identifies the shortfalls associated with each of the poorly performing projects.
Twenty-two high risk projects have experienced performance shortfalls for the past four quarters (see figure 4). Of these projects, the following six have had shortfalls since the High Risk List was established in September 2005:

the Department of Homeland Security's (DHS) Secure Border Initiative Net Technology Program, which is expected to provide on-scene agents near real-time information on attempted border crossings by illegal aliens, terrorists, or smugglers;

the Department of Agriculture's (USDA) Modernize and Innovate the Delivery of Agricultural Systems, which is intended to modernize the delivery of farm program benefits by deploying Internet-based self-service capabilities for customers and eliminating the department's reliance on aging technology and service centers as the sole means of delivering program benefits;

the Department of Veterans Affairs' (VA) VistA Imaging, which is to provide complete online patient data to health care providers, increase clinician productivity, facilitate medical decision-making, and improve quality of care;

DHS's Transportation Worker Identification Credentialing, which is to establish a system-wide common secure biometric credential, used by all transportation modes, for personnel requiring unescorted physical and/or logical access to secure areas of the transportation system;

the Department of Justice's (DOJ) Regional Data Exchange, which is expected to combine and share regional investigative information and provide powerful tools for analyzing the integrated data sets; and

VA's Patient Financial Services System, which is expected to create a comprehensive business solution for revenue improvement using improved business practices, commercial software, and enhanced VA clinical applications.

Thirty-three projects are on both the Management Watch List and the list of high risk projects with shortfalls, meaning that they are both poorly planned and poorly performing.
They total about $4.1 billion in estimated expenditures for fiscal year 2008. These projects are listed in table 3 below.

OMB has taken steps to improve the identification and oversight of the Management Watch List and high risk projects by addressing some of the recommendations we previously made, but additional efforts are needed to perform these activities more effectively and ultimately ensure that potentially billions of taxpayer dollars are not wasted. Specifically, we previously recommended that OMB take action to improve the accuracy and reliability of exhibit 300s and the application of the high risk projects criteria, and perform governmentwide tracking and analysis of Management Watch List and high risk project information. While OMB took steps to address our concerns, more can be done.

In January 2006, we noted that the underlying support for information provided in the exhibit 300s was often inadequate and that, as a result, the Management Watch List may be undermined by inaccurate and unreliable data. Specifically, we noted that

documentation either did not exist or did not fully agree with specific areas of all exhibit 300s;

agencies did not always demonstrate that they complied with federal or departmental requirements or policies with regard to management and reporting processes; for example, no exhibit 300 had cost analyses that fully complied with OMB requirements for cost-benefit and cost-effectiveness analyses; and

data for actual costs were unreliable because they were not derived from cost-accounting systems with adequate controls; in the absence of such systems, agencies generally derived cost information from ad hoc processes.

We recommended, among other things, that OMB direct agencies to improve the accuracy and reliability of exhibit 300 information.
To address our recommendation, in June 2006, OMB directed agencies to post their exhibit 300s on their websites within two weeks of the release of the President's budget request for fiscal year 2008. While this is a step in the right direction, the accuracy and reliability of exhibit 300 information is still a significant weakness among the 24 major agencies, as evidenced by a March 2007 study by the President's Council on Integrity and Efficiency and the Executive Council on Integrity and Efficiency, commissioned by OMB to ascertain the validity of exhibit 300s. Specifically, according to individual agency reports contained within the study, Inspectors General found that the documents supporting agencies' exhibit 300s continue to have accuracy and reliability issues. For example, according to these reports, the Agency for International Development did not maintain the documentation supporting exhibit 300 cost figures. In addition, at the Internal Revenue Service, the exhibit 300s were unreliable because, among other things, project costs were being reported inaccurately and progress on projects in development was measured inaccurately.

In June 2006, we noted that OMB did not always apply the criteria for identifying high risk projects consistently. For example, we identified projects that appeared to meet the criteria but were not designated as high risk. Accordingly, we recommended that OMB apply its high risk criteria consistently. OMB has since designated as high risk the projects that we identified. Further, OMB officials stated that they have worked with agencies to ensure a more consistent application of the high risk criteria. These are positive steps, as they result in more projects receiving the management attention they deserve. However, questions remain as to whether agencies are reporting all high risk projects with shortfalls.
For example, we have reported in our high risk series that the Department of Defense's efforts to modernize its business systems have been hampered by weaknesses in practices for (1) developing and using an enterprise architecture, (2) instituting effective investment management processes, and (3) establishing and implementing effective systems acquisition processes. We concluded that the department remains far from where it needs to be to effectively and efficiently manage an undertaking of such size, complexity, and significance. Despite these problems, the Department of Defense (DOD), which accounts for $31 billion of the government's $65 billion in IT expenditures, reported only three projects as being high risk with shortfalls, representing a total of about $1 million. The dollar value of DOD's three projects represents less than one-tenth of one percent of high risk projects with shortfalls. In light of the problems we and others have identified with many of DOD's projects, this appears to be an underestimation. Given the critical nature of high risk projects, it is particularly important to identify poorly performing ones early on, before their shortfalls become overly costly to address.

Finally, to improve the oversight of the Management Watch List projects, we recommended in our April 2005 report that the Director of OMB report to Congress on projects' deficiencies, agencies' progress in addressing risks of major IT investments, and management areas needing attention. In addition, to fully realize the potential benefits of using the Management Watch List, we recommended that OMB use the list as the basis for selecting projects for follow-up, track follow-up activities, and analyze the prioritized list to develop governmentwide and agency assessments of the progress and risks of IT investments, identifying opportunities for continued improvement. We also made similar recommendations to the Director of OMB regarding high risk projects.
Specifically, we recommended that OMB develop a single aggregate list of high risk projects and their deficiencies and use that list to report to Congress on progress made in correcting high risk problems, actions under way, and further actions that may be needed. To its credit, OMB started publicly releasing aggregate lists of the Management Watch List and high risk projects in September 2006 and has been releasing updated versions on a quarterly basis by posting them on its website. While this is a positive step, OMB does not publish the specific reasons that each project is placed on the Management Watch List, nor does it specifically identify why high risk projects are poorly performing, as we have done in appendix II. Providing this information would allow OMB and others to better analyze the reasons projects are poorly planned and performing, take corrective actions, and track these projects on a governmentwide basis. Such information would also help to highlight progress made by agencies or projects, identify management issues that transcend individual agencies, and highlight the root causes of governmentwide issues and trends. Such analysis would be valuable to agencies in planning future IT projects and could enable OMB to prioritize follow-up actions and ensure that high-priority deficiencies are addressed.

In summary, the Management Watch List and high risk projects processes play important roles in improving the management of federal IT investments by helping to identify poorly planned and poorly performing projects that require management attention. As of June 2007, the 24 major agencies had 227 such projects totaling at least $10 billion. OMB has taken steps to improve the identification of these projects, including implementing recommendations related to improving the accuracy of exhibit 300s and the application of the high risk projects criteria.
However, the number of projects may be understated because issues remain concerning the accuracy and reliability of the budgetary documents from which the Management Watch List is derived, and because high risk projects with shortfalls may not be consistently identified. While OMB can act to further improve the identification and oversight of poorly planned and poorly performing projects, we recognize that agencies must also take action to fulfill their responsibilities in these areas. We have addressed this in previous reports and made related recommendations. Until further improvements are made in the identification and oversight of poorly planned and poorly performing IT projects, potentially billions in taxpayer dollars are at risk of being wasted.

If you should have any questions about this testimony, please contact me at (202) 512-9286 or by e-mail at pownerd@gao.gov. Individuals who made key contributions to this testimony are Sabine Paul, Assistant Director; Neil Doherty; Amos Tevelow; Kevin Walsh; and Eric Winter.

The following provides additional detail on the investments comprising OMB's Management Watch List as of July 2007. Under the Clinger-Cohen Act of 1996, agencies are required to submit business plans for IT investments to OMB. If an agency's investment plan contains one or more planning weaknesses, it is placed on OMB's Management Watch List and targeted for follow-up action to correct potential problems prior to execution. We estimated the fiscal year 2008 request based on the data in the Report on IT Spending for Fiscal Years 2006, 2007, and 2008 (generally referred to as exhibit 53) and data provided by agencies.

The following provides additional detail on the high risk projects that had performance shortfalls as of June 2007. We estimated the fiscal year 2008 request based on the data in the Report on IT Spending for Fiscal Years 2006, 2007, and 2008 (generally referred to as exhibit 53) and data provided by agencies.
The Office of Management and Budget (OMB) plays a key role in overseeing federal information technology (IT) investments. The Clinger-Cohen Act, among other things, requires OMB to establish processes to analyze, track, and evaluate the risks and results of major capital investments in information systems made by agencies and to report to Congress on the net program performance benefits achieved as a result of these investments. OMB has developed several processes to help carry out its role. For example, OMB began using a Management Watch List several years ago as a means of identifying poorly planned projects based on its evaluation of agencies' funding justifications for major projects, known as exhibit 300s. In addition, in August 2005, OMB established a process for agencies to identify high risk projects and to report on those that are performing poorly. GAO testified last year on the Management Watch List and high risk projects, and on GAO's recommendations to improve these processes. GAO was asked to (1) provide an update on the Management Watch List and high risk projects and (2) identify OMB's efforts to improve the identification and oversight of these projects. In preparing this testimony, GAO summarized its previous reports on initiatives for improving the management of federal IT investments. GAO also analyzed current Management Watch List and high risk project information.

OMB and federal agencies have identified approximately 227 IT projects—totaling at least $10.4 billion in expenditures for fiscal year 2008—as being poorly planned (on the Management Watch List), poorly performing (on the High Risk List with performance shortfalls), or both. OMB has taken steps to improve the identification and oversight of the Management Watch List and high risk projects by addressing recommendations previously made by GAO; however, additional efforts are needed to perform these activities more effectively.
Specifically, GAO previously recommended that OMB take action to improve the accuracy and reliability of exhibit 300s and the consistent application of the high risk projects criteria, and perform governmentwide tracking and analysis of Management Watch List and high risk project information. In response to these recommendations, OMB, for example, started publicly releasing aggregate lists of Management Watch List and high risk projects by agency in September 2006 and has been updating them on a quarterly basis since then. However, OMB does not publish the reasons for placing projects on the Management Watch List, nor does it specifically identify why high risk projects are poorly performing. Providing this information would allow OMB and others to better analyze the reasons projects are poorly planned and performing, take corrective actions, and track these projects on a governmentwide basis. Such information would also help to highlight progress made by agencies or projects, identify management issues that transcend individual agencies, and highlight the root causes of governmentwide issues and trends. Until OMB makes further improvements in the identification and oversight of poorly planned and poorly performing IT projects, potentially billions in taxpayer dollars are at risk of being wasted.
Over the past decade, Congress and the executive branch have taken steps to improve the transparency of federal spending data. Congress passed and the President signed the Federal Funding Accountability and Transparency Act of 2006 (FFATA) to increase the availability of information about federal spending and improve the accountability over federal contracts and financial assistance awards. In response to FFATA, in December 2007, OMB established USAspending.gov to give the American public access to information on how their tax dollars are spent. More recently, the DATA Act, signed into law on May 9, 2014, expanded FFATA to link federal agency spending to federal program activities so that taxpayers and policymakers can more effectively track federal spending. To improve the quality of reported data, the DATA Act also requires that agency-reported award and spending information comply with new data standards that OMB and Treasury have established. The data standards, including the data elements, specify the items to be reported under the DATA Act and define and describe what is to be included in each element with the aim of ensuring that information will be consistent and comparable. The DATA Act technical schema, developed by Treasury, details the specifications for the format, structure, tagging, and transmission of each data element. The DATA Act requires GAO to issue reports in 2017, 2019, and 2021, assessing and comparing the quality of data submitted under the DATA Act as well as agency implementation and use of data standards. As we have previously reported, the DATA Act, if fully and effectively implemented, holds great promise for improving the transparency and accountability of federal spending data by providing consistent, reliable, and complete data on federal spending. In May 2015, OMB issued OMB Memorandum M-15-12 to federal departments and agencies directing them to submit DATA Act implementation plans to OMB. 
OMB directed that agencies submit their implementation plans concurrent with their fiscal year 2017 budget requests that were due September 14, 2015. In June 2015, OMB issued DATA Act Implementation Plans Guidance to assist agencies in completing their implementation plans. According to the guidance, agency implementation plans were to include four parts: (1) a timeline of tasks and steps that graphically displays the major milestones the agency expects to complete as part of the implementation process, (2) a cost estimate that includes costs for each activity and step in the timeline, (3) a narrative that summarizes the steps the agency will take to implement the DATA Act and any foreseeable challenges, and (4) a detailed project plan that reflects the major milestones in the agency's timeline and expands on the narrative.

In June 2015, Treasury issued the DATA Act Implementation Playbook (Version 1.0), which contains an explanation of the eight suggested steps and a timeline for agencies to use as they began to develop their plans for DATA Act implementation:

Step 1: Organize team and create an agency DATA Act work group including affected communities and identify senior accountable official (by spring 2015).

Step 2: Review list of elements and participate in data standardization process (by spring 2015).

Step 3: Perform inventory of agency data and associated business processes (February 2015 to September 2015).

Step 4: Design and strategize changes to systems and business processes to capture complete, multilevel data (e.g., summary and award detail) and prepare cost estimates for fiscal year 2017 budget projections (March 2015 to September 2015).

Step 5: Execute broker to map agency data to DATA Act schema, implement system changes, and extract data (October 2015 to February 2016).

Step 6: Test broker implementation and outputs to ensure that data are valid (October 2015 to February 2016).

Step 7: Update systems and implement other systems changes (October 2015 to February 2017).

Step 8: Submit data and update and refine process (March 2016 to May 2017).

The DATA Act Implementation Playbook (Version 1.0) indicates that agencies will be working on steps 5 through 8 throughout fiscal years 2016 and 2017. These eight steps were to be discussed in the narrative section of agency implementation plans. On June 24, 2016, Treasury issued the DATA Act Implementation Playbook (Version 2.0), which includes, among other things, expanded guidance on actions and steps to be included in steps 5 through 8.

In December 2015, OMB issued clarifying guidance in the form of a two-page Controller Alert that was narrowly focused on three areas of concern—a requirement to comply with data standards, a requirement to link award and account-level data, and a requirement to identify funding and awarding offices for financial assistance awards. In April 2016, Treasury issued technical requirements for implementation, including version 1.0 of the technical schema, known as the DATA Act Information Model Schema. This includes technical guidance for federal agencies about what data to report to Treasury, including the authoritative sources of the data elements and the submission format. In May 2016, OMB issued Additional Guidance for DATA Act Implementation: Implementing Data-Centric Approach for Reporting Federal Spending Information in Management Procedures Memorandum No. 2016-03. This memorandum provided additional guidance on new federal prime award reporting requirements and agency assurances and authoritative sources for reporting. Recently, a Treasury official testified that Treasury and OMB were leading implementation of the DATA Act with the goal of providing more accessible, searchable, and reliable spending data for the purposes of promoting transparency, facilitating better decision making, and improving operational efficiency.
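As an illustration of the playbook's steps 5 and 6, a broker's map-and-validate cycle can be sketched as follows. This is a hypothetical example: the source field names, schema element names, and validation rules are invented for illustration and are not taken from the actual DATA Act Information Model Schema.

```python
# Hedged sketch of a minimal "broker" (playbook steps 5 and 6): map agency
# source fields to a schema's element names, then validate the output.
# Field names, element names, and rules below are hypothetical examples.

FIELD_MAP = {                       # agency source field -> schema element
    "obligation_amt": "obligation_amount",
    "award_number":   "award_id",
    "tas":            "treasury_account_symbol",
}

REQUIRED = set(FIELD_MAP.values())

def map_record(source: dict) -> dict:
    """Step 5: translate an agency record into schema element names."""
    return {FIELD_MAP[k]: v for k, v in source.items() if k in FIELD_MAP}

def validate(record: dict) -> list:
    """Step 6: return a list of validation errors (empty means valid)."""
    errors = [f"missing element: {e}" for e in sorted(REQUIRED - record.keys())]
    amt = record.get("obligation_amount")
    if amt is not None and not isinstance(amt, (int, float)):
        errors.append("obligation_amount must be numeric")
    return errors

rec = map_record({"obligation_amt": 125000.0, "award_number": "FA8750-16-C-0001",
                  "tas": "097-0100", "internal_note": "ignore"})
print(validate(rec))  # [] -> valid
```

The point of the sketch is the shape of the process, not the specifics: fields the schema does not define are dropped during mapping, and the validation pass surfaces missing or malformed elements before submission.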
The Treasury official also previously testified that a well-thought-out implementation plan is one of the key factors in successful implementation. He stated that the plan Treasury had developed, in partnership with OMB, not only reflected the requirements and intent of the law but would also lead to a more data-driven government.

OMB and Treasury have formed a governance framework that provides initial structures for project management and development of data standards, with OMB being the lead agency for policy decisions and Treasury being the lead agency for technical issues and decisions regarding DATA Act implementation. At the top of this framework is an Executive Steering Committee, consisting of OMB's Controller and Treasury's Fiscal Assistant Secretary. The Executive Steering Committee sets overall policy and guidance, oversees recommendations, and makes key decisions affecting government-wide implementation of the act. OMB staff and Treasury officials stated that they have established a joint partnership for DATA Act implementation, and one of their joint activities was reviewing agency implementation plans.

According to OMB staff, OMB and Treasury are using an iterative approach for DATA Act implementation. According to the PMBOK® Guide, iterative processes are generally preferred when an organization needs to manage changing objectives and scope, to reduce the complexity of a project, or when the partial delivery of a product is beneficial and provides value for one or more stakeholder groups without impact to the final deliverable or set of deliverables. According to Treasury officials, the Treasury Program Management Office is using an agile approach to develop a mechanism for agencies to report spending data and make changes to USAspending.gov. According to the PMBOK® Guide, adaptive processes include agile methods and are intended to respond to high levels of change and ongoing stakeholder involvement.
Adaptive methods are also both iterative and incremental. OMB’s Digital Service team outlines agile and iterative guidance in the U.S. Digital Services Playbook, which reflects the principles that OMB and Treasury have stated they are using in their approaches to DATA Act implementation. Although OMB directed federal agencies to submit implementation plans (through issuance of OMB Memorandum M-15-12), as of July 2016, OMB had not determined the complete population of agencies that are required to report spending data under the DATA Act and submit implementation plans to OMB. Further, OMB and Treasury have not fully documented processes and controls for reviewing and using agencies’ DATA Act implementation plans to facilitate and monitor agencies’ progress against the implementation plans, to provide feedback to agencies, and to respond to reported challenges. In addition, OMB and Treasury initially informed us that they were not planning to require or request that agencies submit updated implementation plans for review that would consider new technical requirements and guidance that were released. However, on June 15, 2016, OMB requested that the 24 CFO Act agencies submit updates to key components of their implementation plans by August 12, 2016. Not knowing the complete population of agencies that are required to report under the DATA Act and not having fully documented processes and controls for reviewing and using agency DATA Act implementation plans increase the risk that the purposes and benefits of the DATA Act may not be fully achieved and could result in incomplete spending data being reported. 
Further, without updated implementation plans, including revised cost estimates and project plans that reflect the impacts of new technical requirements and guidance, from all agencies that are required by the DATA Act to report spending data, OMB and Treasury may not have the information needed to properly monitor resource needs and agencies' progress in implementing new requirements government-wide.

As of July 2016, OMB had not yet determined the complete population of agencies required to report under the DATA Act. According to OMB staff, OMB had not made this determination because of differing interpretations of how the DATA Act defines "federal agencies." This issue is not entirely new. We reported in June 2014, among other things, that it was unclear which agencies were required to report award data in accordance with FFATA because of differing interpretations of the funds that were exempt from reporting. We also reported that without clear OMB guidance defining the types of funds exempt from reporting, it is unclear whether agencies' justifications for not reporting are appropriate. OMB generally agreed with our recommendations to clarify guidance on reporting award information and stated that the recommendations were consistent with actions required by the DATA Act, but our recommendations have yet to be fully addressed.

Similarly, OMB and Treasury annually prepare the U.S. government's consolidated financial statements, a process that requires identifying the complete population of agencies that are required to report their annual audited financial information. As a result, Treasury has established a set of controls and procedures to validate the completeness of this population of agencies and help ensure that financial information is received from all agencies required to report.
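In spirit, the completeness control described above amounts to a set comparison between the authoritative population and the entities that actually submitted. The following sketch illustrates the idea with hypothetical agency names; Treasury's actual procedures are more involved.

```python
# Hedged sketch of a completeness check: compare the authoritative
# population of agencies required to report against those that actually
# submitted, and flag the gap. All agency names are hypothetical.

required_to_report = {"Department A", "Department B", "Commission C", "Agency D"}
submitted = {"Department A", "Commission C"}

# Agencies that should have reported but did not -> candidates for follow-up.
missing = sorted(required_to_report - submitted)

# Submissions from entities outside the known population -> the population
# list itself may be incomplete or out of date.
unexpected = sorted(submitted - required_to_report)

print(missing)     # ['Agency D', 'Department B']
print(unexpected)  # []
```

Both directions of the comparison matter: the first finds agencies to follow up with, and the second surfaces cases where the population determination itself needs updating, such as newly formed federal entities.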
Leveraging this existing process and its controls, with appropriate modifications, could help establish a complete population of agencies required to report under the DATA Act and reduce the risk of incomplete data being reported. On April 19, 2016, OMB's Controller testified that OMB would provide Congress with its determination of the population of agencies required to report under the DATA Act, while emphasizing that the 24 CFO Act agencies—which both OMB and the CFO Act agencies agree are all required to report—represent about 90 percent of federal spending. In May 2016, OMB and Treasury published guidance in the form of frequently asked questions to help federal agencies determine whether they are required to comply with the DATA Act. In addition, OMB staff stated that the agencies' general counsels could work with OMB to help agencies make these determinations. However, OMB does not have a process or plan in place to validate agency determinations or to periodically request updated agency determinations and initial determinations for newly formed federal entities.

The PMBOK® Guide, a globally recognized standard for project management, identifies defining the scope of the project—in this case, the population of agencies that are required to report—as a key project management practice. In addition, Standards for Internal Control in the Federal Government states that management—in this case, OMB—should periodically review policies, procedures, and related control activities for continued relevance and effectiveness in achieving the entity's objectives—in this case, successful implementation of the DATA Act. It is important for OMB and Treasury to determine on a periodic basis the complete population of agencies that are required to comply with DATA Act reporting requirements so that OMB and Treasury can follow up with agencies that have not reported and help ensure that they comply.
Without determining the complete population of agencies required to report under the DATA Act—including submitting an implementation plan and reporting spending data—there is an increased risk that financial and nonfinancial information reported to USAspending.gov may be incomplete. OMB and Treasury have not fully documented processes or designed and implemented controls specifically related to reviewing and using agency implementation plans to facilitate and monitor agencies’ progress against their implementation plans, to provide feedback to agencies, and to respond to reported challenges. OMB and Treasury officials confirmed that they do not have documented policies and procedures, or processes and controls, specifically for reviewing agency DATA Act implementation plans. OMB staff noted that the purpose for directing agencies to submit implementation plans was to use the implementation plan cost estimates to assist them in formulating the fiscal year 2017 budget. OMB staff stated that they have a documented process for budget formulation and that it was sufficient for their review of agency implementation plans. However, given the goals of the DATA Act, fully documented processes and controls for reviewing all information in the agencies’ implementation plans, such as reported challenges, implementation plan timelines, and detailed project plans, would be useful for facilitating agency implementation of DATA Act requirements and monitoring agency progress. Treasury officials, in their role working with OMB to lead DATA Act implementation, also reviewed agencies’ implementation plans and provided us with a list of the agency plans they reviewed. They described their review of implementation plans as a point-in-time review, the objective of which was to identify overarching government-wide issues and identify agencies that could benefit from one-on-one discussions with Treasury about their plans. 
A Treasury official noted that Treasury’s review of implementation plans was meant to encourage agencies to start considering how they would implement the DATA Act and facilitate initial discussions with the agencies. OMB and Treasury officials noted that the implementation plan is not their only tool for facilitating and monitoring agency implementation of the DATA Act. A Treasury official stated that they have a more holistic approach for engaging agencies through other tools and mechanisms, including office hours, sessions for agencies to test their data files, webinars, and interactive communication tools, which led to the development of the technical requirements and updates to USAspending.gov. OMB and Treasury described their processes for providing, documenting, and sharing feedback given to agencies in October and November 2015 regarding the agencies’ implementation plans. OMB and Treasury officials told us that based on their initial reviews of the agencies’ implementation plans, they provided feedback to certain agencies as needed through written comments, e-mails, discussions, conferences and forums, monthly calls with senior accountable officials, weekly project management office hours, and workshops. A Treasury official provided us with examples of the written feedback provided to selected agencies based on Treasury’s review of agencies’ implementation plans. However, OMB and Treasury did not provide us with documentation detailing the results of their assessments of government-wide issues or the possible impacts to their DATA Act planning activities or timelines resulting from their review of agencies’ implementation plans. In April 2016, the Controller of OMB testified that OMB and Treasury are monitoring and tracking agency progress against implementation plans through May 2017. The Controller stated that he and the Fiscal Assistant Secretary of Treasury are leading readiness discussions to encourage timely DATA Act implementation. 
OMB plans to complete these senior-level discussions in July 2016. However, OMB staff confirmed that they do not have documented policies and procedures for conducting or communicating the results of these discussions. OMB staff told us that these are unstructured, high-level reviews tailored to individual agencies. OMB staff stated that the agencies come forward with particular concerns, identify risks they are facing, and indicate where they need help from OMB and Treasury. According to Standards for Internal Control in the Federal Government, management should design control activities to achieve objectives and respond to risks. Applying these standards to this situation suggests that procedures for reviewing agencies’ implementation plans, including how to use information such as reported challenges in agencies’ plans, are control activities that would help OMB and Treasury achieve their objective to lead efforts to implement the DATA Act. In addition, according to the PMBOK® Guide, monitoring project performance should be done consistently and regularly, and should include tracking and reviewing the progress and performance of the project, identifying areas where changes are required, and initiating corresponding changes. It should also include collecting, measuring, and distributing performance information and assessing measurements and trends to effect process improvements. While OMB and Treasury staff noted that they also use other tools and mechanisms, such as office hours and webinars, to help support agencies in their implementation efforts, a well-developed agency implementation plan would be a key tool for OMB and Treasury to use to better monitor agencies’ efforts to implement the DATA Act. 
Without clearly documented processes and controls for reviewing and using agency implementation plans to facilitate and monitor agencies’ progress against their implementation plans, OMB and Treasury may not be able to fully determine resources, guidance, or other agency needs requiring actions by OMB and Treasury for the successful implementation of the DATA Act government-wide. OMB and Treasury staff initially told us that they were not going to require or request that agencies submit updated implementation plans for review although new requirements and information were subsequently released, such as additional guidance on DATA Act reporting that OMB issued in May 2016. The Controller of OMB later testified in April 2016 that OMB would request updated implementation plans from the agencies in June or July of 2016 after the technical schema has been issued; Treasury subsequently issued the schema on April 29, 2016. In a June 15, 2016 memorandum to CFO Act agencies, OMB requested that those agencies submit updated information on key components of their implementation plans by August 12, 2016. OMB requested that the updated information include a timeline with updated milestones and a narrative explaining the milestones, the agency’s progress to date, and updated risks and a risk mitigation strategy. OMB also requested updated information on the CFO Act agencies’ resources—funds spent on the effort to date as well as estimated total future spending. According to the PMBOK® Guide, updates arising from approved changes during the project may significantly affect parts of the project management plan and the project documents. Updates to these documents provide greater precision with respect to schedule, costs, and resource requirements to meet the defined project scope. 
For example, certain CFO Act agencies reported cost estimates for DATA Act implementation that ranged from $387,000 to $38.8 million for fiscal years 2015 through 2018, but these estimates are likely to change as a result of the technical requirement changes and additional guidance issued in April and May 2016. Furthermore, since agencies will be implementing steps 5 through 8—execute, test, update, and submit data—in the DATA Act Implementation Playbook (Version 2.0) throughout fiscal years 2016 and 2017, additional focus and details for these steps in the agency implementation plans may be needed to help accomplish those steps as the May 2017 implementation date draws nearer. OMB staff stated that they have focused OMB’s implementation efforts on the CFO Act agencies as they account for a large majority of federal government spending. OMB’s recent request for updated implementation plans from CFO Act agencies is a step in the right direction. However, the DATA Act is a government-wide initiative requiring full reporting of federal spending data that includes reporting beyond that of the CFO Act agencies. As discussed in this report, both CFO Act and other federal agencies submitted implementation plans to OMB. With the recent issuance of additional guidance and changes to technical requirements, agencies should be able to provide more extensive information in their project plans for completing steps 5 through 8 in OMB and Treasury’s implementation plan guidance. Without updated agency implementation plans from all agencies required to report under the DATA Act, including revised timelines and milestones, cost estimates, and updated risks that reflect the impacts of new technical requirements and guidance, OMB and Treasury may not have the information needed to assist them in properly monitoring resource needs and agencies’ progress in implementing new requirements government-wide. 
None of the 42 implementation plans we received and reviewed contained all 51 plan elements described in OMB and Treasury guidance. OMB’s DATA Act Implementation Plans Guidance outlined four categories of information to be included in agency implementation plans: (1) timeline, (2) cost estimate, (3) narrative, and (4) project plan. Based on OMB’s DATA Act Implementation Plans Guidance and Treasury’s DATA Act Implementation Playbook (Version 1.0), we identified 51 plan elements to be reported within these four categories. Appendix II lists the 51 plan elements in each category and the percentage of the 42 agencies that included each element in their implementation plans, as well as the combined average of agencies that included the elements in each of the four categories. Descriptions of the categories and the number of plan elements in each category are shown in table 1. Based on our review of agency implementation plans, we found that none of the 42 agencies’ plans included all of the 51 plan elements. For example, many agencies’ cost estimates were incomplete because the agencies did not provide assumptions for their estimates; identify resource requirements, such as full-time and part-time employees needed to assist implementation; or differentiate between their business process costs and technology costs. Table 2 shows the average inclusion rate—the average of the percentages of specific plan elements in a category that the 42 agencies included in their implementation plans. As shown above, the plan elements that were most often included were those in the timeline category. We determined that the average inclusion rate across the 11 plan elements of the timeline category was 74 percent. Of the four categories, the project plan category had the lowest average inclusion rate. 
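The average inclusion rate described above is simple arithmetic: for each plan element, take the percentage of the 42 agencies whose plans included it, then average those percentages across the elements in a category. A minimal sketch of that calculation, using hypothetical categories, elements, and flags rather than the actual review results:

```python
# Hypothetical review results: for each category, a list of per-element
# inclusion flags across agencies (True means an agency's plan included
# that element). Categories, elements, and values are illustrative only.
plans = {
    "timeline":      [[True, True, False], [True, True, True]],
    "cost estimate": [[True, False, False], [False, True, False]],
}

def average_inclusion_rate(elements):
    """Average, across a category's elements, of the percentage of
    agencies whose plans included each element."""
    per_element = [100 * sum(flags) / len(flags) for flags in elements]
    return sum(per_element) / len(per_element)

for category, elements in plans.items():
    print(f"{category}: {average_inclusion_rate(elements):.0f}%")
```

With the real data, the 11 per-element percentages in the timeline category would average to the 74 percent reported above.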
As shown in table 3, the 5 plan elements most often included in agency implementation plans were related to milestone dates, tasks, and changes to information technology systems (in the timeline category) and information on conducting an inventory of agency data (in the narrative category). As shown in table 4, the 5 plan elements that were most often not included in agency implementation plans were information on resource needs and dependencies (in the project plan category), information on which steps can be or have been done with existing resources (in the cost estimate category), notation on the project plan of which steps require OMB and Treasury action (in the project plan category), and information relating to procedures for verifying the completeness of data submitted to Treasury (in the narrative category). OMB guidance also requested that agencies report cost savings, if any. However, our review of the implementation plans found that no agencies reported cost savings. In our review of the 42 agency implementation plans, we found that the amount of information varied based on whether the agency was a CFO Act agency and whether the agency referred to a shared service provider. For example, we found that CFO Act agency plans were generally more complete than non-CFO Act agency plans, which highlights the importance of obtaining updated implementation plans from non-CFO Act agencies, as previously discussed. Table 5 shows differences between the 24 CFO Act agencies’ and the 18 non-CFO Act agencies’ implementation plans in their average inclusion rates across the four categories of DATA Act implementation plan elements. We also found that less information was provided in the plans for agencies that indicated they used a shared service provider than for those agencies that did not indicate they used a shared service provider. 
Agencies that indicated they used a shared service provider often made reference to their shared service provider’s implementation plan instead of identifying what steps the agency would take for a particular plan element to implement the DATA Act. Of the 42 agency implementation plans we reviewed, we found there were 26 agencies that did not make reference to using shared service providers and 16 agencies that referred to using a shared service provider for implementation. Table 6 shows that the average inclusion rate among agencies not referencing use of a shared service provider was higher for all four categories than among those agencies that referenced using a shared service provider. Given the lack of consistent and complete agency implementation plans, it may be difficult for OMB and Treasury to determine whether agencies will be able to implement the data standards finalized by OMB and Treasury in August 2015. In addition, the implementation plans we reviewed are no longer up to date and do not address the new technical requirements issued by Treasury in April 2016 and the guidance issued by OMB in May 2016 or provide all the necessary details needed to implement steps 5 through 8 of the DATA Act Implementation Playbook (Version 2.0). In June 2016, OMB requested that CFO Act agencies submit updated information on key components of their implementation plans by August 12, 2016. While OMB and Treasury noted that they have other tools and mechanisms to monitor agencies’ implementation efforts, without updated implementation plans from all agencies required to report under the DATA Act, it is unclear whether OMB and Treasury will have sufficient information to determine the full range of resources and guidance that will be needed to help ensure the successful government-wide implementation of DATA Act requirements by the May 2017 deadline. 
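The group comparisons in tables 5 and 6 follow the same arithmetic, computed separately for each subset of agencies (CFO Act versus non-CFO Act, or plans referencing a shared service provider versus those that do not). A minimal sketch of the split, again with hypothetical agencies and flags rather than the actual review results:

```python
# Hypothetical plans for one category: each agency has element-inclusion
# flags plus an attribute to split on (here, whether the plan referred to
# a shared service provider). All names and values are illustrative only.
agencies = [
    {"uses_ssp": False, "elements": [True, True, True, False]},
    {"uses_ssp": False, "elements": [True, True, False, True]},
    {"uses_ssp": True,  "elements": [True, False, False, False]},
]

def group_rate(agencies, uses_ssp):
    """Average inclusion rate for the agencies in one group."""
    group = [a["elements"] for a in agencies if a["uses_ssp"] == uses_ssp]
    per_agency = [100 * sum(flags) / len(flags) for flags in group]
    return sum(per_agency) / len(per_agency)

print(f"no shared service provider reference: {group_rate(agencies, False):.0f}%")
print(f"shared service provider reference:    {group_rate(agencies, True):.0f}%")
```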
Although OMB and Treasury have issued data standards and provided guidance and feedback to federal agencies on their DATA Act implementation plans, as discussed above, our work indicates that challenges remain and will need to be addressed to successfully implement the DATA Act government-wide. OMB’s DATA Act Implementation Plans Guidance was issued to the agencies, detailing what should be included in their implementation plans and asking agencies to describe any potential difficulties or foreseeable challenges that could hinder their implementation of the DATA Act. This guidance also encouraged agencies to provide suggestions to mitigate the challenges they foresee. As we testified in April 2016, our review of the 42 agency implementation plans we received, dated from August 2015 through January 2016, provides insight into the challenges agencies face as well as the mitigation strategies they suggest to address them. Based on our analysis of the agencies’ implementation plans, we believe that the challenges and mitigation strategies reported provide important insight into the level of effort, communication, collaboration, and resources needed to successfully implement the DATA Act government-wide. Based on the results of our review of the 42 agency implementation plans, we identified seven overarching categories of agency-reported challenges to effectively and efficiently implementing the DATA Act, as shown in table 7. The results of our review of the 42 agency implementation plans we received found that 31 agencies reported specific challenges, some of which may overlap with multiple categories. As shown in figure 1, agencies most frequently reported challenges with competing priorities, systems integration, and resources. See appendix III for examples of the types of challenges agencies reported in each category. The results of our review found that 26 agencies reported mitigation strategies in their implementation plans to address challenges. 
Some strategies discussed in the agency implementation plans address multiple challenges. As shown in figure 2, agencies reported crosscutting mitigation strategies to address specific areas of concern most frequently with respect to leveraging existing resources and communication and information sharing. See appendix III for examples of the mitigating strategies agencies reported in each category. Overall, our work indicates that agency implementation plans contain valuable information on a variety of challenges in implementing the DATA Act, including a lack of funding, inadequate guidance, tight time frames, competing priorities, and system integration issues. Agencies reported working closely with internal and external stakeholders to address these challenges as effectively as possible, but also reported that additional support from OMB and Treasury is needed for successful implementation of the DATA Act. Managing and overseeing government-wide projects such as DATA Act implementation requires a governance framework that includes structures for both project management and data governance. Agency DATA Act implementation plans are one of the tools that OMB and Treasury use to facilitate implementation of the DATA Act. However, they do not have fully documented processes and controls for reviewing and using agency implementation plans to monitor agencies’ progress against their plans, provide needed guidance or resources, and respond to challenges reported by the agencies. In addition, as of July 2016, OMB had not yet determined the complete population of federal agencies that are required to report spending data under the DATA Act and only requested that CFO Act agencies submit updated implementation plans to OMB. 
As a result, OMB and Treasury may not be fully informed of government-wide issues or concerns, which may impair their ability to help ensure that all agencies have the full range of resources and guidance needed to fully achieve the purposes and benefits of the DATA Act. In addition, without updated implementation plans from all agencies required to report under the DATA Act that reflect the impacts of new technical requirements and guidance on timelines and milestones, cost estimates, and risks, OMB and Treasury may not have complete information to properly monitor resource needs and progress in implementing new requirements government-wide. To sustain the progress that has been made, addressing these concerns will become even more important as the May 2017 agency implementation date draws nearer. To help ensure effective government-wide implementation and that complete and consistent spending data will be reported as required by the DATA Act, we recommend that the Director of OMB, in collaboration with the Secretary of the Treasury, take the following two actions related to oversight and monitoring of agencies’ progress: establish or leverage existing processes and controls to determine the complete population of agencies that are required to report spending data under the DATA Act and make the results of those determinations publicly available and reassess, on a periodic basis, which agencies are required to report spending data under the DATA Act and make appropriate notifications to affected agencies. 
To help ensure effective implementation of the DATA Act by the agencies and facilitate the further establishment of overall government-wide governance, we recommend that the Director of OMB, in collaboration with the Secretary of the Treasury, take the following three actions related to monitoring and use of agency implementation plans: establish documented policies and procedures for the periodic review and use of agency implementation plans to facilitate and monitor agency progress against the plans; request that non-CFO Act agencies required to report federal spending data under the DATA Act submit updated implementation plans, including updated timelines and milestones, cost estimates, and risks, to address new technical requirements; and assess whether information or plan elements missing from agency implementation plans are needed and ensure that all key implementation plan elements are included in updated implementation plans. We provided a draft of this report to the Director of OMB and the Secretary of the Treasury for review and comment. Both OMB and Treasury submitted written comments that are discussed below and reprinted in appendixes IV and V, respectively. In addition, OMB and Treasury provided technical comments, which we incorporated as appropriate. In its written comments, OMB generally concurred with our recommendations related to determining the population of agencies required to report under the DATA Act, but OMB stated that it maintains that each agency is responsible for determining whether it is subject to the DATA Act. OMB also stated that it and Treasury issued frequently asked questions clarifying the legal framework under which an agency would be subject to reporting and that agencies may consult with OMB for additional counsel. 
Although OMB agreed that complete reporting from federal agencies is a critical component of successful DATA Act implementation, we still have concerns about whether and how OMB, in coordination with Treasury, will help ensure completeness of the information reported at the government-wide level. In addition, OMB generally concurred with our recommendations related to the monitoring and use of agency implementation plans. OMB reiterated that it considered the initial implementation plans in the budget formulation process and used the plans for resource planning purposes. OMB also noted other outreach efforts we discussed in this report, including OMB and Treasury’s recent progress meetings (i.e., readiness discussions) held with each CFO Act agency’s senior accountable official and OMB’s request to CFO Act agencies for updates to their implementation plans to complement these meetings. OMB agreed that a more formalized process should be established for reviewing agency updates to implementation plans and stated that it would work to systematically report on the contents of the implementation plan updates. However, we are still concerned about OMB focusing primarily on the 24 CFO Act agencies. In its response, OMB reiterated its view that because the 24 CFO Act agencies represent over 90 percent of federal spending, they provide OMB with the visibility needed to address significant implementation challenges. We recognize that the CFO Act agencies represent the majority of federal spending, but as we discussed in this report, the DATA Act is a government-wide initiative requiring full reporting of all federal spending. Without updated implementation plan information from all agencies, OMB may not have all the information it needs to monitor resource needs and progress government-wide. 
In its written comments, Treasury noted that OMB would separately respond to the recommendation related to determining the population of agencies required to report under the DATA Act and that Treasury will continue to collaborate with and assist OMB on such matters. Regarding our recommendations related to agency implementation plans, Treasury stated that because OMB is requesting the updated implementation plan information, Treasury would defer to OMB on a decision to expand the request to non-CFO Act agencies and on the monitoring of the completeness of implementation plans. To the extent that Treasury undertakes a detailed review of updates to agency implementation plans in the future, Treasury stated that it will establish documented policies and procedures for its review of those plans. Treasury agreed that it has a responsibility to monitor agency progress and stated that it remains committed to that effort. We are sending copies of this report to the Director of the Office of Management and Budget, the Secretary of the Treasury, and appropriate congressional addressees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9816 or rasconap@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. This review is part of an ongoing effort to provide interim reports on the progress being made in the implementation of the Digital Accountability and Transparency Act of 2014 (DATA Act), while also meeting our reporting requirements mandated by the act. 
The objectives of this review were to determine (1) the extent to which the Office of Management and Budget (OMB) and the Department of the Treasury (Treasury) have processes and controls in place to review agencies’ implementation plans, monitor agencies’ progress, provide feedback to the agencies, and respond to challenges reported by the agencies; (2) the extent to which selected federal agencies’ DATA Act implementation plans were prepared in accordance with OMB and Treasury guidance; and (3) challenges agencies have reported that may affect their ability to implement the DATA Act and mitigating strategies they have reported to address such challenges. To address the first objective, we interviewed cognizant OMB and Treasury officials and requested supporting documentation to further understand the processes and internal controls that OMB and Treasury have related to their (1) reviewing of agencies’ implementation plans, (2) monitoring of agencies’ progress and providing of feedback on the implementation plans, and (3) responding to challenges reported by the agencies. Specifically, we made inquiries of OMB and Treasury officials on the processes they used to analyze agency implementation plans and how they communicated the results of their reviews government-wide and to individual agencies. We also made inquiries about their actions taken in response to the issues identified and the extent to which their reviews assist agencies in implementing the DATA Act. Further, we reviewed examples Treasury provided to us of correspondence between Treasury and agencies discussing feedback on agency implementation plans. We had discussions with OMB and Treasury officials to determine if there were any updates or revisions to agencies’ implementation plans or implementation status reports. 
We used Standards for Internal Control in the Federal Government and the Project Management Institute’s A Guide to the Project Management Body of Knowledge (PMBOK® Guide) to assess OMB’s and Treasury’s processes and controls that were in place from November 2015 through July 2016. For our second objective, we requested agencies’ DATA Act implementation plans from OMB and, at OMB’s request, requested them directly from 51 agencies that we identified based primarily on a listing of agencies in an OMB information system used to support OMB’s federal management and budget processes. The 51 agencies we identified included the 24 Chief Financial Officers (CFO) Act agencies, 13 other agencies significant to the Fiscal Year 2014 Financial Report of the United States Government, and 14 smaller federal agencies. However, we note that the 51 agencies we identified may not be all of the agencies required to report under the DATA Act. We received plans from 42 of these agencies; 9 agencies did not submit their plans for various reasons (see table 8). We did not validate the agencies’ determination that the DATA Act was not applicable to them or review shared service providers’ implementation plans because it was not within the scope of the audit. We reviewed OMB and Treasury guidance—OMB Memorandum M-15-12, DATA Act Implementation Plans Guidance, and DATA Act Implementation Playbook (Version 1.0). Based on this guidance and the PMBOK® Guide, we identified 51 specific plan elements for inclusion in an agency’s implementation plan if it was prepared in accordance with the guidance. The 51 plan elements were grouped into four separate categories: (1) timeline, (2) cost estimate, (3) narrative, and (4) project plan. 
According to OMB’s DATA Act Implementation Plans Guidance, agencies’ implementation plans should consist of multiple parts: (1) a timeline of tasks and steps toward implementing the requirements of OMB Memorandum M-15-12; (2) an estimate of costs to implement these tasks and steps; (3) a detailed narrative that explains the required steps the agency will take to implement the DATA Act, identifies the underlying assumptions, and outlines the potential difficulties and risks to successfully implement the plan; and (4) a detailed project plan that agencies will develop over time. See appendix II for a list of the 51 plan elements. We did not evaluate the quality of the information provided in the agencies’ plans, such as whether the implementation plan steps were sufficient to achieve successful implementation by the agencies, as this was outside the scope of this review. We reviewed OMB and Treasury guidance to agencies on preparing DATA Act implementation plans, assessed it against the PMBOK® Guide, and found that it was generally consistent. We then reviewed the implementation plans using a data collection instrument to document our assessment of the extent to which the plans contained the 51 plan elements. Appendix II contains the overall results of our review. For the third objective, we reviewed the 42 federal agency DATA Act implementation plans to identify any challenges and mitigating strategies reported by the agencies. We did not assess the significance of the challenges or merits of the mitigating strategies reported in the agencies’ plans. We also reviewed the 24 CFO Act agencies’ performance reports and agency financial reports for fiscal years 2014 and 2015, as well as the 27 other agencies’ financial reports available for fiscal year 2015, to identify any additional challenges or mitigating strategies reported; none were noted. 
We analyzed the information obtained and identified common themes and categories of challenges and mitigating strategies that the agencies reported. We coordinated our audit efforts with the inspector general (IG) community through monthly working group meetings to promote an efficient and effective audit process and avoid duplication of audit efforts. We plan to communicate the results of our review to individual agency IGs (upon request) to help inform their readiness reviews on issues or potential risk areas. The objective of IG readiness reviews is to allow an agency’s IG to gain an understanding of the agency’s processes and procedures implemented or planned to be implemented, and to assess and report on the quality and use of data standards of the financial and payment data in accordance with the requirements of the DATA Act. We conducted this performance audit from November 2015 to July 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. As we testified in April 2016, we identified seven overarching categories of challenges to effectively and efficiently implementing the Digital Accountability and Transparency Act of 2014 (DATA Act) as reported in agencies’ implementation plans. In our review of the 42 agency implementation plans we received, we found that 31 agencies reported specific challenges, some of which may overlap with multiple categories. We also found that 26 agencies identified mitigation strategies to address challenges, as suggested by Office of Management and Budget (OMB) guidance. Some strategies discussed in the agency implementation plans address multiple challenges. 
The following examples of agency-reported challenges were included in our April 2016 testimony, as well as some of the mitigating strategies reported. Competing priorities. Of the 31 agencies reporting challenges in their implementation plans, 23 reported competing statutory, regulatory, or policy priorities that could potentially affect DATA Act implementation. One competing priority certain agencies reported is meeting the requirements of OMB Circular No. A-11, which provides agencies with guidance on the budget process, including how to prepare and submit required materials for budget preparation and execution. For example, one agency noted that the different timelines for OMB Circular No. A-11 requirements on “object class” and “program activity” reporting create competing priorities both for the agency’s software vendors and for the agency’s internal resources. The agency noted that staff with knowledge needed to understand and comment on new DATA Act data element definitions are the same staff required to work on the new Circular No. A-11 reporting requirements. The agency added that its ability to engage effectively on the DATA Act requirements while working to implement the Circular No. A-11 changes is severely inhibited. Another competing priority some agencies reported is the data requirement set forth in the Federal Acquisition Regulation (FAR). Specifically, in October 2014 the FAR was amended to standardize the format of the Procurement Instrument Identifier (PIID) that must be in effect for new awards issued beginning in October 2017. The PIID must be used to identify all solicitation and contract actions, and agencies must ensure that each PIID used is unique government-wide for at least 20 years from the date of the contract award. Some agencies reported that they were concerned about the amount of effort involved in also implementing the PIID for the DATA Act. 
For example, one agency noted that it had implemented a standard PIID and developed processes and systems to handle the new identifiers to meet the FAR requirements, but the extent of any changes necessary to implement the PIID for the DATA Act, which also requires a unique identifier, is unknown. Another agency noted that this initiative and other agency initiatives will compete for many of the same resources, including subject matter experts. Systems integration. Systems integration is another challenge reported by 23 agencies in their implementation plans. Some agencies noted concerns about their systems’ ability to obtain and easily submit to the Department of the Treasury (Treasury) all the data elements needed to implement the DATA Act, including the requirement to establish a unique award identification number. For example, one agency reported that it does not have a systematic link to pull data from multiple systems by a unique award ID and does not have an automated grants management system; the agency noted that it reports grants data manually using spreadsheets. This agency noted that it needs to replace its financial system and modify supporting systems to fully comply with the DATA Act. Another agency noted that five of the required data elements are not included in its procurement and financial assistance system. As a result, the agency noted that it will have to modify its system’s software to include these elements in order to comply with the DATA Act. These statements from agency implementation plans indicate that given the vast number and complexity of systems government-wide that are potentially involved in DATA Act implementation efforts, agencies may face a variety of challenges related to systems integration. Resources. Limited resources are another concern reported by 22 agencies in their implementation plans. Agencies frequently identified funding and human resources as needs for efficient and effective implementation. 
For example, one agency noted that the execution of its implementation plan greatly depends on its receiving the requisite funding and human resources as estimated in the plan, and the agency added that delays in securing additional resources for fiscal years 2016, 2017, and beyond will have a direct impact on its DATA Act implementation and schedule. Similarly, another agency pointed out that having insufficient funds for contractor support, managing the overall implementation, testing interfaces between systems, and addressing data mapping issues will pose a challenge for its components and systems. Some agencies also reported that human resources are key to successful DATA Act implementation. One agency reported that it is concerned about the adequacy of its human resources, which could impair its ability to comply with changes or additional DATA Act requirements. In addition, the agency added that this may prevent it from being able to address any deficiencies in its data and operations. Specifically, the agency reported that resources are required for project management, data analysis, data management, and training for financial inquiry and analysis. The need for subject matter experts, such as data architects, was raised as a challenge by another agency. Furthermore, one agency noted that the need to share limited resources for DATA Act implementation with other operational activities presents a significant challenge for its implementation strategy.

Guidance. In their implementation plans, 19 agencies reported the lack of adequate guidance as a challenge to implementing the DATA Act. Several agencies noted that they cannot fully determine how their policies, business processes, and systems should be modified to support DATA Act reporting because, in their view, OMB and Treasury have not yet issued complete, detailed, finalized DATA Act implementation guidance on required data elements, the technical schema, and other key policies.
According to these agencies, issuance of such guidance is part of the critical path to meeting their implementation goals. For example, one agency noted that its implementation plan greatly depends on Treasury developing the technical schema for DATA Act implementation. The agency also reported that any delays or changes to Treasury requirements in the technical schema will significantly affect the agency’s solution design, development and testing schedule, and cost estimate. Another agency included a list of unanswered questions in its implementation plan that it wanted OMB to address in its guidance related to the time frames, various technical requirements, level of reporting, linking systems, and tracking and reconciling data.

Dependencies. Eighteen agencies reported in their implementation plans that the completion of certain implementation activities is subject to actions or issues that must be addressed by OMB and Treasury in order for the agencies to effectively implement the DATA Act. Some agencies also noted that they were relying on their shared service providers’ implementation of the DATA Act for agency compliance with the act. For example, one agency noted that it will rely on its shared service provider to enhance its system, but funding may be restricted to enhance a system that the agency does not own. Another key dependency noted in one agency’s implementation plan is the need for Treasury to provide detailed information or requirements regarding the data formats, validation module, error correction and resubmission process, and testing schedule. Without this information, the agency noted that it cannot provide complete cost estimates, determine changes to system and business processes, and determine the level of effort and resources required to develop the data submissions.

Time frames. In their implementation plans, 16 agencies reported time constraints as a challenge in implementing the DATA Act.
For example, one agency noted that the time frame for getting everything done indicated in the original guidance, coupled with the complexity of the known issues, makes it highly unlikely that its DATA Act initiative will stay on target. The agency also noted that there is no mitigation strategy for meeting the expected deadline on all aspects of the reporting because even if all tasks were worked concurrently, the schedule is not attainable for the agency. Another agency noted that its current reporting of award and awardee information to USAspending.gov is in accordance with the Federal Funding Accountability and Transparency Act of 2006. This information is reported 3 days after the award is made for contracts and bimonthly for financial assistance, while the DATA Act requires reporting of account-level information monthly where practicable but not less than quarterly. This agency noted that linking financial information with nonfinancial information that is reported with a different frequency creates a “moving target” and poses a challenge to linking the financial and nonfinancial data.

Other challenges. Agencies reported several other challenges in their implementation plans less frequently than the ones listed above. For example, a few agencies reported challenges related to the overall policies, procedures, and processes, such as governance, risk management, and training. Some agencies also noted challenges related to the level of detail required in DATA Act information differing from existing financial reporting processes, including the ability to reconcile information and data to sources and official records. Finally, agencies reported concerns about the quality and integrity of data in underlying agency systems and its effect on DATA Act reporting.

Leveraging existing resources.
To effectively use limited resources, some agencies noted in their implementation plans the importance of leveraging available systems and human resources by reassigning staff, using subject matter experts, and multitasking when possible to maximize efficiency. For example, one agency reported that it will leverage senior executive support to make the DATA Act implementation a priority and see what resources might be available in the “least expected places,” as well as work on tasks concurrently. In addition, agencies reported the need to update systems to encompass more data elements and streamline reporting. For example, one agency reported that it plans to designate a Chief Data Officer to oversee a multitiered review of agency data and implement solutions for consolidating agency data.

Communication and information sharing. In their implementation plans, some agencies reported the need for frequent communication with OMB, Treasury, shared service providers, vendors, and other agencies in order to keep one another updated on their implementation activities, as well as to share best practices and lessons learned throughout the process. Agencies also suggested that reviewing other agencies’ implementation plans for best practices, common challenges, and solutions would facilitate information sharing. For example, one agency pointed out that in its view lines of communication between Treasury and the agencies must be transparent to help ensure that the submission of financial data is accurate and the process for submitting them runs smoothly. Another agency noted that it believes that collaboration with other agencies to share common concerns will be beneficial.

Process and policy review/adaptation. To implement the DATA Act, agencies also plan to review and adapt their current processes and policies to incorporate the act’s requirements.
For example, one agency noted in its plan that it will develop a continuous process to analyze, plan, track, and control potential risks throughout DATA Act implementation. Another agency noted that it plans to align the implementation schedules for its system upgrades as closely as possible. In its plan, the agency pointed out that performing testing, independent validation and verification, and other tasks at the same time for both projects will save time. The agency’s plan also noted that this strategy will minimize the burden on both agency and contractor personnel while keeping them on track to meet the required completion dates. Furthermore, agencies reported that they will conduct reviews of their business processes and procedures to find gaps and help ensure that submitted data are complete and accurate. For example, an agency noted in its plan that it will conduct reconciliations to review the data processed and ensure that the submitted data match the supporting documentation.

Utilizing external resources. Agencies noted plans to use external resources in implementing the DATA Act. For example, several agencies’ plans noted that they plan to work closely with their shared service providers throughout the implementation process. One agency also noted that it will hire a contractor to assess cost and risk management procedures to determine their alignment with leading practices. Another agency noted that it intends to leverage access to external working groups for addressing concerns as well as decision making. Finally, one agency discussed the need to replace its current financial system in order to comply with DATA Act requirements.

Monitoring and developing guidance. In their implementation plans, agencies also discussed plans to closely monitor DATA Act implementation guidance in order to adapt agency implementation strategies as the guidance changes.
For example, one agency noted that it will monitor and evaluate the release of DATA Act guidance as well as data elements and the technical schema in order to identify the effect on the project. Another agency noted that it plans to use its established governance structure to immediately facilitate solutions when additional guidance is provided. Further, some agencies discussed developing guidance and training materials for internal use. For example, one agency stated that it plans to create a common set of tools by establishing a “project management toolkit” for agency leaders to ensure that DATA Act implementation needs are addressed efficiently and effectively.

Technical solutions. Some agencies plan to utilize various technical solutions as part of their DATA Act implementation plans. For example, one agency noted in its plan that it will leverage existing technologies and processes available to extract, transform, and load data, building on established and successful mappings to minimize cost and schedule impacts. Another agency’s plan noted that it may use an interface file that contains both the award ID and the document number used in its financial system to crosswalk between the financial detail and the award ID in its systems. One agency also noted that it will implement business intelligence and analytics tools, including the development of automated, multiple data reconciliations where feasible. Furthermore, another agency noted that it will engage system vendors to make system changes and thereby reduce the need for future custom development. According to the agency, this strategy will help manage its initial implementation costs.

In addition to the contact named above, Michael LaForge (Assistant Director); Carroll Warfield, Jr. (analyst-in-charge); Fred Evans; Thomas Hackney; Charles Jones; Diane Morris; and Laura Pacheco made major contributions to this report.
Other key contributors include Peter Del Toro, Kathleen Drennan, Doreen Eng, Patrick Frey, Jason Kelly, Jason Kirwan, Leticia Pena, Carl Ramirez, Michelle Sager, Andrew J. Stephens, and James Sweetman, Jr. Additional members of GAO’s Internal Working Group on the Digital Accountability and Transparency Act of 2014 also contributed to the development of this report.

DATA Act: Section 5 Pilot Design Issues Need to Be Addressed to Meet Goal of Reducing Recipient Reporting Burden. GAO-16-438. Washington, D.C.: April 19, 2016.
DATA Act: Progress Made but Significant Challenges Must Be Addressed to Ensure Full and Effective Implementation. GAO-16-556T. Washington, D.C.: April 19, 2016.
DATA Act: Data Standards Established, but More Complete and Timely Guidance Is Needed to Ensure Effective Implementation. GAO-16-261. Washington, D.C.: January 29, 2016.
Federal Spending Accountability: Preserving Capabilities of Recovery Operations Center Could Help Sustain Oversight of Federal Expenditures. GAO-15-814. Washington, D.C.: September 14, 2015.
DATA Act: Progress Made in Initial Implementation but Challenges Must Be Addressed as Efforts Proceed. GAO-15-752T. Washington, D.C.: July 29, 2015.
Federal Data Transparency: Effective Implementation of the DATA Act Would Help Address Government-wide Management Challenges and Improve Oversight. GAO-15-241T. Washington, D.C.: December 3, 2014.
Government Efficiency and Effectiveness: Inconsistent Definitions and Information Limit the Usefulness of Federal Program Inventories. GAO-15-83. Washington, D.C.: October 31, 2014.
Data Transparency: Oversight Needed to Address Underreporting and Inconsistencies on Federal Award Website. GAO-14-476. Washington, D.C.: June 30, 2014.
Federal Data Transparency: Opportunities Remain to Incorporate Lessons Learned as Availability of Spending Data Increases. GAO-13-758. Washington, D.C.: September 12, 2013.
Government Transparency: Efforts to Improve Information on Federal Spending. GAO-12-913T. Washington, D.C.: July 18, 2012.
Electronic Government: Implementation of the Federal Funding Accountability and Transparency Act of 2006. GAO-10-365. Washington, D.C.: March 12, 2010.
Federal Contracting: Observations on the Government’s Contracting Data Systems. GAO-09-1032T. Washington, D.C.: September 29, 2009.
The federal government annually spends over $3.7 trillion on its programs and operations. To help increase the transparency of online spending information, the DATA Act requires agencies to begin reporting spending data by May 2017, using new data standards established by OMB and Treasury. In May 2015, OMB directed federal agencies to submit DATA Act implementation plans by September 2015. OMB and Treasury subsequently issued guidance to agencies to help them develop plans. This report is part of a series of products that GAO will provide to Congress in response to a statutory provision to review DATA Act implementation. This report discusses OMB's and Treasury's efforts to facilitate implementation of the DATA Act and the consistency of agency implementation plans with OMB and Treasury guidance, among other things. GAO evaluated OMB's and Treasury's processes against project management and internal control criteria, assessed selected agency implementation plans against OMB and Treasury guidance, and interviewed staff and officials at OMB and Treasury.

The Office of Management and Budget (OMB) and the Department of the Treasury (Treasury) have not designed and implemented controls or fully documented processes related to the review and use of agency implementation plans for the Digital Accountability and Transparency Act of 2014 (DATA Act). These controls and processes are to be used for reviewing agencies' implementation plans and monitoring agencies' progress against these plans. In addition, as of July 2016, OMB had not determined the complete population of agencies that are required to report spending data under the DATA Act and submit implementation plans to OMB.
OMB staff stated that their purpose for directing agencies to submit implementation plans was to use the implementation cost estimates to assist them in formulating the fiscal year 2017 budget, while Treasury officials stated that the purpose of their review of the plans was to facilitate discussions with the agencies. In addition, OMB and Treasury staff initially informed GAO that they were not going to request that agencies submit updated implementation plans that considered new technical requirements and guidance that was released on April 29, 2016. However, on June 15, 2016, OMB requested updated implementation plans by August 12, 2016, but only from Chief Financial Officers (CFO) Act agencies. Lacking fully documented controls and processes as well as a complete population of agencies that are required to report under the DATA Act increases the risk that the purposes and benefits of the DATA Act may not be fully achieved, and could result in incomplete spending data being reported. Further, without updated implementation plans, including revised timelines and milestones, cost estimates, and risks that reflect the impacts of new technical requirements and guidance, from all agencies that are required to report under the DATA Act, OMB and Treasury may not have the information needed to assist them in properly monitoring resource needs and agencies' progress in implementing new requirements government-wide.

Based on OMB and Treasury guidance, GAO identified 51 plan elements in four separate categories—timeline, cost estimate, narrative, and project plan—to be included in agency implementation plans. None of the 42 implementation plans GAO received and reviewed contained all 51 plan elements described in OMB and Treasury guidance. For example, many agencies' cost estimates did not provide all the elements for cost estimates, including total work years and a list of assumptions, or did not differentiate between their business process costs and technology costs.
GAO recommends that OMB, in collaboration with Treasury, determine the population of agencies required to report under the DATA Act, establish fully documented controls and processes to help ensure agencies' effective implementation of the DATA Act, and request updated plans from non-CFO Act agencies. OMB generally concurred with the recommendations and Treasury deferred to OMB.
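Several of the systems-integration challenges agencies reported center on linking financial detail to award records through a unique award ID. One agency's plan, for example, described using an interface file containing both the award ID and the financial system's document number as a crosswalk between the two. The sketch below illustrates that kind of crosswalk join; all field names and records are hypothetical, and unmatched document numbers are simply flagged, in the spirit of the reconciliation reviews some agencies described.

```python
# Hypothetical illustration of the crosswalk approach described in one
# agency's plan: financial systems key transactions by an internal
# document number, while award systems key records by a unique award ID.
# An interface file maps one identifier to the other so financial detail
# can be linked to award records for reporting.

financial_detail = [
    {"document_number": "DOC-0001", "obligation": 125000.00},
    {"document_number": "DOC-0002", "obligation": 48000.00},
    {"document_number": "DOC-0003", "obligation": 9000.00},
]

award_records = {
    "AWD-17-001": {"recipient": "Example University"},
    "AWD-17-002": {"recipient": "Example Corp"},
}

# Interface (crosswalk) file: document number -> unique award ID.
crosswalk = {
    "DOC-0001": "AWD-17-001",
    "DOC-0002": "AWD-17-002",
}

def link_financial_to_awards(financial, awards, xwalk):
    """Join financial transactions to award records via the crosswalk.

    Returns the linked rows plus any document numbers that could not be
    mapped, so they can be reconciled against supporting documentation.
    """
    linked, unmatched = [], []
    for row in financial:
        award_id = xwalk.get(row["document_number"])
        award = awards.get(award_id)
        if award is None:
            unmatched.append(row["document_number"])
        else:
            linked.append({**row, "award_id": award_id, **award})
    return linked, unmatched

linked, unmatched = link_financial_to_awards(
    financial_detail, award_records, crosswalk
)
# DOC-0003 has no crosswalk entry, so it is flagged for reconciliation.
```

In practice the records would come from an agency's procurement, grants, and financial systems; the point of the sketch is only that the interface file supplies the join key that neither system holds for the other.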
Biobased products are industrial and consumer goods composed wholly, or in significant part, of biological products, renewable domestic agricultural materials (including plant, animal, and marine materials), or forestry materials. These biological products and agricultural and forestry materials are generally referred to as biomass. Corn, soybeans, vegetable (plant) oils, and wood are the primary sources used to create biobased products. In some cases, these biobased sources are combined with other materials such as petrochemicals or minerals to manufacture the final product. For example, soybean oil is blended with other components to produce paints, toiletries, solvents, inks, and pharmaceuticals. However, some biobased products, such as corn starch adhesives, are derived entirely from the plant feedstock. Table 1 provides further information on biobased products made from plant-based resources. Appendix II lists sources for additional information on these and other biobased products.

The many derivatives of corn illustrate the diversity of products that can be obtained from a single plant-based resource. In addition to being an important source of food and feed, corn serves as a source for ethanol and sorbitol, industrial starches and sweeteners, citric and lactic acid, and many other products. Figure 1 shows the many uses of corn, including its industrial uses.

Biomass resources are naturally abundant and renewable, unlike fossil resources. According to DOE, in the continental United States, about 500 to 600 million tons of plant matter can be grown and harvested annually in addition to our food and feed needs. These abundant resources can be used in the growing biobased products industry to help meet the nation’s demand for energy and products while reducing its dependence on imported oil.
In addition, supplementing petroleum resources with biomass can provide other important benefits such as growth in rural economies and lower emissions of greenhouse gases and pollutants. According to DOE, the impacts of the growing biobased products industry on rural economies have yet to be quantified, but these impacts could be very positive. Expanding this industry will require an increase in production and processing of biomass that could provide a boost to rural areas. For example, expansion could create new cash crops for farmers and foresters, many of whom currently face economic hardship. In essence, this growth could move the agricultural and forestry sectors beyond their traditional roles of providing food, feed, and fiber to providing feedstock for the production of fuels, power, and industrial products—making these sectors an integral part of the transportation and industrial supply chain. In addition, development of a larger biobased products industry would require new processing, distribution, and service industries. In general, these industries would likely need to be located in rural communities close to the feedstock and could potentially result in positive impacts on rural communities through increased investment, income, taxes, and employment opportunities. Regarding environmental benefits, biomass is carbon-fixing, and represents a way to produce fuels, power, and products without contributing to global warming, according to DOE. Although some fossil resource inputs may be needed for the production of biomass and biobased products—such as fuel to run farm equipment, petrochemical fertilizers and pesticides to produce the biomass, and the energy needed to manufacture the biobased products made from this biomass—biomass removes carbon dioxide, a significant greenhouse gas, from the atmosphere through photosynthesis. 
The carbon component is then fixed, or bound up, in the biomass and stays in the biobased product made from this biomass for a relatively long period of time before it is released through biological decay. According to DOE, when petroleum is used as the feedstock to manufacture many products, such as plastics, up to 25 percent of the carbon in the petroleum is lost to the atmosphere during production. However, producing these products directly from biomass reduces the carbon released during production and increases carbon-fixing plant matter. In addition, as a renewable resource, biomass represents a way to recycle carbon in the environment; in contrast, the use of fossil resources results in a net release of carbon to the environment. Finally, many biobased products are readily biodegradable, meaning they can be safely placed into a landfill, composted, or recycled and do not emit hazardous volatile organic compounds or toxic air pollutants.

According to DOE, the potential for biobased products to move into entirely new and nonconventional markets is substantial. New biobased products with improved economic and/or environmental performance could make significant inroads in markets historically dominated by other materials. For example, according to the Biobased Manufacturers Association, about 300 companies are now producing nearly 800 biobased products to replace other materials. These companies include a number of major corporations or their subsidiaries. In addition, increasing environmental consciousness has created “green consumerism”—a segment of consumers who are willing to pay more for products that are less harmful to the environment. Currently, many of those “green” products are biobased, such as corn-based plastic ware, soy-based engine lubricants, and citrus-based household cleaners.
Perhaps the greatest factor driving the growth of biobased products will be their acceptance by the public, business enterprises, and government as a solution to some of the nation’s most pressing resource problems. However, according to USDA, it often takes 15 to 20 years for a new material to be accepted and adopted by industry; and consumers, businesses, and government procurement officials are often reluctant to switch from familiar products to new ones. Thus, to make significant inroads, biobased products will need to be environmentally sound and competitive with traditional products in both performance and cost. The increased use of these products will also require favorable government policies, such as continued support for biobased research and development and affirmative procurement programs that emphasize biobased purchases for government needs. In addition, their increased use will depend on the nation’s continued desire to reduce its dependence on imported oil and further technology improvements that will lead to new applications and more efficient production of biobased products. In the last 10 years, the federal government has taken steps to promote the use of biobased products. For example, the President issued an executive order in 1998, replacing a similar executive order issued in 1993, to encourage federal agencies to buy products that are environmentally preferable and/or biobased. A subsequent executive order was issued in 1999 with the aim of tripling the nation’s use of biobased fuels and products by 2010. Regarding legislation, the Biomass Research and Development Act of 2000 directs DOE and USDA to closely coordinate their research and development efforts on new technologies for the use of biomass in the production of biobased industrial products. The 2002 farm bill reauthorized the biomass act, continued funding for biomass research and development programs, and set forth federal agency purchasing requirements for biobased products. 
The legislative history of the farm bill states that Congress enacted the biobased provisions to energize new markets for these products and to stimulate their production. With respect to promoting federal purchases of biobased products, the 1998 executive order required USDA to issue a Biobased Products List by March 1999. Once the list was published, federal agencies were encouraged to modify their procurement programs to give consideration to biobased products. USDA published a notice in the Federal Register on August 13, 1999, to solicit public comments on a process for considering items for inclusion on this list and on criteria for identifying these items. As we reported in June 2001, USDA expected to complete this list by fiscal year 2002—3 years later than the executive order required. However, USDA did not complete the list because the 2002 farm bill set out new biobased purchasing requirements for USDA to implement. In the meantime, although the Federal Acquisition Regulation was amended to implement the executive order, federal agencies generally were waiting for USDA to publish a list before making any final decisions or modifications to their procurement programs. Whereas the executive order encouraged, but did not require, federal agencies to purchase biobased products, the farm bill generally requires that agencies give preference to these products. While USDA was faced with an ambitious task, its actions and consequently those of other agencies to implement the farm bill requirements for purchasing biobased products have been limited. USDA issued proposed guidelines in the Federal Register on December 19, 2003, more than a year later than the farm bill requirement for final guidelines. These guidelines take only limited steps toward meeting the requirements of the farm bill. 
While the guidelines recommend some procurement practices and practices for vendor certification, they do not identify items designated for preferred procurement or provide information on their availability, relative price, performance, and environmental and public health benefits. Although USDA hopes to have some items designated before the end of calendar year 2004, the process for designating other items discussed in the preamble to the proposed guidelines will take years, possibly until 2010. In addition, as new biobased products are developed and enter the market, these items will also need to be designated. Regarding other biobased-related requirements of the farm bill, USDA has not yet developed a labeling or recognition program or completed its work on preferred procurement practices known as the model procurement program to guide both its own biobased purchases and those of other agencies. In the meantime, as the top four procuring agencies await USDA’s fulfillment of these requirements—particularly the designation of items for preferred procurement—they have taken only limited steps to procure biobased products. For example, some agencies are purchasing biobased cleaners, lubricants, deicers, and/or dining ware because these products are readily biodegradable and compostable.

USDA’s proposed guidelines only partially meet the requirements of the farm bill. While the guidelines recommend some procurement practices and practices for vendor certification, they do not identify items designated for preferred procurement or provide information on their availability, relative price, performance, and environmental and public health benefits. However, in the preamble to the guidelines, USDA discusses possible items for future designation. In the preamble, USDA has grouped these items by category, with each category consisting of one or more items and each item consisting of one or more branded biobased products.
For example, “Lubricants and Functional Fluids” is one suggested category, hydraulic fluids is an item within that category subject to designation, and “ABC Hydraulic Fluid” made by the ABC company is a branded biobased product related to that item. At present, the preamble discusses 11 categories of items and suggests a minimum biobased content for the items in these categories. Appendix III provides a complete list of these categories and the items listed under each, as well as additional information on provisions of the proposed guidelines. However, the proposed guidelines do not designate any items for preferred procurement given that USDA has not yet considered the availability of these items or the economic or technological feasibility, including life-cycle costs, of these items as required by the farm bill. Under the proposed rule, once an item is designated, manufacturers will be able to certify that their biobased products meet the characteristics of a designated item. USDA has established a biobased information Web site for this purpose. USDA anticipates that federal procuring agencies will use this Web site to obtain current information on designated items, contact information on manufacturers and vendors, and access to information on product characteristics relevant to procurement decisions. In addition, USDA anticipates that as the biobased product industry develops, new items and associated products will enter the market. Thus, new items will be designated, as necessary. In addition, USDA has provided only minimal information on recommended procurement practices pending completion of its model procurement program. For example, the proposed guidelines discuss the tests that should be used to establish the content, performance characteristics, and/or life-cycle costs of a product, including the applicable standards or specifications.
However, USDA officials said the model procurement program, when complete, will contain considerably more guidance on recommended procurement practices. USDA expects to issue the final version of these proposed guidelines by April 2004, but it does not expect to have adequate information for designating more than a few items before the end of calendar year 2004. USDA estimates that it will complete the overall blueprint for a comprehensive, model procurement program by September 2004, and will have many of the specific components of the program under development or tested and implemented by that time. The process for designating items will be time consuming. For example, to designate the items discussed in the preamble to USDA’s proposed guidelines, USDA will likely initiate a number of rulemakings over a period of years. According to the timeline provided by New Uses staff, this process will likely not be completed until early 2010. USDA officials noted that these rulemakings may not correspond to the 11 product categories discussed in the preamble; the agency’s ability to move forward with designating individual items will depend on the availability of information needed for this purpose. As a result, a given rulemaking may address items that span two or more categories. For each rulemaking, a proposed rule would be developed and published first, followed by a 30- or 60-day comment period, the time needed to consider these comments, and then publication of the final rule. USDA must also complete its work on its recommended procurement practices (the model procurement program), the voluntary recognition program, and the voluntary labeling program. According to USDA officials, the model procurement program serves two purposes. First, it will constitute USDA’s biobased procurement program. All federal agencies, including USDA, are required to develop such a program. 
Second, the model program will serve as a guide to other agencies in developing their own preferred procurement programs. USDA officials explained that this will fulfill the farm bill requirement placed on USDA to recommend procurement practices. USDA plans to incorporate the voluntary recognition program into its model procurement program. In addition, once the model procurement program is complete, USDA plans to seek a change to the Federal Acquisition Regulation to reflect these procurement practices. Changes to this regulation also require a rulemaking. Finally, USDA plans to address requirements for the labeling program in a future rulemaking. Considering the amount of work that remains to be done to fulfill the farm bill requirements, it seems likely that USDA’s fulfillment of these requirements will take years, particularly for the designation of items for preferred procurement that were discussed in the preamble of USDA’s proposed rule. Thus, although the farm bill required USDA to promulgate guidelines, including the designation of items for procurement, within 180 days of the legislation’s enactment—by November 2002—it is not likely that the designation of all of the items discussed in the preamble to the proposed guidelines will be completed until the spring of 2010, according to USDA estimates. However, the agency hopes to have at least some of these items designated by the end of calendar year 2004. In addition, as the farm bill recognizes by allowing USDA to revise its guidelines from time to time, the process of designating items is a continuing one, as new biobased items will continue to enter the market. Appendix IV provides a timeline showing the chronology of steps USDA plans to take to fulfill the farm bill requirements for the federal procurement of biobased products. Without final USDA guidelines designating items for preferred procurement, the top four procuring agencies generally are reluctant to undertake an agencywide biobased procurement program.
Officials from these agencies indicated that until they clearly understand whether a product meets USDA’s definition of a biobased product, it would not be advantageous to establish a purchasing program agencywide. However, even though these agencies have not implemented their own biobased procurement programs, we found that some of them have procured limited quantities of biobased products. For example: The Defense Logistics Agency (DLA)—the supplier for DOD and several civilian agencies—has procured and is now testing such biobased products as food service cutlery for service personnel overseas and hydraulic fluid for military helicopters. According to DLA officials, these products are appealing—assuming they meet necessary performance specifications—because they are readily biodegradable, which may make them easier to dispose of. These officials indicated that they are working closely with USDA to ensure that the products tested will ultimately be products that will meet USDA’s criteria for biobased products. However, these officials stated that their agency could test more products if USDA would publish guidance designating biobased products for purchase. Figure 2 shows wheat starch-based plastic cutlery that DLA is testing for field use. The Department of the Interior (Interior) purchases biobased products directly from manufacturers and has requested that its contractors use biobased products in some services. In an effort to promote the use of biobased products in national parks, the National Park Service Facilities Management Division has covered the incremental costs for park purchases of biobased products over the use of traditional products; in 2003, the division provided $42,000 toward this promotion. For example, a wildlife reserve located in Alaska purchased a biobased deicer, made from corn and other agricultural products, to clear roads and sidewalks.
Unlike deicers that rely on salt or petrochemicals, biobased deicers can be formulated to have less impact on surface waters and vegetation. Several national parks also are buying biobased fuels and additives for their snowmobiles because they produce less toxic emissions. In addition, biobased hydraulic oils are being used in construction equipment at many park sites because spills of these lubricants pose less environmental risk and are less costly to clean up. Furthermore, the cafeteria-service contractor in Interior’s headquarters building in Washington, D.C., uses biobased plates and bowls, made primarily of potato starch and limestone. A pilot project undertaken with USDA’s Beltsville Agricultural Research Center demonstrated the ability to compost the plates and bowls along with cafeteria food waste. Figure 3 shows the application of a biobased deicer by an Interior employee. Figure 4 shows other biobased products used by Interior. In addition to its research activities to develop new uses of agricultural commodities for producing biobased products, USDA’s Agricultural Research Service is taking steps to use biobased products as well. For example, the agency’s Beltsville Agricultural Research Center in Maryland (the Center) spent about $8,500 in fiscal year 2003 for biobased products—primarily cleaners, hydraulic fluids, and lubricants used in its farm machinery. In addition, the Center uses biobased fuels, such as soy-based biodiesel, in this type of machinery. In fiscal year 2003, the Center purchased about $523,000 in biobased fuels. Center officials noted that the clean-up of accidental spills of biobased hydraulic fluids and lubricants is far less expensive than for the petrochemical alternatives because the biobased products are readily biodegradable.
These officials also expressed their belief that maintenance costs for equipment using these products have dropped, compared with the costs associated with using petroleum-based alternatives, although they noted that they have not thoroughly studied and documented this anecdotal observation. According to these officials, the Center hopes to increase biobased purchases by 70 percent in fiscal year 2004. In addition to the Center’s direct purchases of biobased products, some of its service contractors use biobased products when performing work at Beltsville. Center officials were unable to tell us how much their contractors spend on biobased products. Figure 5 shows some of the biobased products used at the Center. Figure 6 shows Center farm equipment in which biobased lubricants and fuels are used. USDA could more effectively marshal its resources to fulfill the farm bill biobased procurement requirements in a timely manner with a written, comprehensive management plan. Such a plan would define tasks and set milestones, identify available resources and expected outcomes, and describe how the department will coordinate its efforts to implement the plan. USDA did not have such a plan to guide its preparation of the proposed guidelines issued in December, and we believe that this lack of a plan may have contributed to delays in completing this segment of the work. Furthermore, except for the development of the model procurement program and voluntary recognition program, the agency does not have a comprehensive plan to guide its work to fulfill the farm bill’s other biobased requirements. Finally, USDA’s implementation of the biobased provisions could be accelerated if the department assigned more staff and financial resources to this work and gave it a higher priority.
USDA assigned primary responsibility for implementing the farm bill biobased procurement provisions to its Office of Energy Policy and New Uses (New Uses office), located within the Office of the Chief Economist. The conference report for the farm bill encouraged USDA to carry out these provisions under the aegis of the New Uses office. Among other things, this office is responsible for developing the procurement guidelines, including designating items for procurement, recommending practices for procurement and for certification by vendors of the percentage of biobased content in their products, and providing information on the availability, relative price, performance, and environmental and public health benefits of the items designated. The New Uses office also is primarily responsible for establishing the voluntary labeling program. In addition, USDA charged its Office of Procurement and Property Management (Procurement office) with developing the model procurement program and the voluntary recognition program. When we asked New Uses officials in May 2003—a year after farm bill enactment and 6 months after the legislative deadline for USDA’s completion of the biobased procurement guidelines—for their written management plan to implement the farm bill requirements, they indicated that they did not have a plan. When we then requested the agency’s timeline for complying with these requirements, these officials indicated that they did not have one either but offered to create one, which they provided to us several weeks later, in June 2003. While the timeline is a start, it falls short of being a comprehensive plan in a number of respects. First, the timeline anticipates delays in meeting milestones, stating “this is an optimistic schedule; various delays could push this date back as much as 6 months or more, which would similarly push back all following milestones.” Indeed, there have been delays.
For example, the timeline states that the proposed guidelines will be published in the Federal Register on October 1, 2003, but they were not published until December 19, 2003. According to USDA officials, additional delays, not anticipated in the timeline, could postpone some of the expected completion dates by as much as a year. These officials noted that these delays may result from the difficulty of working through the various concerns and conflicting views of the many stakeholders to this effort, a process that one New Uses official said was akin to “swimming in molasses.” A comprehensive plan would discuss possible sources of delay and how they might be mitigated. Second, New Uses staff developed the timeline without consulting with the USDA office responsible for developing the model procurement program and the voluntary recognition program—the Procurement office. When we met with officials from the Procurement office in September 2003, they said that they had not seen the timeline we received from the New Uses office in June 2003. When we showed these officials the timeline, they indicated disagreement with some of the dates related to their portion of the work. A comprehensive plan would discuss how the work should be coordinated among interested offices to avoid these types of misunderstandings. Third, the timeline does not describe how coordination will be done with other interested agencies. The farm bill requires that USDA consult with EPA, GSA, and NIST before developing the procurement guidelines. The legislation also requires USDA to consult with EPA in establishing the voluntary labeling program. As a practical matter, it would also be important for USDA to coordinate with the top four procuring agencies—DOD, DOE, NASA, and GSA—as well as other agencies such as the Office of the Federal Environmental Executive.
During our work, we contacted relevant officials representing these agencies; most expressed concern about what they considered to be a lack of timely and effective coordination on USDA’s part, although officials from some of the agencies seemed generally satisfied. Some of those who expressed concerns about coordination noted that USDA had been more attentive, relatively speaking, to interagency consultation in its earlier efforts to develop a list of biobased products for procurement under the 1998 executive order. In addition, a senior official of the Office of the Federal Environmental Executive said that USDA has not effectively coordinated with EPA and DOE officials responsible for programs that promote government purchases of environmentally friendly, recycled-content, or energy-efficient products. Specifically, this official noted that USDA does not have a clear understanding of how its biobased guidelines will affect regulations related to these other programs. In addition, this official opined that USDA is missing the opportunity to incorporate the lessons learned from the development of these other programs. In light of these concerns, we asked the New Uses staff for minutes or other written documentation of coordination meetings. These staff indicated that they had not documented internal or external coordination meetings in writing. A comprehensive plan would identify agencies with which coordination should occur, describe the frequency and manner of these contacts, and indicate how the results of these meetings would be documented. Fourth, the timeline does not describe how progress reporting will be done, what form these reports will take, or to whom these reports will be made.
New Uses officials told us that although they do not prepare regular progress reports, they do discuss the status of their work on the farm bill biobased provisions at weekly staff meetings with the Chief Economist and that this official periodically briefs the Secretary of Agriculture. In addition, these officials indicated that the status of their work is reported weekly to USDA’s farm bill implementation team and that this team also reports to the agency’s subcabinet officers. However, without a comprehensive management plan, including clearly delineated tasks and associated milestones, we believe it would be difficult for managers to put into context the relative progress being made on this work, to identify needed adjustments, and to hold accountable the officials responsible for its completion. A comprehensive plan would describe to whom the officials responsible for implementing the farm bill requirements would report and the frequency and manner of periodic progress reports. In contrast to the New Uses office’s lack of a management plan, the Procurement office prepared a detailed written management plan for conducting its portion of the work. This document contains the elements of a comprehensive plan, including identifying the work to be done, the associated tasks and milestones, available resources, anticipated costs, and the type and frequency of progress reporting. The plan also discusses the need for coordination with other USDA offices and federal agencies and how this coordination will be accomplished. Unfortunately, however, this plan applies only to limited aspects of the work USDA must complete to fulfill the farm bill requirements. The New Uses office is responsible for the majority of the work needed to fulfill these requirements; yet, as discussed, it lacks a comprehensive plan for completing this work.
We met with USDA officials, including New Uses staff, in February 2004 to discuss further the lack of a comprehensive management plan and other issues identified in our work and their significance. At that meeting, the New Uses staff provided us a document entitled, “Implementing Section 9002 of the Farm Bill.” This document was attached to an e-mail dated June 2002 that referred to the attachment as an “early draft implementation plan for Section 9002.” New Uses staff indicated that this document was evidence of their planning. However, our analysis of this document reveals that it is not a comprehensive management plan for implementing the farm bill requirements. First, the e-mail refers to the document as an early draft; apparently it never advanced beyond this stage. Second, the document lacks most elements of a comprehensive plan, such as a description of specific tasks, associated milestones, and the frequency, manner, and documentation of coordination meetings and periodic progress reporting. Instead, the document generally restates the farm bill requirements and the related conference report language, discusses some options for addressing these requirements, and presents a rationale for hiring a contractor with the requisite skills to implement the farm bill provisions under the management oversight of the New Uses office. Interestingly, although a contractor was not hired, the document notes that, “Contractor performance would be evaluated on an annual basis against pre-agreed-upon achievement milestones, with an opportunity to re-direct resources if necessary.” Thus, although the New Uses office apparently planned to use a list of specific tasks and associated milestones to judge the contractor’s progress and hold this firm accountable, the New Uses staff, who had to undertake this work without contractor assistance, did not develop a similar list of tasks and milestones to guide their work. 
As discussed, New Uses staff did not develop a list of milestones until the spring of 2003, and only at our request. Furthermore, at our February 2004 meeting, USDA officials expressed the view that although they had missed the farm bill biobased-related deadlines and most farm bill biobased procurement requirements remain unfulfilled, they had made noteworthy progress in publishing the proposed guidelines in December 2003. These officials discussed and subsequently provided us with a document listing work activities they had undertaken leading up to the publication of these guidelines. Among other things, the list notes that during the summer and fall of 2002, USDA developed the aforementioned “implementation plan,” held various internal meetings and external consultations, and began drafting the guidelines. Thereafter and throughout calendar year 2003, the list primarily shows that USDA went through several rounds of vetting and revising the guidelines, based on reviews done by the OMB and USDA’s Office of General Counsel. In addition, USDA officials noted that throughout this process their collective thinking evolved as to the form and content of the guidelines and included considerations such as (1) whether the list of biobased products that was being developed by the agency under the 1998 executive order had relevance in light of farm bill criteria for designating items and (2) whether a more simplified, less-burdensome approach regarding the content of the guidelines would still satisfy the legislation’s requirements. Finally, New Uses officials stated that the notice of proposed rulemaking containing the proposed guidelines was developed far more quickly—by a measure of years—than the rulemakings for two other programs that they view as relevant: the preferred procurement program for recycled products developed by EPA and the organic product labeling program developed by USDA. 
In citing the lack of a management plan, we are not questioning whether New Uses staff have worked hard or whether the complexity and novelty of the issues they faced were challenging. Rather, we are raising the question of whether the efficiency of this work has suffered because of the lack of a comprehensive plan to guide it. Clearly, the other USDA office involved in implementing the farm bill biobased requirements thought it was important to develop a thorough management plan to guide its portion of the work to ensure the efficient use of available resources and timely completion of the work. Furthermore, we are unable to comment on the relevance of comparing the development of various rulemakings cited by New Uses staff because such an analysis is outside the scope of our work. However, we believe there are probably lessons to be learned from EPA’s experience in developing the procurement program for recycled products that would benefit USDA’s efforts to develop a similar program for biobased products. Careful planning for a major initiative is a recognized good business practice. Furthermore, the need for adequate planning in federal programs is established in legislation such as the Government Performance and Results Act of 1993, Presidential executive orders, OMB circulars, and agency regulations to ensure that federal program managers know what they want to accomplish, how they are going to accomplish it, and when it will be accomplished. Without a comprehensive plan for implementing the farm bill requirements assigned to the New Uses office, including clearly defined tasks and milestones, it is difficult for USDA to set priorities, use resources efficiently, measure progress, and provide agency management a means to monitor this progress. Furthermore, the lack of a plan only serves to delay the agency’s completion of legislatively required actions. USDA did not allocate the staff needed to expedite the biobased procurement effort.
It assigned responsibility for this effort to two staff in the New Uses office who also had other responsibilities—in effect, they worked part-time on biobased procurement. While these New Uses officials had assistance from time to time from staff in other USDA offices, including staff who had been involved in the agency’s earlier efforts under the executive order, the availability of these staff was more ad hoc, subject to the demands of other work to which they were assigned. In addition, according to these New Uses officials, no one in their office had experience in writing rules, and they had to wait several months before staff from another office with this experience could be assigned to help write the notice of proposed rulemaking containing the guidelines for publication in the Federal Register. However, New Uses officials said that while they were waiting for this assistance, they were able to continue with other aspects of the work. Nevertheless, although these New Uses officials stated that they do not believe that the guidelines could have been issued in any case by the farm bill deadline, they believe that the lack of adequate personnel assigned specifically to this effort was a source of delay. Regarding funding, the farm bill did not specifically authorize any funds for developing the biobased procurement guidelines, and USDA did not provide any funds to the New Uses office for this effort from other programs. In essence, the New Uses office had to absorb these costs from its operating budget, and as a result, this office assigned only two staff to work part-time on meeting the farm bill requirements, as discussed. The New Uses office began its work soon after passage of the farm bill. However, the farm bill authorized $1 million annually for testing biobased products.
To date, the New Uses office has used these funds to contract with Iowa State University and NIST to develop testing protocols for biobased products and an information Web site on biobased products. Regarding development of the model procurement program and the voluntary recognition program, the Procurement office did not begin this work until the fall of 2003 because of a lack of identified funding for this purpose until that time. Specifically, in September 2003, USDA’s Rural Development Mission Area transferred about $500,000 to the Procurement office for this purpose. In addition, the Procurement office added about $25,000 of its own funds to this sum. This office used these funds to contract with DOE’s Oak Ridge National Laboratory and a consulting firm to, among other things, assist in developing the office’s comprehensive plan for implementing this portion of the work. Oak Ridge also will be involved in the plan’s implementation under the Procurement office’s direction. In addition, USDA transferred a staff member from its Office of Small and Disadvantaged Business Utilization to the Procurement office to oversee this effort. While Procurement office staff indicated that the funds identified to date should carry them through the end of fiscal year 2004, they said additional funding will be needed in the future to continue their work on the model procurement program. For example, the staff member who oversees this effort estimated that about $450,000 will be needed in fiscal year 2005 and about $500,000 will be needed in fiscal year 2006. According to USDA staff who worked on developing a biobased products list under the 1998 executive order, assigning responsibility for developing the farm bill biobased procurement guidelines to the New Uses office should have given this effort more agency attention because this office reports to the Chief Economist, who in turn reports directly to the Secretary of Agriculture.
Previously, work on developing a list of biobased products was split among several line agencies and offices that do not enjoy this direct access to the Secretary, including the Agricultural Research Service; the Cooperative State Research, Education, and Extension Service; and the Procurement office. However, despite this expectation of greater agency attention, USDA has made limited progress in fulfilling the farm bill requirements, and several USDA officials indicated that this work is not a high priority relative to other agency initiatives. In addition, stakeholders outside of USDA also believe that the agency has not given sufficient management attention to the fulfillment of the farm bill biobased provisions. For example, representatives of commodity associations and manufacturers stated that although they had hoped for timely and effective procurement guidelines from USDA, the issuance of guidelines has been delayed because this effort is not a priority for the agency. In our earlier work related to USDA’s implementation of the 1998 executive order, USDA officials indicated that they had made limited progress in publishing a list of biobased products for procurement because of a lack of dedicated resources and higher agency priorities. Although USDA’s issuance of federal procurement guidelines for biobased products, as well as its establishment of a voluntary labeling program and voluntary recognition program, is now legislatively required, this work still suffers from a lack of adequate resources and management attention. Most federal agencies, testing organizations, commodity associations, and manufacturers we spoke with generally believe that testing biobased products for content and performance is appropriate, but they question the usefulness and costs of life-cycle analysis.
According to officials from the top four purchasing agencies and the two testing organizations, content testing is important to ensure that products meet minimum biobased content specifications, and performance testing is a key factor in making purchasing decisions. These officials generally believe that manufacturers should bear the costs of these tests, if they want to sell to the federal government. Biobased manufacturers generally agree with the need for these tests and with their responsibility for bearing at least some of the associated costs. However, some manufacturers said that they should be able to self-certify the biobased content of their products in lieu of content testing, based on their knowledge of their manufacturing processes. Regarding life-cycle analysis, most of the agencies and manufacturers questioned the need for doing this analysis. USDA is required to consider life-cycle costs in determining whether to designate an item for preferred procurement and has indicated that if manufacturers voluntarily provide life-cycle cost information it may help speed the designation process. Manufacturers would only be required to provide this information under the rule as proposed if a procurement official requested the information. However, the agencies generally did not believe that life-cycle information would be useful for purchasing decisions because procurement staff would find the analysis too detailed to follow and generally not useful without comparative information on petroleum-based products; USDA does not expect to provide such comparative information. Manufacturers generally agreed with this view, noting that the cost of life-cycle analysis is high—as much as $8,000 for a single product—and they questioned whether they alone should bear this cost in order to make sales to the federal government.
The farm bill authorized USDA to use $1 million per year of the Commodity Credit Corporation’s funds from fiscal year 2002 through fiscal year 2007 for testing of biobased products. Initially, as discussed in its proposed guidelines, USDA plans to use these funds to focus on gathering the necessary test information on a sufficient number of products within an item (a generic grouping of products) to support regulations to be promulgated to designate an item or items for preferred procurement. However, the farm bill also allows these funds to be used to support contracts or cooperative agreements with entities that have experience and special skills to conduct such testing. The $1 million for fiscal year 2002 was used for agreements with testing organizations to establish standardized tests for determining the biobased content and life-cycle analysis characteristics of biobased products. Part of this money also was used to develop a biobased products information Web site. USDA views the establishment of this Web site as integral to fulfilling the farm bill requirement for providing information on products. USDA is using the $1 million for fiscal year 2003 to evaluate selected products using the standardized tests to establish benchmarks for designating items for preferred procurement. The agency is also using some of this money to complete and maintain the information Web site. USDA anticipates that the $1 million for fiscal year 2004 will be used to cost-share with manufacturers some of the expenses associated with testing products in order to develop the information needed to designate items for preferred procurement. In general, USDA plans to bear the cost of any testing that may be needed to establish baseline information for designating items. Regarding this testing, in its proposed guidelines USDA indicates that it may accept cost sharing from manufacturers or vendors for this testing to the extent consistent with USDA product testing decisions.
However, during this period, USDA will not consider cost sharing in deciding what products to test. When USDA has concluded that a critical mass of items has been designated, it will exercise its discretion, in accordance with competitive procedures outlined in the proposed guidelines, to allocate a portion of the available USDA testing funds to give priority to testing products for which private firms provide cost sharing. At that point, cost-sharing proposals would be considered first from small and emerging private business enterprises. If funds remain to support further testing, proposals from larger firms would also be considered. USDA’s proposed guidelines would require manufacturers and vendors to provide relevant product characteristics information to federal procuring agencies on request. For example, under the proposed guidelines, manufacturers would have to be able to verify the biobased content of their products using a specified standard. In addition, federal agencies would have to rely on third-party test results showing a product’s performance against government or industry standards. Furthermore, manufacturers would have to use NIST’s Building for Environmental and Economic Sustainability (BEES) analytical tool to provide information on life-cycle costs and environmental and health benefits to federal agencies, when asked. USDA recommends that federal agencies affirmatively seek this information. According to officials we contacted from the top four purchasing agencies and the two testing organizations—Iowa State University and NIST—content and performance testing are necessary to help federal agencies make purchasing decisions. Content testing is necessary to ensure that products meet the biobased content specifications for designated items.
Furthermore, the results of performance testing are a key consideration, along with product availability and price, for federal procurement officials when selecting a product for purchase, whether the product is biobased or not. These agency and testing organization officials also believe that manufacturers should bear the costs of content and performance testing because these tests are considered normal business costs associated with marketing products. Ten of the 15 biobased manufacturers we contacted agree that content and performance testing are necessary. Two other manufacturers agreed that one of these tests was necessary, but they did not agree on which test. Most of these manufacturers also acknowledged their responsibility for bearing at least some of the costs for these tests. However, some of the manufacturers believe that they should self-certify content, based on their knowledge of their manufacturing process, including the feedstock used. These manufacturers suggested that USDA could conduct random content testing to verify these certifications. Similarly, representatives from the Biobased Manufacturers Association stated that they believe, based on input from their member companies, that manufacturers should self-certify the content of their products. These association officials suggested that content testing should only be required when there is a challenge to these certifications. Most of the manufacturers believed that the requirement for providing performance testing information is reasonable and that, because the cost of this testing is an expected cost of doing business, they should bear this expense. Officials representing the top four procurement agencies, manufacturing companies, the Biobased Manufacturers Association, and commodity associations generally questioned the need for life-cycle analysis of biobased products.
Under USDA’s proposed guidelines, manufacturers are invited to submit their products voluntarily to a life-cycle analysis using the BEES analytical tool developed by NIST, so that USDA can obtain information it is required to consider in designating items for preferred procurement. However, once an item has been designated, the manufacturer would have to use BEES to provide life-cycle cost information for its particular product, if asked to do so by a procuring agency. While some manufacturers indicated that they do not object to performing life-cycle analysis per se, and a few indicated that they had already done such an analysis in order to use the results in marketing their products, these stakeholders questioned USDA’s decision to rely solely on one analytical tool—BEES—to perform this analysis. Other stakeholders pointed out that any life-cycle analysis results for biobased products would be of limited usefulness without comparable results for similar products that are petroleum based. Stakeholders voiced the following opinions regarding whether life-cycle analysis results are, in general, useful and/or whether USDA should rely solely on the BEES analytical tool for doing this analysis: Many of the officials representing manufacturers and commodity associations believe that federal purchasers will not find life-cycle analysis results for biobased products to be useful unless they have comparable results for competing petroleum-based products. For example, if federal purchasing officials have information on the economic and environmental impacts of a biobased product, but do not have similar information for its petroleum-based alternative, these officials will not be able to determine if the higher initial purchase cost of the biobased product is offset by its lower maintenance and disposal costs and/or lower environmental impacts. 
Even officials from USDA and the testing organizations acknowledged that the usefulness of BEES results for biobased products would be greater if similar results were available for petroleum-based alternatives. These officials said that although the farm bill does not address life-cycle analysis for petroleum-based products, they hope that manufacturers of these products will submit them to BEES analysis voluntarily so that comparable data are available. However, other stakeholders questioned why a manufacturer of a petroleum-based product would incur this expense voluntarily, especially if the BEES results could cast the manufacturer’s product in an unfavorable light. USDA officials added that procuring agencies could, if they choose, also require manufacturers of petroleum-based products to provide this information in order to make sales to the agencies, but other stakeholders opined that the agencies are not likely to do so because they do not now seek this type of information. USDA officials also noted that, to ensure a level playing field, it is important that manufacturers and vendors use the same life-cycle analysis tool so that results are consistent and comparable. Many manufacturer and commodity association officials stated that the cost of the life-cycle analysis was too high for most small manufacturers to bear. According to NIST, the cost of testing a product using the BEES analytical tool is about $8,000. The cost of subsequent testing of related products from the same manufacturer is about $4,000 per product tested. For small manufacturers with fewer than 500 employees, the cost of testing is $4,000 for the first product and $2,000 for each additional product, assuming similar processing steps and the continued availability of federal cost-share assistance. Some USDA officials expressed the view that these costs are not exorbitant, adding that the cost of content testing is even lower, falling in the range of a few hundred dollars. 
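The tiered testing costs described above reduce to simple arithmetic. As an illustration only (the function name is ours, and the dollar figures are the approximate NIST estimates cited above, not a published fee schedule):

```python
def bees_testing_cost(num_products, small_manufacturer=False):
    """Approximate total BEES testing cost for one manufacturer.

    Per the NIST figures cited: about $8,000 for the first product and
    $4,000 for each additional related product; for small manufacturers
    (fewer than 500 employees, assuming similar processing steps and
    continued federal cost-share assistance), about $4,000 for the first
    product and $2,000 for each additional one.
    """
    if num_products <= 0:
        return 0
    first, additional = (4000, 2000) if small_manufacturer else (8000, 4000)
    return first + additional * (num_products - 1)

# A small manufacturer testing three related products:
print(bees_testing_cost(3, small_manufacturer=True))  # 4000 + 2 * 2000 = 8000
```

Under these figures, a small manufacturer testing three related products would pay about as much as a larger firm pays for a single product, which helps explain the stakeholder concern about costs for firms with many products.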
Federal procurement officials indicated that life-cycle analysis is generally not an important factor in procurement decisions. A product’s price, availability when needed, and ability to meet performance specifications are the most important considerations, according to these officials. In addition, a number of manufacturer and commodity association stakeholders questioned whether procurement officials would even understand the significance of the results of a life-cycle analysis. However, USDA officials noted that the impetus to purchase biobased products also should come from the agency program officials who generate the requirements for the goods and supplies that procurement staff purchase. With this in mind, the Procurement office’s plan for developing the model procurement program includes major tasks related to training and outreach to groups other than just the procurement staff. If these other groups who generate the purchase requirements also understand the potential benefits of biobased products and the legislative requirements for giving these products preference in federal purchasing, then they may stipulate in their purchase requests that procurement staff buy biobased alternatives. Similarly, these groups may stipulate in service contracts that firms purchase and use biobased products. Some manufacturers, citing the detailed nature of the BEES analysis, expressed concerns that trade secrets related to their product could be compromised. However, according to a NIST official primarily responsible for adapting the BEES analytic tool for evaluating biobased products, the information submitted for BEES analysis will not be subject to Freedom of Information Act requests. This official also indicated that contracts made with third-party testing organizations for conducting BEES analysis will include language imposing penalties for improperly divulging product information. 
In addition, this official said that life-cycle information generated for designating items through the testing of branded products will be aggregated in such a way as not to reveal the “recipe” (contents and structure) of a given product. USDA has yet to fulfill many of the farm bill biobased procurement requirements. Among other things, USDA has not issued final procurement guidelines that designate items for preferred procurement. USDA’s work has been slowed by the lack of a comprehensive management plan outlining the tasks, milestones, resources, coordination, and reporting needed for its completion. In addition, USDA has not assigned sufficient staff and financial resources or given sufficient priority to this effort to ensure its timely completion. Because other federal agencies’ procurement of biobased products largely hinges on USDA’s fulfillment of these farm bill requirements, USDA action is critical. To ensure USDA’s timely implementation of the farm bill biobased purchasing requirements, we recommend that the Secretary of Agriculture take the following three actions: Direct the Office of Energy Policy and New Uses to develop and execute a comprehensive management plan for completing this work. Among other things, such a plan should discuss the tasks, milestones, resources, coordination, and reporting needed for completing this work. Clearly identify and allocate the staff and financial resources to be made available for completing this work. Clearly state the priority to be assigned to this work. We provided a draft of this report to USDA for review and comment. We received written comments from the agency’s Chief Economist, which are presented in appendix V. USDA also provided us with suggested technical corrections, which we have incorporated into this report as appropriate. 
USDA indicated that it believes the report does not present a complete and balanced view of the progress it has made in implementing the farm bill biobased procurement provisions. Specifically, USDA said that the report emphasizes negative interpretations without reflecting the very considerable progress achieved, or how favorably that progress compares with other government efforts to develop preference programs, such as EPA’s program for the purchase of recycled products. We believe the report provides a fair and accurate description of the farm bill requirements and USDA’s efforts to comply with these requirements to date. The scope of our work did not include a comparison of USDA’s efforts to implement these requirements with the efforts of other agencies to implement other procurement preference programs. However, we have previously reported on EPA’s efforts to implement legislative requirements for the purchase of recycled products, and in doing so we raised issues similar to those we are raising with USDA in this report. Namely, we reported that EPA lacked a comprehensive, written strategy for completing the work and had not given the work adequate staffing, resources, and priority. Regarding our recommendation that the New Uses office develop and execute a comprehensive management plan for completing the work needed to fulfill the farm bill biobased purchasing requirements, USDA indicated disagreement. Specifically, USDA said it does not believe such a plan would have accelerated its work on the proposed rule issued in December 2003, given the complexity of the issues that had to be resolved and the substantial amount of consultation across federal agencies and within USDA that was a necessary component of developing this rule. 
We disagree and continue to believe that USDA should develop a comprehensive, written plan that discusses, among other things, the tasks, milestones, resources, coordination, and reporting needed for completing the work necessary to fulfill the farm bill requirements. Such a plan would also serve as a basis for communicating USDA’s progress with the Congress and others, including the department’s senior management. Furthermore, we believe that factors such as the complexity and breadth of the issues to be considered, the internal and external consultation necessary, and the farm bill’s ambitious time frames for the completion of this work underscore the need for a comprehensive, written plan or strategy for the completion of this work. Finally, we note that another USDA office, the Office of Procurement and Property Management, developed a comprehensive, written plan for the completion of its limited portion of the biobased work. Among other things, this plan discusses the need for consultation, identifies the internal and external stakeholders to consult with, and enumerates specific tasks related to this consultation. Regarding our recommendations that USDA clearly identify and allocate the staff and financial resources to be made available for implementing the farm bill biobased purchasing requirements and clearly state the priority to be assigned to this work, USDA did not address these recommendations directly. However, USDA said that it would draw on GAO’s review and recommendations as it approaches the development of subsequent proposed rules for designating items and for development of the labeling program. We believe that USDA should be more proactive in this regard and make clear the staff and financial resources to be made available for completing this work and the priority to be assigned to this work. These matters could also be addressed in a comprehensive, written plan or strategy for completing the work. 
We also obtained comments from the DLA, DOE, Interior, EPA, GSA, NASA, NIST, and the Office of the Federal Environmental Executive on excerpts of the report that were relevant to their agencies. Their clarifying comments were incorporated into this report, as appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. We will then send copies to interested congressional committees; the Secretary of Agriculture; the Secretary of Energy; the Director, OMB; and other interested parties. We will make copies available to others on request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-3841. Key contributors to this report are listed in appendix VI. At the request of the Ranking Democratic Member of the Senate Committee on Agriculture, Nutrition, and Forestry, we reviewed issues related to the federal government’s progress in implementing the biobased purchasing provisions of the Farm Security and Rural Investment Act of 2002 (the farm bill). Specifically, we agreed to examine (1) actions that the U.S. Department of Agriculture (USDA) and other agencies have taken to carry out the farm bill requirement to purchase biobased products; (2) additional actions that may be needed to enhance implementation of this requirement; and (3) views of agencies, manufacturers, and testing organizations on the need for and costs of testing biobased products. To determine the actions USDA has taken to carry out the farm bill requirement for purchasing biobased products and to determine the additional actions that may be needed to enhance implementation of this requirement, we conducted interviews with USDA officials in the Office of Energy Policy and New Uses (New Uses office) and analyzed documents they provided to us. 
We also contacted officials in other USDA offices, including the Agricultural Research Service; Cooperative State Research, Education, and Extension Service; Office of General Counsel; and the Office of Procurement and Property Management (Procurement office). In addition, we spoke with officials at Iowa State University and the Department of Commerce’s National Institute of Standards and Technology (NIST) who are developing testing standards for biobased products under agreements with USDA. Furthermore, we reviewed USDA’s Guidelines for Designating Biobased Products for Federal Procurement, a proposed rulemaking published in the Federal Register on December 19, 2003. Related to this rulemaking, we attended two public meetings held by USDA in Washington, D.C.: a biobased workshop held on October 28, 2003, to discuss USDA’s use of biobased products and the status of the proposed rulemaking and a meeting on January 29, 2004, to allow the public an opportunity to comment on the proposed rule. To determine the actions that other federal agencies have taken to carry out the farm bill requirement to purchase biobased products, we interviewed officials at the top four procuring agencies—the Department of Defense (DOD), the Department of Energy (DOE), the General Services Administration (GSA), and the National Aeronautics and Space Administration (NASA)—and analyzed the documents that they provided to us. These agencies account for the majority—about 85 percent—of the federal government’s purchasing; the DOD alone accounts for about 67 percent of federal purchasing. The officials we contacted included program staff who identify purchasing requirements and procurement staff who make the purchasing decisions, including the selection of vendors and products used. They also included environmental management or health officials who may be responsible for promoting the use of biobased products at their agencies. 
We also interviewed officials at DOE, the Defense Logistics Agency, the Environmental Protection Agency (EPA), GSA, NASA, the Office of Management and Budget’s (OMB) Office of Federal Procurement Policy (OFPP), and the White House’s Office of the Federal Environmental Executive to determine the extent to which USDA had coordinated with these agencies in implementing the farm bill biobased purchasing requirement. To obtain the views of federal agencies, testing organizations, manufacturers, environmental groups, consumer groups, an advocacy group, and commodity associations on the need for and costs of testing biobased products, we contacted the following entities: Federal agencies: DOD, DOE, EPA, GSA, NASA, OFPP, and the White House’s Office of the Federal Environmental Executive. Testing organizations: Iowa State University and NIST. Manufacturers: Biobased Manufacturers Association and 15 biobased products manufacturers from a list of member companies provided by the association. The manufacturers chosen represent a cross section of biobased products—at least one producer in each of the 11 biobased item categories proposed by USDA—and feedstocks (e.g., corn, soybeans, and vegetable oils). They are also geographically dispersed: Arizona, California, Florida, Iowa, Illinois, Maryland, Massachusetts, Minnesota, Ohio, Texas, Washington, and Wisconsin. Environmental groups: Environmental and Energy Study Institute and Green Seal. Consumer groups: Center for the New American Dream and Consumer’s Choice Council. Advocacy group: New Uses Council. Commodity associations: American Soybean Association, National Corn Growers Association, and the United Soybean Board. Most of our contacts with these entities occurred prior to USDA’s publication of its guidelines for designating biobased products for procurement in December 2003, although we also obtained information from some of these contacts after this document was published. 
In either case, in our interviews with these sources we sought their views on what the proposed guidelines should contain. In addition, for manufacturers of biobased products, we sought information on their experiences in selling to the government, including any impediments encountered. We also sought their views on the types of testing that should be done on biobased products; the associated costs of these tests; how testing costs should be paid; and how available federal funding for testing should be used. We summarized and contrasted the views of the various stakeholders. In general, our work focused on biobased products other than biofuels such as ethanol, biodiesel, and biogas because provisions to promote the production of biofuels are addressed elsewhere in the farm bill. However, some mention of biofuels was unavoidable in discussing the nature and importance of biobased products, including their effect on carbon in the environment and their potential economic impact on farms and rural communities. We conducted our review from May 2003 through February 2004 in accordance with generally accepted government auditing standards. The following list provides the names, addresses, and Web sites for sources of information on biobased products used in our work. This appendix summarizes key provisions of USDA’s notice of proposed rulemaking, Guidelines for Designating Biobased Products for Federal Procurement, published in the Federal Register (69 Fed. Reg. 3533) on December 19, 2003. Specifically, table 2 describes proposed biobased product categories and the items to be included in each as discussed in the preamble to the proposed guidelines. Table 3 enumerates other key provisions proposed in the notice. The following are GAO’s comments on the U.S. Department of Agriculture’s letter dated March 23, 2004. 1. On page 29 of the draft report (now p. 
28), we state USDA’s view that its progress compares favorably with EPA’s implementation of its program for the purchase of recycled products. We also state that a comparison of USDA’s efforts to implement the biobased procurement provisions in section 9002 of the farm bill with government efforts to develop other preference programs, such as EPA’s program for the purchase of recycled products, was outside the scope of our work. However, we have previously reported on EPA’s efforts to implement this program. Specifically, in May 1993, we reported that EPA’s efforts were slowed by a lack of a comprehensive, written strategy for completing this work. Among other things, we noted that such a strategy would lay out funding and staff needs, goals and milestones, information and coordination needs, and a systematic approach to selecting items for procurement guidelines. We also noted that this strategy would serve as a basis for communicating EPA’s progress to the Congress and others, including the agency’s senior management. In addition, we reported that EPA’s efforts to fulfill the legislative provisions for the purchase of recycled products lacked priority and adequate staffing and resources, and because of the agency’s slow progress in identifying recycled products for preferred procurement, other federal procuring agencies had made little progress in developing their own affirmative programs for the purchase of these products. The conference report for the farm bill notes that the new program for the purchase of biobased products by federal agencies is modeled on the existing program for the purchase of recycled materials. Presumably, there are lessons to be learned from EPA’s experience in implementing the recycled program. However, more than 10 years after the issuance of our earlier report, we are now raising similar concerns regarding USDA’s implementation of the farm bill biobased procurement provisions. 2. 
USDA is correct in stating that we do not offer an opinion on whether the farm bill time frame for full implementation of the biobased procurement program is realistic. This is a matter that USDA must address with the Congress. However, we do offer our views on how this implementation process might be accelerated. Regarding the specific factors that USDA cites as slowing this process, we believe these factors are adequately discussed in the draft report. On page 29 (now p. 28), we acknowledge that the complexity and novelty of the issues that USDA faces are challenging. On page 26 (now p. 25), we state that the farm bill requires USDA to consult with other agencies, including EPA, GSA, and NIST. On page 28 (still p. 28), we also state that USDA provided us a list of work activities indicating that it conducted external consultations with other agencies during the summer and fall of 2002. On page 30 (still p. 30), we state that the farm bill did not specifically authorize funds for developing the biobased procurement guidelines. And on page 17 (now p. 16), we note that a number of rulemakings will be necessary to fulfill the farm bill biobased purchasing requirements and that the issuance of these rulemakings will take years to complete. We also describe on that page the steps in the rulemaking process. Furthermore, we make other statements in the draft report that reflect the difficulties USDA faces. For example, on page 4 (now p. 5) we state that USDA faces a formidable challenge in implementing the farm bill provisions for purchasing biobased products. On page 14 (still p. 14), we state that USDA was faced with an ambitious task regarding these provisions. And on page 25 (still p. 25), we note that USDA officials said that delays may result from having to work through the various concerns and conflicting views of the many stakeholders to this effort, a process that one official described as akin to swimming in molasses. 3. 
We believe that factors such as the complexity and breadth of the issues to be considered, the internal and external consultation necessary, and the ambitious time frames for completing the work underscore that a comprehensive, written plan or strategy for the completion of this work was and is necessary. 4. We did not ask for “a particular style of plan.” Beginning with our entrance meeting with USDA officials in May 2003, we asked for a copy of any written plan these officials had prepared that described how they intended to complete the work necessary to fulfill the farm bill biobased requirements. At that meeting, officials from the Office of Energy Policy and New Uses (New Uses office) stated that they did not have a written plan for this work, although the work had been ongoing for nearly a year. Approximately 9 months later, at our exit meeting with USDA officials in February 2004, officials from the New Uses office provided us a draft document dated June 2002 as evidence of their planning. In our view, this document falls far short of being a comprehensive plan for completing this work, as discussed on pages 27 to 28 of the draft report (still pp. 27 to 28). New Uses staff neither mentioned the existence of an “adaptive plan composed of several parts” during our work—May 2003 through February 2004—nor did they provide us documentation of this plan. In contrast, another USDA office, the Office of Procurement and Property Management (Procurement office), developed a comprehensive, written plan for the completion of its limited portion of the biobased work, which it provided to us in January 2004, soon after it identified funds to begin this work. 5. After officials of the New Uses office told us in May 2003 that they did not have a written plan, we asked these officials if they had developed a list of tasks and associated milestones for their work. These staff indicated they had not done so, but would create this list for us. 
At the time, these staff indicated it would take them 2-3 weeks to develop this information. We received this timeline about 3 weeks later, in early June 2003. 6. Other than the plan prepared by the Procurement office for its limited portion of the work, we have seen no evidence that USDA— specifically the New Uses office—has a comprehensive, written plan for completing this work. 7. We agree that in developing a plan it is not possible to anticipate every exigency. However, agencies frequently prepare “formal definitive” plans without being able to anticipate every possible exigency, including planning documents related to the Government Performance and Results Act, such as strategic and annual performance plans, and planning documents related to the day-to-day activities of agencies, such as the implementation of programs, legislative initiatives, and other activities. USDA appears to draw a distinction between consultations and planning—that consultations must precede planning. We believe that the need for consultations, including how these consultations will be done and documented, should be addressed along with other considerations in a comprehensive, written plan for completing the work needed to fulfill the farm bill biobased requirements. We note that the Procurement office addressed the need for consultations in the management plan it prepared for completing its portion of the biobased work. 8. On page 28 of the draft report (still p. 28), we state that USDA provided us a list of work activities indicating that it conducted external consultations with other agencies during the summer and fall of 2002. During our work, we discussed coordination issues with the agencies cited by USDA, as noted on page 26 of the draft report (now pp. 25 to 26). In light of comments received from these other agencies on relevant excerpts of the draft report, the report has been clarified to identify some of the concerns these agencies cited. 9. 
On pages 26 to 27 of the draft report (now p. 26), we state that the New Uses staff reports to the Chief Economist in periodic staff meetings and that this official periodically briefs the Secretary of Agriculture. The report has been clarified to reflect the frequency of these meetings and other reporting cited by USDA. However, we continue to believe that without a comprehensive, written plan for completing the biobased work, it is difficult for managers to put into context the relative progress being reported, to identify needed adjustments, and to hold accountable the officials responsible for the work’s completion. 10. The draft report does not suggest that there were long periods when work was not progressing on the implementation of the biobased procurement program. However, the draft report does raise issues about whether this work has progressed efficiently in the absence of a comprehensive, written plan for its completion and a commitment of sufficient staff and financial resources and management attention. 11. The report has been adjusted to make clear that the delay in receiving assistance from another office to help draft the Federal Register notice did not prevent other aspects of the work from proceeding. 12. On page 28 of the draft report (still p. 28), we state that USDA provided us a list of work activities indicating that it conducted external consultations with other agencies during the summer and fall of 2002. 13. On page 44 of the draft report (now p. 43), we state that most of our audit work was done prior to USDA’s publication of its proposed rule in December 2003. This was a function of our need to be responsive to our requester’s time frames for completing the work and delays in USDA’s issuance of the proposed rule. 
However, subsequent to the rule’s publication, we also obtained relevant information and views from some contacts, including commentary on the proposed rule posted in newsletters or on Web sites of organizations such as the Biobased Manufacturers Association. In addition, we attended the public meeting held on January 29, 2004, at USDA headquarters in Washington, D.C., in which stakeholders orally offered comments on the rule. 14. The public comment period closed on February 17, 2004. USDA is currently analyzing and summarizing these comments. Eventually, USDA will discuss these comments in its final rulemaking for the biobased procurement guidelines. 15. The report does not criticize the testing of life-cycle cost analysis and environmental and health effects as part of the proposed rule. The report reflects the views of a variety of relevant stakeholders regarding this and other testing issues. In a number of cases, these stakeholders offered negative or critical views, or otherwise expressed concerns. The report accurately reflects these views. 16. In reviewing a copy of the Senator’s letter, we also note that he expressed several concerns. For example, he stated that USDA is many months behind the schedule Congress laid out for biobased product purchasing in the farm bill. Regarding testing, the Senator said that the BEES model should probably not be the only model allowed or required for life-cycle analysis of biobased products; he noted that the statute does not require it and that agencies themselves could determine which tests are necessary and incorporate them into their procurement guidelines. In addition, the Senator said that this information would be of little value to procurement agents if they do not have comparable life-cycle analysis results for petroleum-based counterparts. 
Furthermore, the Senator expressed concerns about the potential cost of testing on small and large businesses, suggested that biobased content be self-certified, and noted that agencies could require BEES analysis or other third-party testing in the event it is warranted, such as when the veracity of a manufacturer’s claim is in dispute. 17. The report accurately states that USDA has fallen short in implementing the farm bill biobased purchasing requirements. The report accurately describes the content of the proposed rule, including what is addressed specifically in the proposed guidelines or in the preamble to these guidelines. It is factual that the proposed guidelines do not designate any items for preferred procurement or include the voluntary labeling program. 18. The report states the time likely to be required to designate the items that USDA identified in the preamble to the proposed rule. This information is based on a timeline furnished by USDA. 19. On pages 18 to 22 of the draft report (now pp. 18 to 21), we accurately reflect the views of some agency officials who believe that the advantages of biobased hydraulic fluids and lubricants are (1) the reduced cost and effort of cleanups of product spills, as compared with fossil resource-based alternatives and/or (2) the ease of disposal because these products are biodegradable. However, as noted on page 22 (fnt. 29) of the draft report (now p. 21, fnt. 31), we discussed these views with EPA. The Director of EPA’s Oil Spill Staff stated that the agency had not made a specific ruling regarding how spills of biobased hydraulic fluids and lubricants should be handled; in the absence of a ruling, this official said that EPA does not make a distinction between spills of these biobased products and their petroleum-based alternatives. In addition to the individuals named above, Jeanne Barger, Rani Chambless, and Carol Herrnstadt Shulman made key contributions to this report. 
Important contributions were also made by Oliver Easterwood, Lynn Musser, Anne Stevens, Amy Webbink, and Linda Kay Willard. Federal Procurement: Government Agencies' Purchases of Recycled-Content Products. GAO-02-928T. Washington, D.C.: July 11, 2002. Federal Procurement: Better Guidance and Monitoring Needed to Assess Purchases of Environmentally Friendly Products. GAO-01-430. Washington, D.C.: June 22, 2001. Solid Waste: Federal Program to Buy Products With Recovered Materials Proceeds Slowly. GAO/RCED-93-58. Washington, D.C.: May 17, 1993. Solid Waste: Progress in Implementing the Federal Program to Buy Products Containing Recovered Materials. GAO/T-RCED-92-42. Washington, D.C.: Apr. 3, 1992.
The federal government spends more than $230 billion annually for products and services to conduct its operations. Through its purchasing decisions, it has the opportunity to affirm its policies and goals, including those related to purchases of biobased products, as set out in the 2002 farm bill. A biobased product is a commercial or industrial product, other than food or feed, that is composed, in whole or in part, of biological products, renewable domestic agricultural materials, or forestry materials. GAO examined (1) actions the U.S. Department of Agriculture (USDA) and other agencies have taken to carry out farm bill requirements for purchasing biobased products, (2) additional actions that may be needed to implement the requirements, and (3) views of stakeholders on the need for and costs of testing biobased products. GAO interviewed officials from USDA, major procuring agencies, testing entities, interested associations, and 15 manufacturers of biobased products. USDA and other federal agencies' actions to implement the farm bill requirements for purchasing biobased products have been limited. USDA issued proposed procurement guidelines in December 2003--more than 1 year past the deadline for final guidelines; however, these guidelines do not fully address the farm bill requirements for designating items for purchase and recommending procurement practices. USDA expects to issue final guidelines by April 2004 and a blueprint for the model procurement program by September 2004, but it anticipates that designation of existing items will take years to complete, possibly until 2010. In addition, new items will enter the market, requiring further designations. Meanwhile, purchasing agencies do not yet have a basis for planning their own procurement programs and, as a result, have made only limited purchases of biobased products.
USDA could accelerate its implementation of the farm bill requirements by developing a comprehensive management plan for this work and by making the work a higher priority. The lack of a management plan describing the tasks, milestones, resources, coordination, and reporting needed to complete this work has slowed USDA in issuing the procurement guidelines. For example, USDA developed a list of milestones only after GAO requested such a list; even then, this list was informal, primarily reflecting the thinking of a few officials. Without a plan, USDA will find it difficult to set priorities, use resources efficiently, measure progress, and provide agency management a means to monitor this progress. According to stakeholders, USDA should make this work a higher priority to speed its completion. Without a sense of priority, USDA's efforts to fulfill farm bill requirements have not had adequate staff and financial resources. Stakeholders GAO spoke with generally believed that USDA's proposals for testing a biobased product's content and performance are appropriate and that manufacturers should bear at least some of the costs. However, stakeholders generally questioned the need for doing life-cycle analysis of a product's long-term costs and environmental impacts.
The federal government manages more than 680 million acres of land in the United States, including lands in national forests, grasslands, parks, refuges, reservoirs, and military bases and installations. Of the total federal lands, BLM and the Forest Service manage almost 450 million acres for multiple uses, including timber harvest, recreation, grazing, minerals, water supply and quality, and wildlife habitat. BLM’s 12 state offices manage more than 260 million acres in 12 western states, including 82 million acres in Alaska, while the Forest Service’s 123 administrative offices manage more than 190 million acres across the nation. As shown in figure 1, the majority of federal lands are located in the western half of the country. The remaining lands are managed by the following agencies for different purposes: Interior’s National Park Service manages more than 350 national parks, monuments, seashores, battlefields, preserves, and other areas on 84 million acres of federal land; the U.S. Fish and Wildlife Service manages more than 540 national wildlife refuges and 37 large multiple-unit wetland management districts on more than 96 million acres of land; and Reclamation manages about 8.5 million acres of land associated with water projects in 17 western states. DOE manages almost 2.4 million acres of land, making it the fourth-largest federal landowner after Interior, USDA, and DOD. It operates 30 major facilities on land holdings in 34 states. The buffer zones surrounding many of these facilities consist of forests and rangelands. DOD has numerous Army, Air Force, and Navy installations on 29 million acres of land in many states, while the Corps, like Reclamation, manages 12.7 million acres of land associated with water projects in many states. Most rangelands—primarily grasslands and shrublands—used to raise livestock in the United States are privately owned, and as a result, only a portion of livestock is raised on federal land. 
In 2004, the livestock industry had almost 95 million cattle and 989,460 cattle and calf operations, which include cattle raised for beef as well as milk. Regionally, the eastern states had almost 590,000 cattle and calf operations, of which almost 440,500 were beef cow operations; the states in the Great Plains (Nebraska, Kansas, Oklahoma, North and South Dakota, and Texas) had 292,300 cattle and calf operations with 253,000 beef cow operations; and the 11 western states had more than 106,000 cattle and calf operations with about 80,400 beef cow operations. In contrast, the number of livestock operations with BLM and Forest Service grazing permits and leases for cattle, sheep, and other livestock totaled more than 23,000. Livestock operations in the West differ from those in the eastern United States. In the West, livestock operations involve larger areas of land, and ranchers depend on a mix of private and federal lands to graze cattle seasonally—in the summer and fall they use federal lands to graze their livestock while they grow hay crops for the winter on their private lands. In some parts of the West, primarily the Southwest, grazing occurs year-round on federal lands. In the East, sufficient rain allows grazing to occur on smaller pastures, in some places, year-round. The country’s rangelands have been used to graze domestic livestock since the United States was settled, and the federal government has managed grazing on federal lands for more than 100 years. During western expansion, settlement typically occurred along streams and rivers, where the soil is richer, vegetation denser, and water more available. Lands that remained for the federal government to manage after western expansion were lands that settlers did not want or could not easily settle; the lands are often drier, less productive, and located at higher elevations or farther from water. 
As the West was settled throughout the late 1800s, conflict among different users of the rangelands increased, as did degradation of these lands. As a result, in 1897, the federal government began managing livestock grazing in the nation’s forest reserves; in 1906, the Forest Service started charging a fee for grazing on these reserves. The Forest Service managed grazing under its general authorities until 1950, when Congress enacted the Granger-Thye Act, authorizing the Secretary of Agriculture to issue grazing permits on national forest lands and other lands under the department’s administration. In addition to national forest lands on which grazing is allowed in the 16 western states, the Forest Service manages national grasslands in the western states and forest lands in the eastern states for grazing. The federal government started purchasing privately owned land in 1911 as necessary for regulating the flow of navigable streams, creating national forests in the East. The national grasslands, which are primarily located in Colorado, Kansas, New Mexico, and North and South Dakota, were purchased by the federal government under a land utilization program started in the 1930s. Originally, the program purchased submarginal lands to provide emergency relief to farmers whose lands were failing. It evolved into a program designed to transfer land to its most suitable use, culminating in the Bankhead-Jones Farm Tenant Act of 1937. In 1954, the Secretary of Agriculture transferred the responsibility for program administration to the Forest Service and in 1960 designated almost 3.8 million acres of lands in the program as national grasslands. To stop continued degradation caused by overgrazing of the remaining public lands, among other purposes, the Congress passed the Taylor Grazing Act in 1934. Under the act, the predecessor to BLM—the Grazing Service—was created, and control over grazing on public lands was established. 
The Taylor Grazing Act authorized the establishment of grazing districts from public lands that were considered to be chiefly valuable for grazing and raising forage crops and the leasing of other public lands that were located outside grazing districts. The act also provided for the issuance of permits and leases for these lands and set forth requirements for the distribution of funds received from grazing. Additional laws affecting grazing on both BLM and western Forest Service lands were enacted in the 1970s. The Federal Land Policy and Management Act of 1976 (FLPMA) limited the length of permits and leases to 10 years and allowed shorter terms, authorized terms and conditions to be placed on a permit or lease, and allowed seasonal limits on grazing. In 1978, PRIA required BLM and the Forest Service to inventory and manage their lands in western states. To provide access to grazing, both BLM and the Forest Service divide their rangelands into allotments, which can vary in size from a few acres to hundreds of thousands of acres of land. Because of the land ownership patterns that occurred when the lands were settled, the allotments can be adjacent to private lands, or they can be intermingled with private lands. Under its authorities, BLM permits grazing in allotments within its grazing districts and leases lands outside grazing districts. The Forest Service, which does not have grazing districts, uses permits to authorize grazing in its allotments. To be eligible for a permit or lease on one of BLM’s allotments, ranchers, among other things, are required to own or control land or water, called a base property. Under Forest Service guidance, permits are issued to purchasers of permitted livestock or base property. The other federal agencies that manage grazing do not have the same grazing authorities, processes, or fees as BLM and the Forest Service. Each agency manages its grazing for different purposes and under different authorities. For example, the U.S. 
Fish and Wildlife Service permits grazing on a year-to-year basis, depending on a refuge’s land management goals, while the National Park Service permits grazing for a longer period but can choose not to renew a permit if certain conditions change, including damage to park resources, limitations to interpretive experiences, or impairment of park facilities. Federal grazing fees are considered user fees. Without statutory authority to charge a fee and retain the proceeds, a federal agency may not charge a fee to defray the cost of services or resources it provides. Congress has provided some agencies with specific authority to charge a user fee and retain and use the proceeds. If an agency does not have specific authority, the IOAA provides general authority for an agency to impose a fee if certain conditions are met. However, even when the IOAA applies, an agency may not retain the proceeds from a user fee without specific authority to that effect, but must credit the collections to the general fund of the Treasury as miscellaneous receipts. OMB Circular A-25 provides guidance to agencies regarding their imposition of user fees under the IOAA and other statutes. Under the circular, federal agencies that do not have specific authority to impose a fee are to charge user fees pursuant to the IOAA when an individual or a group receives benefits—such as those that provide business stability or respond to an individual or a group’s request—that are greater than those that the general public enjoys. Increasingly since the 1980s, user fees have been levied to help pay for federal services and resources that benefit specific groups of users, relieving pressure on taxpayers to fund these activities through increased general appropriations. User fees differ from broad-based taxes in that they attempt to recover some amount of the government expenditures made for a specific program. 
For example, Congress enacted laws to increase the use of recreation fees for access to federal parks, forests, and BLM lands in the 1990s. While agencies are generally to deposit funds they receive in the general fund of the Treasury under the Miscellaneous Receipts Act, some federal agencies have specific legislative authority to distribute funds to states and counties or to deposit funds into special accounts in the Treasury for the agency’s or program’s use. Generally, funds that are deposited into the Treasury as miscellaneous receipts are deposited in the general fund where they are then available to be appropriated as Congress may see fit. Funds that are deposited into special accounts in the Treasury are dedicated for specific purposes. The special accounts may be permanently appropriated or further congressional action may be needed to make the funds available. Some agencies are also authorized to retain funds for credit to their appropriations. In fiscal year 2004, BLM, the Forest Service, the National Park Service, U.S. Fish and Wildlife Service, Reclamation, DOE, the Army, the Corps, Air Force, and Navy allowed more than 22.6 million AUMs of grazing on about 235 million acres of the lands they manage. BLM and the Forest Service managed most of this grazing activity, allowing almost 21.9 million AUMs on almost 231 million acres, or more than 98 percent of the grazed lands. The remaining eight agencies allowed almost 794,000 AUMs of grazing on more than 4 million acres. While the agencies’ grazing programs are similar in that they offer private ranchers access to federal lands and vegetation for their livestock, agencies manage their grazing programs under different authorities and for different purposes. 
As table 1 shows, in fiscal year 2004, BLM and the Forest Service approved a total of almost 21.9 million AUMs for grazing on more than 230.6 million acres—BLM approved almost 12.7 million AUMs on more than 137.7 million acres, and the Forest Service approved almost 9.2 million AUMs on more than 92.9 million acres. Ranchers were billed for and used fewer AUMs—a total of almost 13.7 million AUMs—primarily because of the continuing drought in the western and southwestern states, according to agency officials. While BLM maintains a list of historical AUMs—or grazing privileges that have been reduced from historical amounts and are not available to be used—these numbers do not affect the totals. As table 1 shows, BLM’s and the Forest Service’s responsibilities for managing grazing varied considerably by state office or Forest Service region. The BLM Nevada state office had the most grazing in fiscal year 2004, in terms of both acres and approved AUMs, while Montana had the most grazing in terms of billed AUMs; the California state office had the least grazing, in terms of both acres and approved AUMs. For the Forest Service, the Intermountain Region, which includes Utah, Nevada, and portions of Idaho and Wyoming, had the most grazing, while the Eastern and Southern regions had the smallest amounts of grazing. Appendix III contains the detailed extent of grazing for each BLM field office within each state office and Forest Service administrative office. Grazing is allowed on BLM and Forest Service lands for the purpose of fostering economic development for private ranchers and ranching communities by providing ranchers access to additional forage. Particularly in the western states, where the agencies manage anywhere from 30 to almost 85 percent of the land, access to federal forage increases the total forage available to ranchers, enabling them to increase the number of livestock they can support and sell. 
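The rounded fiscal year 2004 figures above can be reconciled with a short illustrative check. This is our own sketch, not agency data: the variable names are ours, and the tolerances simply reflect the report's "more than"/"almost" rounding.

```python
# Illustrative cross-check of the rounded fiscal year 2004 grazing figures
# reported above (values in millions; all are "more than"/"almost" amounts,
# so only approximate agreement is expected).

approved_aums = {"BLM": 12.7, "Forest Service": 9.2}    # million approved AUMs
grazed_acres = {"BLM": 137.7, "Forest Service": 92.9}   # million acres

total_aums = sum(approved_aums.values())    # ~21.9 million AUMs
total_acres = sum(grazed_acres.values())    # ~230.6 million acres

# The other eight agencies add roughly 0.794 million AUMs on about 4 million
# acres, consistent with the governmentwide totals of more than 22.6 million
# AUMs on about 235 million acres.
governmentwide_aums = total_aums + 0.794
governmentwide_acres = total_acres + 4.0

print(f"BLM + Forest Service: {total_aums:.1f}M AUMs on {total_acres:.1f}M acres")
```

The billed total of almost 13.7 million AUMs is lower than the approved total because, as the report notes, drought reduced actual use.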
Under FLPMA, the Taylor Grazing Act, and the Granger-Thye Act, BLM’s and the Forest Service’s permits and leases are set for not more than 10 years and can be renewed without competition at the end of that period, which gives the permittee or lessee a priority position against others for receiving a permit or lease—a position called “preference.” While ranchers have preference, they do not obtain title to federal lands through their grazing permits and leases, nor do they have exclusive access to the federal lands, which are managed for multiple purposes or uses. In fiscal year 2004, the National Park Service, Reclamation, U.S. Fish and Wildlife Service, DOE, and DOD services managed about 794,000 AUMs of grazing on more than 4 million acres of land. Table 2 shows the extent of grazing. As table 2 shows, the extent of grazing on the eight agencies’ lands varied considerably in fiscal year 2004, with the National Park Service managing grazing on about 1,580,000 acres, while the Navy managed almost 16,000 acres. In terms of approved AUMs, the U.S. Fish and Wildlife Service managed the most—more than 199,000 AUMs—while DOE allowed about 13,000 AUMs. The eight agencies presented in table 2 manage or allow grazing for different purposes, as the following discussion details: National Park Service. The agency is authorized to allow grazing within any national park, monument, or reservation as long as such use is not detrimental to the primary purpose for creating the park, monument, or reservation. Agency regulations prohibit grazing except as (1) specifically authorized by statute, (2) required under a reservation of use rights arising from the acquisition of a tract of land, (3) required in order to maintain a historic scene, or (4) conducted as an integral part of a recreational activity. For example, in Virginia and North Carolina, the agency allows grazing at Blue Ridge National Parkway—about 5,000 AUMs of cattle on more than 2,000 acres—to maintain a historic scene. 
In contrast, at the Appomattox Court House National Historical Park, the agency allowed grazing on almost 200 acres to maintain a desirable grass level. Grazing is managed as a special park use, requiring a permit, lease, concession, contract, or commercial use authorization. Each park superintendent approves or disapproves requests for special park uses, such as grazing, and can impose conditions to protect park resources and values and visitors and the visitors’ experience. In fiscal year 2004, the National Park Service reported that grazing was permitted to occur at 31 of its parks, with Glen Canyon National Recreation Area, in Utah and Arizona, accounting for the most acres—almost 666,000—and Point Reyes National Seashore, in California, accounting for the most AUMs—about 18,500 AUMs on about 24,000 acres. U.S. Fish and Wildlife Service. The National Wildlife Refuge System Administration Act of 1966 authorizes various uses of U.S. Fish and Wildlife Service lands, including grazing, as long as the agency determines that such use is compatible with the major purposes for which the refuge was established. The agency uses grazing as a tool to manage habitat. For example, in the Anahuac, McFaddin, and Texas Point National Wildlife Refuges, along the Texas Gulf Coast, the agency allowed livestock grazing from October to April, the cool season of the year, to encourage different types of marsh grasses, generate annuals, and increase vegetative diversity, thereby opening up additional habitat for foraging waterfowl. In fiscal year 2004, the U.S. Fish and Wildlife Service reported that livestock grazing occurred on 94 of its refuges and wetland management districts, ranging from 25 AUMs on 60 acres at Detroit Lakes Wetland Management District in Minnesota to about 21,500 AUMs on 450,000 acres at the Charles M. Russell National Wildlife Refuge in Montana. Reclamation. 
Reclamation allows its lands to be used for incidental purposes, such as recreation and grazing, as long as such uses do not interfere with the operation of the dams or irrigation works associated with these projects. In general, Reclamation allows grazing on its project lands when asked to do so by users, such as ranchers who have had historical access to the lands or wildlife managers wanting to improve habitat. For example, the Albuquerque Area Office allows grazing on more than 19,000 acres in the Brantley and Avalon Reservoirs project area, thereby allowing ranchers access to lands that they historically grazed. In fiscal year 2004, Reclamation reported that it permitted and leased lands for grazing at 36 of its facilities in 16 area offices, with the agency managing some of the permits and leases and other agencies, such as BLM, the U.S. Fish and Wildlife Service, or local and state agencies managing additional permits and leases under joint management agreements. For example, in central Washington state, BLM manages grazing on more than 8,000 acres of Reclamation land that is adjacent to BLM land in the Columbia Basin Project. In the same area, the Washington Department of Fish and Wildlife manages grazing on almost 18,000 acres of Reclamation land to improve vegetation and thereby enhance bird habitat. In total, in fiscal year 2004, Reclamation issued permits and leases for about 91,000 AUMs of grazing on almost 737,000 acres—almost 44,000 AUMs and about 238,000 acres under Reclamation’s management and about 47,000 AUMs and about 499,000 acres managed by agreement with other agencies. DOE. The department allows grazing on only one site, the Idaho National Laboratory. Under the Taylor Grazing Act, the Secretary of the Interior is authorized, by order and with the approval of the relevant department, to establish grazing districts of certain public domain lands that are not in national forests, parks, or monuments. 
In Idaho, Interior, with the agreement of DOE, issued such an order, and livestock grazing continues on approximately 50 percent of the Idaho National Laboratory site. BLM manages the land as part of its grazing program but is to follow the security and land access requirements set by DOE. DOD. Under 10 U.S.C. § 2667, the Secretaries of the Army, Air Force, and Navy are authorized to lease property under their control that is not excess property, if it will promote national defense or be in the public interest. The military services use this authority to lease rangelands on military installations and bases for grazing, among other uses. For example, the Air Force leases to nearby ranchers land that forms a buffer around the Melrose Air Force Range at Cannon Air Force Base in New Mexico. The buffer consists of rangelands surrounding target areas used in training exercises and protects more developed areas from stray (unarmed) bombs. According to Air Force staff, leasing the land to ranchers does not hinder training exercises; it gives neighboring landowners access to grazing and helps maintain the rangeland by keeping grass low to control fire. Similarly, Fort Hood in Texas allows grazing on lands used for armored vehicle training maneuvers. The Army determined that grazing cattle could be compatible with training exercises, although uncertainty remains about the intensity of grazing that can be allowed, given the need to let vegetation recover from training exercises and thereby reduce soil erosion into nearby streams and reservoirs. Like the Army, Air Force, and Navy, the Corps manages grazing on its lands under 10 U.S.C. § 2667. In fiscal year 2004, the DOD military services leased about 494,000 acres for grazing, and the Corps leased about 169,000 acres. Federal agencies spent at least $144.3 million in direct and indirect expenditures to support grazing activities on federal lands in fiscal year 2004. 
The 10 federal agencies spent at least $135.9 million, of which the Forest Service and BLM spent the majority of funds, about $132.5 million. The 8 remaining agencies spent at least $3.4 million on their grazing programs, but not all of the agencies could estimate their expenditures because they do not conduct grazing as a major activity and therefore do not specifically track grazing expenditures. The 10 agencies spent funds on activities that directly supported grazing, such as managing permits and leases, monitoring resource conditions on grazing allotments, assuring permit and lease compliance, and implementing range improvements such as developing water sources and constructing fences. The agencies also spent funds on activities that indirectly supported grazing, such as management, budget, and personnel functions. In addition to these 10 agencies’ expenditures, other federal agencies that do not have grazing programs spent at least $8.4 million to support grazing on public lands. While some of these agencies could identify their expenditures related to grazing on public lands, not all agencies could do so because they do not distinguish between work done on public and private lands. These agencies spent funds on activities related to grazing, such as grazing litigation, threatened and endangered species consultations for grazing plans, and the removal of predatory or nuisance wildlife from grazing lands. Because some agencies do not track their grazing expenditures on public lands specifically, the expenditures presented are a conservative estimate of federal grazing expenditures; expenditures would most likely be higher if these agencies could provide estimates. BLM and the Forest Service spent about $132.5 million to manage their grazing programs in fiscal year 2004—BLM spent more than $58.3 million, and the Forest Service spent almost $74.2 million. As shown in table 3, the agencies spent these funds on direct, indirect, and range improvement activities. 
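The expenditure figures above nest into one another, which a short illustrative reconciliation makes explicit. The variable names and tolerances here are ours; the amounts are the report's rounded "at least"/"about" values, so only approximate agreement holds.

```python
# Illustrative reconciliation of the rounded fiscal year 2004 grazing
# expenditure figures reported above, in millions of dollars.

blm = 58.3             # BLM grazing program
forest_service = 74.2  # Forest Service grazing program
other_eight = 3.4      # the 8 other agencies that allow grazing
other_federal = 8.4    # agencies without grazing programs (legal services, etc.)

blm_and_fs = blm + forest_service              # about $132.5 million
ten_agencies = blm_and_fs + other_eight        # at least $135.9 million
governmentwide = ten_agencies + other_federal  # at least $144.3 million

print(f"${blm_and_fs:.1f}M -> ${ten_agencies:.1f}M -> ${governmentwide:.1f}M")
```

Because several agencies could not isolate grazing costs, each tier is a floor rather than a precise total, as the report itself cautions.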
BLM has implemented a cost-management system that identifies direct and indirect expenditures and used it to identify its direct and indirect expenditures in fiscal year 2004. Unlike BLM, the Forest Service does not have a cost-management system, but rather reports expenditures for items in its budget, called budget line items. The agency uses its Foundation Financial Information System to centrally track and formally report expenditures. For fiscal year 2004, the Forest Service used expenditure reports for grazing and related line items, in addition to its WorkPlan system that shows its intended work plans for the fiscal year, to identify the amount of expenditures. In fiscal year 2004, the agencies generally included the same activities in reporting their expenditures. Both BLM and the Forest Service included managing grazing permits and leases, monitoring resource conditions on grazing allotments, conducting environmental assessments for allotments, and managing grazing fees as direct expenditures. Both agencies included expenditures that specifically related to grazing management, rather than broader range management expenditures, because grazing activities are distinct from more general rangeland management activities. According to agency officials, many range management activities need to be conducted whether or not grazing occurs. For example, monitoring rangeland conditions through vegetation surveys supports work that the agencies conduct to manage noxious weeds. While some noxious weeds may occur on federal lands as a result of livestock grazing, some can be transported by other means. Although both agencies spent funds on land management planning to support their specific grazing plans and activities, neither agency included land management planning expenditures. 
According to BLM and Forest Service officials, land management planning and environmental impact statements are important enough to be a separate direct expenditure from grazing and would continue to occur if the agencies no longer permitted or leased grazing activities on their lands. Furthermore, according to agency officials, land management planning encompasses all activities—including livestock grazing—conducted by BLM, at the field office level on public lands, or by the Forest Service, at the national forest level for all national forest system lands. Even if grazing activities were not conducted, other land management activities, such as oil and gas leasing and off-road vehicle use, would still need to be planned and studied. For indirect grazing activities in fiscal year 2004, BLM spent almost $18.7 million, and the Forest Service spent an estimated $13.3 million. Indirect activities are those that cannot be specifically attributed to grazing because they also benefit other resource programs. These include administrative functions, infrastructure, and technical support. One method of allocating indirect expenditures is to pool the activities and allocate the related expenditures across all the programs that use the activities. BLM allocated its indirect expenditures using its cost-management system. The system allocated expenditures for such activities as management, state office expenditures, and BLM office expenditures in fiscal year 2004. Because the Forest Service does not have a cost-accounting system, it instead sets aside portions of its budget to cover anticipated indirect expenditures: the agency maintains six cost pools, into which it allocates a percentage of each of its budget line items for the fiscal year, and draws on these pools to cover indirect expenditures during the year. BLM and the Forest Service also spent $14.6 million on range improvement activities in fiscal year 2004. 
These funds are revenues from grazing fees charged in 2003 and deposited as receipts in the agencies’ range improvement accounts. The agencies use the funds to pay for direct and indirect activities related to range improvement projects that include constructing fences, developing water sources such as tanks or impoundments, and seeding to improve vegetation and forage amounts. The expenditure of funds on these assets represents an investment in infrastructure assets that are the property of the United States. Under federal financial management standards, both BLM and the Forest Service are working to identify the value of these assets, which is currently unknown. In fiscal year 2004, the National Park Service, U.S. Fish and Wildlife Service, Reclamation, DOE, and the DOD services spent at least $3.4 million on their grazing programs, as shown in table 4. Because it arranges with BLM to manage its grazing program, DOE incurs only incidental expenditures related to grazing. Because the agencies use grazing as a tool to support other management goals, they do not specifically track grazing, and hence do not track direct or indirect grazing expenditures. For this reason, the expenditures are the best estimates of individuals who manage the grazing programs. The field managers for these eight agencies identified the following activities associated with grazing on federal lands: fence installation and repair, cattle trough and cattle guard installation, fertilizer application, personnel, security, monitoring and inspections, control of invasive species and noxious weeds, and management of grazing leases. Generally, the estimates are low because they do not include all expenditures—including indirect expenditures—and several offices did not provide estimates. In addition to the 10 federal agencies’ expenditures, other federal agencies estimated that they spent $8.4 million on activities that are related to grazing on federal lands. 
Agencies that have grazing-related activities include the following: several USDA agencies that provide research, insurance, resource management, and other agricultural services to farmers and ranchers on both federal and private lands; Justice, Interior's Office of the Solicitor, and USDA's Office of General Counsel, which perform legal services for BLM and the Forest Service; the National Oceanic and Atmospheric Administration's National Marine Fisheries Service (NMFS) and the U.S. Fish and Wildlife Service, which consult with agencies on threatened and endangered species; the U.S. Geological Survey (USGS), which provides research on resource conditions on rangelands; and the Environmental Protection Agency, which provides grants to improve watersheds that may include areas with resources degraded by grazing. The agencies estimated, when possible, the share of their fiscal year 2004 expenditures for grazing-related activities on federal lands, as shown in table 5.

Agricultural services. As the table shows, in fiscal year 2004, the largest amount of identified expenditures for grazing-related activities went to agricultural services provided by USDA. The Animal and Plant Health Inspection Service spent most of these funds to control nuisance species and insects, such as Mormon crickets and grasshoppers, that affect forage on federal lands. Not all the agencies identified as having programs that might be used by ranchers with federal permits and leases could separate out the funds they spent on public lands. For example, the Natural Resources Conservation Service helps ranchers manage their soil, water, and vegetation to prevent the resources from becoming degraded; however, because the agency focuses on ranchers, it cannot distinguish the work that it performs on private land from work on federal lands.

Legal services.
Justice attorneys represent the United States in cases that go to court or settlement, while Interior's Office of the Solicitor and USDA's Office of General Counsel provide legal advice to the agencies. In addition to these expenditures, BLM and Forest Service staff provide support work for litigation in the form of copying and preparing administrative files and documents, but these expenditures are not tracked separately from the agencies' other work. Legal services would include any payment of attorney fees; however, none were paid in fiscal year 2004. Attorney fees are usually paid by agencies, but in some cases would be paid from the Department of the Treasury's Judgment Fund.

Consultations. The federal agencies with grazing programs must consult, in some cases, with the U.S. Fish and Wildlife Service and NMFS to determine whether their grazing programs may affect threatened and endangered species. The U.S. Fish and Wildlife Service consults with the agencies on the potential effects on terrestrial animals and freshwater species, while NMFS consults with the agencies on the potential effects on anadromous fish—that is, fish that hatch in fresh water, mature at sea, and return to fresh water to spawn.

Research. USGS has four centers that conduct research on the effects of grazing on plant communities, including invasive plants; runoff, erosion, and other hydrologic and soil conditions; select species or species groups, including sage grouse, amphibians, grassland birds, and bats; and ecosystem health, including riparian areas. The agency works with federal land management agencies on these and related issues to inform management actions and plans and to design and implement rangeland monitoring and inventories. The Forest Service's Rocky Mountain and Pacific Northwest research stations conduct integrated studies of the effects of livestock grazing on lands and resources and assist national forests and grasslands by providing them this information.
Finally, USDA's Agricultural Research Service has more than 100 laboratories in almost every state. The agency conducts research on ecosystems and sustainable management, plant resources, forage management, livestock management, and management of pests and weeds. Because the agency's work benefits both the livestock industry and public lands, the Agricultural Research Service cannot estimate its expenditures related to grazing on federally managed lands.

Environmental Protection Agency. The agency provides grants to states to improve watersheds and water quality impaired by nonpoint sources of pollution, such as agricultural runoff. States use the funds to develop projects to remove or decrease sources of pollution. For example, New Mexico received funds to improve the Chama River and its tributaries, and the Santa Fe National Forest participated by conducting different vegetation and livestock management activities, such as fencing riparian areas, developing alternative water sources in areas away from the river, and ensuring the rotation of livestock into different pastures away from the river. However, because many grazing areas include both federal and nonfederal lands and because states are not required to track what type of land is involved in a project, Environmental Protection Agency officials stated that they cannot identify the funds that are spent on federal lands that have been grazed.

The 10 federal agencies collected a total of about $21 million from fees charged for their grazing permits and leases in fiscal year 2004—less than one-sixth of the expenditures needed to manage grazing; the largest amount of funds, $17.5 million, was collected by BLM and the Forest Service.
From the total amount, the agencies distributed almost $5.7 million to states and counties, deposited almost $3.8 million in the Treasury as miscellaneous receipts, and deposited at least $11.7 million to separate Treasury accounts to be further appropriated or used by the agencies for their various programs. In addition, the DOD services received payment in-kind valued at almost $1.4 million to offset grazing fees, and Reclamation and the U.S. Fish and Wildlife Service also received in-kind services. Reclamation received services valued at about $1,100, and the U.S. Fish and Wildlife Service received services of unknown value. The distribution of funds depends on the agencies’ different authorities. BLM and the Forest Service collected about $17.5 million, or 83 percent, of all grazing receipts federal agencies collected in fiscal year 2004. As shown in table 6, depending on the authorities under which the receipts were raised, the funds were distributed to the states, deposited into the general fund of the Treasury, and deposited into special accounts in the Treasury for further appropriation and agency use, including use for range improvement. Under FLPMA, 50 percent or $10 million, whichever is greater, of fees collected in a year for grazing on BLM lands managed under the Taylor Grazing Act and the Act of August 28, 1937, and on Forest Service land in the 16 western states, are to be credited to a special fund receipt account in the Treasury for range rehabilitation, protection, and improvements, called the range improvement fund. Half of this account is authorized to be appropriated for use in the district, region, or national forest from which it was generated, and the remaining half is to be used for range rehabilitation, protection, and improvement as the Secretary directs. According to agency officials, the agencies distribute 50 percent of the actual grazing receipts from their individual grazing accounts to their respective range improvement funds. 
As table 6 shows, in fiscal year 2004, BLM distributed about $5.9 million to its range improvement fund, and the Forest Service distributed about $2.9 million to its range improvement fund, for a total of about $8.8 million. BLM distributes grazing fees from four accounts, according to where the funds were collected—within or outside a grazing district or from grasslands. It also deposits certain mineral receipts into its range improvement fund; in fiscal year 2004, it deposited $1.2 million in mineral receipts. The Forest Service deposits receipts and distributes funds from its National Forest Fund that also contains receipts for other activities on forest lands such as timber harvest. In addition to the receipts distributed to range improvement—under the Taylor Grazing Act, the Act of August 28, 1937, and the Bankhead-Jones Farm Tenant Act—BLM also distributes receipts from the four accounts to states and the Treasury, according to whether the fees were collected within or outside a grazing district or from grasslands. For lands within grazing districts—those lands on which grazing is permitted—BLM distributes 12.5 percent of receipts to the states in which the grazing districts are situated and deposits the remaining receipts in the Treasury as miscellaneous receipts. For lands outside of grazing districts—those lands that are leased—BLM distributes 50 percent of the receipts to the states and does not return any funds to the Treasury as miscellaneous receipts. For grasslands, BLM distributes 50 percent of receipts to the range improvement fund, 25 percent to states, and 25 percent to the Treasury as miscellaneous receipts. The states are to distribute the funds to the counties in which the lands are permitted or leased for school or road purposes. In 2004, the agency distributed more than $2.2 million to the states and counties and deposited more than $3.7 million in the Treasury. 
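The distribution rules described above can be sketched as a short calculation. This is an illustrative simplification in Python: the shares follow the percentages stated in the report, but it ignores the FLPMA rule guaranteeing at least $10 million to the range improvement fund and the mineral receipts BLM also deposits there.

```python
# Illustrative shares for distributing BLM grazing receipts, by land
# category, as summarized in the report: 50 percent to the range
# improvement fund, with the state/Treasury split depending on whether
# the fees were collected within or outside a grazing district or from
# grasslands.
SHARES = {
    # category: (range improvement fund, states, Treasury misc. receipts)
    "within_district": (0.500, 0.125, 0.375),
    "outside_district": (0.500, 0.500, 0.000),
    "grasslands": (0.500, 0.250, 0.250),
}

def distribute(receipts: float, category: str) -> dict:
    """Split grazing receipts for one land category into the three destinations."""
    rif, states, treasury = SHARES[category]
    return {
        "range_improvement_fund": receipts * rif,
        "states_and_counties": receipts * states,
        "treasury_misc": receipts * treasury,
    }

# Example: $1,000 collected on lands within a grazing district.
print(distribute(1000, "within_district"))
```

For lands within a grazing district, for instance, $1,000 in receipts would split into $500 for range improvement, $125 for the states, and $375 for the Treasury under these shares.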
Under the Act of May 23, 1983, the Forest Service distributes 25 percent of all of its receipts—timber, recreation, grazing, and others—to states for schools and roads. Alternatively, the states can receive funds under the Secure Rural Schools and Community Self-Determination Act of 2000. This act sought to stabilize payments to states in which shared revenues from the federal lands, such as from timber, were dwindling. The act allows some counties and states to choose a payment equal to the average of the three highest payments for Forest Service receipts during a particular eligibility period. As a result, the Forest Service makes a mix of payments, depending on what each county has chosen. In 2004, the Forest Service estimated that it distributed more than $2.6 million in grazing receipts to the states and counties; because the Forest Service deposits many types of receipts into the Treasury, it was unable to estimate the amount of grazing funds deposited in the Treasury as miscellaneous receipts. Grazing receipts collected by the National Park Service, U.S. Fish and Wildlife Service, Reclamation, and the DOD services totaled more than $3.7 million in fiscal year 2004, with the U.S. Fish and Wildlife Service generating the largest amount, more than $1.0 million. In addition, the agency received services in-kind of an unknown value. Under the interagency agreement between DOE and BLM, BLM retains grazing fees collected at DOE’s Idaho National Laboratory. The DOD services—which combined received a total of more than $2.0 million from fees—also received almost $1.4 million in payments in-kind that offset grazing fees. The agencies have different authorities for distributing the receipts collected from use of their lands. Table 7 shows the results of the distribution in fiscal year 2004. 
Of the $3.7 million in total receipts, more than $855,000 was distributed—by three of the eight agencies—to the states or counties in which the receipts were collected in fiscal year 2004. Two agencies deposited about $65,200 in the general fund of the Treasury as miscellaneous receipts, and each of the agencies deposited varying portions of the receipts for their programs. National Park Service. The National Park Service has the authority to recover its costs of providing services associated with its special-use expenditures. These reimbursements are to be credited to the current appropriation. Under National Park Service guidance, each national park retains funds to reimburse its expenditures for managing grazing and is responsible for calculating the amount of funding that it can recover. In fiscal year 2004, the parks retained about 98 percent of their grazing receipts and distributed about 1 percent to the Treasury. Two parks—Blue Ridge National Parkway and Point Reyes National Seashore—gathered 75 percent, or about $146,000, of the total receipts. In addition to the amounts retained by the parks, the City of Rocks National Reserve in Idaho distributed about $800 to the state in fiscal year 2004 under a cost-sharing arrangement. U.S. Fish and Wildlife Service. Under the Refuge Revenue Sharing Act of 1935, as amended, the U.S. Fish and Wildlife Service deposits grazing receipts—as well as receipts it gathers for other uses of its lands—into a separate Treasury account called the National Wildlife Refuge Fund. The funds deposited remain available until expended, without further appropriation, and the Secretary may pay necessary expenditures incurred by the U.S. Fish and Wildlife Service from the account. 
The act also requires the agency to make payments to counties to offset tax losses for the purchase of fee title lands, based on a formula contained in the law that entitles counties to the greater of three amounts: (1) $0.75 multiplied by the total acres of fee title land in the county; (2) three-quarters of 1 percent of the fair market value of the fee title land in that county; or (3) 25 percent of the net receipts collected by the agency at that unit. The Secretary is also required to pay 25 percent of the net receipts collected on lands reserved from the public domain. In practice, the agency retains a portion of all receipts from its lands to pay for various administrative and refuge expenditures and provides the remainder to the counties. In fiscal year 2004, the agency collected more than $6 million in receipts for all permitted uses on its lands; and about 16 percent of the receipts were grazing receipts. After the agency retained $3.1 million for its use, it had about $3.5 million to pay to the counties. Because grazing receipts collected in fiscal year 2004 represented about 16 percent of total receipts, we estimate that the U.S. Fish and Wildlife Service retained about $488,000 for its refuge system administration and distributed about $541,000 to counties. Reclamation. Reclamation credits revenues generated from grazing leases in a number of different ways. For example, under specific project authorizations, Reclamation retains receipts to repay projects or deposits funds to be appropriated for future projects. Under Reclamation’s agreements with the agencies that manage leases on its land, grazing fees will be deposited into a Treasury account. When authorized by Reclamation, the fees may remain with the managing agency to serve as reimbursement. 
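The greater-of-three payment rule for fee title lands can be written out as a small function. This is a sketch of the formula as described above, not the agency's actual payment calculation, and the example inputs are hypothetical.

```python
def county_payment_fee_title(acres: float, fair_market_value: float,
                             net_receipts: float) -> float:
    """County payment for fee title lands under the Refuge Revenue
    Sharing Act formula described above: the greatest of three amounts."""
    return max(
        0.75 * acres,                       # $0.75 per acre of fee title land
        0.75 * fair_market_value / 100,     # three-quarters of 1 percent of value
        0.25 * net_receipts,                # 25 percent of net receipts at the unit
    )

# Hypothetical example: 10,000 acres valued at $2 million, with
# $50,000 in net receipts; the fair-market-value term is the largest.
print(county_payment_fee_title(10_000, 2_000_000, 50_000))  # 15000.0
```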
In fiscal year 2004, of the total amount collected for grazing on Reclamation land, about $303,300 came from grazing leases that Reclamation manages and about $173,300 came from leases managed by other agencies; the agency also received about $1,100 in services in-kind to offset fees. Reclamation deposited about $188,000 in the Reclamation Fund in the Treasury and retained about $279,200 to repay projects (agency officials could not explain into which of these accounts the remaining $9,400 was deposited). The other agencies that manage grazing leases on Reclamation land kept about $108,500 in grazing receipts. DOD. The Army, Air Force, and Navy do not return grazing receipts to the states or the Treasury, while the Corps is required to deposit all of its receipts—for recreation, grazing, or other leases of lands surrounding its water projects—in the Treasury; the Secretary of the Treasury is then required to return 75 percent of these receipts to the states in which the lands are located. The Army, Corps, Air Force, and Navy are authorized to retain and spend funds to cover the administrative expenses of their grazing programs and to cover the financing of multiple land use management programs at any of their installations. The Corps district offices began retaining and managing 10 percent of their receipts for administrative expenses in fiscal year 2004; agencywide, these receipts totaled almost $42,000. Under their leasing authorities, the Army, Corps, Air Force, and Navy collected more than $3.7 million in receipts and received payments in-kind valued at about $1.4 million to offset fees. The DOD services offset fees by allowing the lessees to work on the grazing lands to pay for a portion or all of the lease. For example, some of the grazing programs at DOD installations, projects, and bases allow the lessees to maintain fences or mow the lands, in addition to grazing, to reduce vegetation. 
The value of such services—and therefore the offset value—is estimated by the staff in charge of grazing programs, based on prior expenditures, on prices from the Natural Resources Conservation Service, or on bids submitted by the ranchers.

Fees charged in 2004 by the 10 federal agencies, as well as by state land agencies and private ranchers, varied widely, depending on the purpose for which the fees were established and the approach to setting the fee. On BLM and Forest Service lands in the 11 western states, the grazing fee was $1.43 per AUM, while the fees on other federal lands varied from $0.29 to over $112 per AUM. In part, the BLM and Forest Service fee, which was initially set by legislation and was extended by executive order, enables ranchers to stay in production by keeping fees low to account for conditions in the livestock market. Most other federal agencies generally charge a fee based on competitive methods or set to obtain a market price for the forage on their lands, and some of them also seek to recover expenditures for their grazing programs. Similarly, state land offices in the 17 western states and private ranchers seek market value for grazing on their lands; the state agencies charged from $1.35 to $80 per AUM, while the average price private ranchers charged ranged from $8 per AUM in Arizona and Oklahoma to $23 per AUM in Nebraska. If BLM and the Forest Service were to charge a fee for the purpose of recovering their expenditures, they could have charged up to $7.64 per AUM and $12.26 per AUM, respectively, in 2004. If they were to charge a market-based fee, the fee could vary but would likely not equal private or state fees. The prices charged by other federal agencies, states, and private ranchers may vary because of factors such as range productivity, services provided by the landowner, and access to land.
The grazing fee BLM and the Forest Service charge in western states is based on a formula that was originally established by PRIA to, among other things, prevent economic disruption and harm to the western livestock industry; the formula expired after 7 years but was extended indefinitely by Executive Order 12548. Federal grazing fees are set using a formula to achieve multiple conflicting objectives, including achieving fair market value; recovering federal expenditures for the program; and treating different parties, such as ranchers, the public, and other users of public lands, equitably. As a result, the fee produced by the formula is generally lower than the fees charged by the other agencies, states, and private ranchers. Table 8 shows the fees charged by each agency, state, and private ranchers, as well as the approach to setting the fee—either a formula or a market-based approach. None of the federal or state agencies use an approach that strictly recovers their agencies' administrative or management expenditures.

As shown in table 8, the fee BLM and the Forest Service charged for the western states in 2004 was $1.43 per AUM. The fee, which is set for each upcoming grazing year (March to February), is produced by a formula that multiplies a $1.23 base value by the result of adding the Forage Value Index to the Beef Cattle Price Index, subtracting the Prices Paid Index, and dividing by 100; the three indexes are calculated each year by USDA's National Agricultural Statistics Service. These indexes are based on data collected in the agency's livestock, prices, and cattle surveys. In effect, the fee is adjusted to reflect ranchers' ability to pay. The $1.23 base value represents the difference between the costs of conducting ranching business on private and public lands, as computed in a 1966 study of about 10,000 ranchers in the western states. The three indexes are the following:

Forage Value Index (FVI). This index is based on the weighted average estimate of the annual rental charge for cattle on private rangelands in 11 western states.
Beef Cattle Price Index (BCPI). This index is based on the weighted average selling price for beef cattle in the 11 western states. Prices Paid Index (PPI). This index includes select adjusted components from USDA’s Index of Prices Paid by Farmers for Goods and Services. The components include items such as fuel, tractors and machinery, interest, and farm wage rates. Under both PRIA and the executive order, increases and decreases in the fee are limited to 25 percent per year, and under the executive order, the fee cannot drop below $1.35 per AUM. The Forest Service’s fees for grazing on national grasslands and eastern forests differ from the fee charged in its forests in the 16 western states. The fee charged for grasslands uses a formula similar to the western grazing fee, but with a different base value that recognizes the different costs for managing national forests versus national grasslands. The fee charged for grazing in the eastern forests is based on a formula with a noncompetitively established base value adjusted by the current period’s hay price index, less the value of any range improvements required by the agency. The 2004 fee for grasslands was $1.52 per AUM, and the fee for eastern forests ranged from $2.47 per AUM in Florida to $5.04 per AUM in the northeastern states for noncompetitive permits. In addition, the Forest Service puts some permits up for competitive bidding in the eastern states. Appendix IV discusses the BLM and Forest Service fee and formula first established under PRIA in more detail, the history of the federal grazing fee, and the results of studies conducted over the years to recommend alternative approaches to charging fees. In contrast to the fee charged by BLM and the Forest Service for grazing on western lands, the National Park Service, U.S. Fish and Wildlife Service, Reclamation, and DOD services are required or directed to set fees that reflect, or come close to, market value. 
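Putting the base value, the indexes, and the limits together, the fee calculation can be sketched as follows. The expression fee = $1.23 × (FVI + BCPI − PPI) / 100 is the commonly cited form of the PRIA formula and is an assumption of this sketch; the 25 percent annual change limit and the $1.35 per AUM floor are as described above. The index values in the example are hypothetical.

```python
def pria_fee(fvi: float, bcpi: float, ppi: float, prior_fee: float) -> float:
    """Sketch of the western grazing fee calculation.

    Assumes the commonly cited form of the PRIA formula,
    fee = $1.23 * (FVI + BCPI - PPI) / 100, with the 25 percent
    annual change limit (PRIA and E.O. 12548) and the $1.35 per AUM
    floor (E.O. 12548).
    """
    fee = 1.23 * (fvi + bcpi - ppi) / 100
    fee = max(fee, prior_fee * 0.75)   # may not fall more than 25 percent in a year
    fee = min(fee, prior_fee * 1.25)   # may not rise more than 25 percent in a year
    return max(fee, 1.35)              # executive order floor of $1.35 per AUM

# Hypothetical index values for which the $1.35 floor binds.
print(round(pria_fee(fvi=100, bcpi=100, ppi=110, prior_fee=1.43), 2))  # 1.35
```

Because the beef cattle price index rises when cattle prices rise and the prices paid index rises when ranchers' input costs rise, the formula moves the fee up in good years and down in lean ones, subject to the cap and floor.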
The agencies do not have one uniform approach to setting a grazing fee: some of the agencies, such as the Air Force and National Park Service, charge per acre; and others, such as the Corps, receive a total bid price for a pasture. To achieve a fair market value, in some instances, the agencies use a competitive bidding process that involves notifying the public of the opportunity to permit or lease a grazing pasture, the acceptance of sealed bids, and the selection of the highest bid. In other instances, the agencies conduct a market appraisal of a grazing property, or use an average prevailing rate for the local area, and set a fee based on those values. Consequently, as the following discussion shows, the prices that the agencies charge vary widely, from as low as $0.29 per AUM to more than $112.50 per AUM. National Park Service. The fees charged for grazing in fiscal year 2004 ranged from $1.35 to $7 per AUM and $1.50 to $25 per acre. National Park Service guidance directs parks to charge fair market value for special uses such as grazing, unless otherwise directed by law. The fees charged in fiscal year 2004, which were set by individual parks or park units, included some fees set at market prices and others that were negotiated or fixed. The lowest fee per AUM, $1.35, was charged by several parks, including Black Canyon of the Gunnison National Park in Colorado and Capitol Reef National Park in Utah. The highest fee per AUM, $7, was charged by Point Reyes National Seashore, in northern California. That park used an independent appraisal of its lands to establish the grazing fees. The lowest per acre fee in fiscal year 2004, $1.50 per acre, was negotiated at the Buffalo National River in Arkansas. 
The highest per acre fee, $25, was charged at several parks, including Minuteman Missile National Historic Site in South Dakota, which set its fee based on average local rates, and Eisenhower National Historic Site and Gettysburg National Military Park in Pennsylvania, which fixed their grazing fees, also based on average local rates. Similarly, Blue Ridge National Parkway, in Virginia and North Carolina, which accounted for just over 50 percent of total Park Service livestock grazing permits in fiscal year 2004, charged a rate of $10 per acre for each of its 212 permits. The fee was established using values in a 2002 survey that the park’s staff conducted of other National Park Service field offices that administer agricultural programs, as well as market-rate information for grazing in the vicinity of the parkway that the park staff gathered from county extension and other agricultural offices. U.S. Fish and Wildlife Service. The grazing fees charged in fiscal year 2004 were, for the most part, established using market-value prices, including prices set by USDA’s National Agricultural Statistics Service. Prices ranged from $0.29 per AUM to $34.44 per AUM; both fees were based on competitive bids for grazing permits at the Sand Lake Wetland Management District in South Dakota, where access to small sites and forage conditions can vary greatly. Under U.S. Fish and Wildlife Service regulations, refuges are to charge a fee for the grant of privileges or products taken from refuges that is commensurate with fees charged for similar privately granted privileges or products, or with local market prices. To establish the fees charged in fiscal year 2004, most refuges—particularly those in western states—issued permits at the market rate, including the USDA rate. For example, the fee charged at the refuge with the largest amount of grazing, the Charles M. Russell National Wildlife Refuge in Montana, averaged $14.76 per AUM. 
A few refuges did not use a market value fee but instead negotiated the grazing fee with the permittee. For example, managers at the Hutton Lake National Wildlife Refuge in Wyoming negotiated a fee of $8.80 per AUM, based on the USDA rate, less services for fencing and irrigation.

Reclamation. In fiscal year 2004, the fees charged ranged from $1.27 per AUM to $56.46 per AUM. Reclamation guidance directs the agency to enter into permits and leases using competitive means when there is likely to be demand from more than one party, but permits and leases may be negotiated when it is in the best interest of the United States or if no competition is present. In fiscal year 2004, while the majority of Reclamation's area offices set grazing fees using competitive approaches, or other approaches that establish a market price, some of the offices used fixed fees or negotiated with local ranchers to agree on a fee. For example, the Wyoming Area Office, which manages several projects in and around the state of Wyoming, used competitive bidding that opened with a minimum bid. The area office staff set the minimum bid using the average private lease rates in the state, as provided by USDA. One area office also used a discounted lease method, in which the office used an average private lease rate for the area and discounted it for factors such as multiple uses of the lands. When area offices charged fixed fees, they generally set them at historic levels. For example, the Lahontan Basin Area Office, which manages Reclamation activities in the Lahontan Basin Area in northern Nevada and eastern California, manages 56 grazing permits and leases that were inherited from local irrigation districts and charged the same fee in fiscal year 2004 as the irrigation districts charged in the past.

DOE. Under its agreement with DOE to manage grazing on Idaho National Laboratory land, BLM charges its current fee for grazing on DOE lands.

DOD.
In fiscal year 2004, the Army, Corps, Air Force, and Navy offered the majority of their leases as competitive bids. The bids ranged from an average of $0.82 to $112.50 per AUM. Under the laws and regulations for grazing on lands managed by the services, their lands may be leased for up to 5 years, and payment for a lease is generally to be fair market value, although the payment can be made through services in-kind. The DOD services may accept less than fair market value under certain circumstances when it is determined that a public interest will be served. For example, Army officials recently negotiated a new 5-year lease for grazing at Fort Hood (in Texas) with a group of cattlemen that included some previous landowners. The Army determined that, although it had no legal obligation to continue leasing only to this group, its relationship with the neighboring ranchers helped to sustain its mission, meet its environmental stewardship responsibilities, and maintain its good relations with the community. In April 2005, the Army negotiated a grazing price that was 40 percent lower than the appraised value, pending a new appraisal that explicitly considered the unique military circumstances of grazing on the installation. The new appraisal, completed in August 2005, valued the lease at a price per animal unit that is 30 percent less than the fair market value assessed for other, similar grazing parcels to account for such unique military circumstances. See appendix V for details of federal grazing fees charged by these agencies.

Fees charged by private ranchers and state land agencies are higher than the BLM and Forest Service fees because, generally, ranchers and state agencies seek to generate grazing revenues by charging a price that represents market value for the land, the services provided, or both.
The average fee private ranchers charged in 2004 in the 11 western states was $13.30 per AUM and $13.80 per head of livestock, which represents market value, or the price that ranchers are willing to pay and receive for privately owned grazing lands in western states. This fee is determined annually through USDA surveys of private ranchers in 17 western states and is the average price ranchers (producers) reported as being paid in their area for privately owned nonirrigated grazing land. The National Agricultural Statistics Service calculates the average for each state, as well as for the 9 Great Plains states and different combinations of western states—11 western states, 16 western states, and 17 western states. As shown in table 9, the average private grazing fee for the states ranged from $8.00 per AUM in Arizona and Oklahoma to $23.00 per AUM in Nebraska. In fiscal year 2004, state land agencies in 15 western states charged grazing fees that ranged from $1.35 per AUM in California to $80 per AUM in Montana and $0.71 per acre in New Mexico to $56 per acre in South Dakota; 2 states did not charge fees because they do not have grazing on state trust lands. As table 9 shows, most states charged more than one fee: while 4 states charged a single fee for all of their state lands, 2 states charged two fees and 9 states charged a range of fees, depending on market rates or based on counties or areas with variable vegetation. The agencies manage state trust lands to help pay for schools; the lands were set aside for this purpose when each state was created. Like the federal government, the western state agencies lease their lands for grazing, among other uses. According to Interior officials, unlike the federal government, the western state agencies have a fiduciary responsibility to obtain revenues from grazing fees on state trust lands to support schools and education systems. 
Of the 15 state agencies charging fees, 6 agencies used competitive methods to determine the fair market value of their lands in fiscal year 2004; 6 used appraised prices or formulas to estimate the fair market value of their lands; and 3 used only formulas that do not start with a market price. Generally, the formulas adjusted the value of private grazing lands for different factors, such as the lack of fencing or water on state lands, or the price of beef. For example, Wyoming based its grazing fee on the average of private lease rates, as estimated by the Wyoming Agricultural Statistics Service, for the previous 5 years. The rate was then adjusted to account for changing resource conditions, market demand, and industry viability, and reduced by 20 percent to reflect contributions made by the lessee. (See app. VI for a discussion of the state fees.) As we noted in our 1991 report on the BLM and Forest Service grazing fee, fees can vary depending on the purposes for which they are charged. The BLM and Forest Service fee is set in accordance with the policy of preventing economic disruption and harm to the western livestock industry. The primary purpose of the BLM and Forest Service fee is not to recover the agencies’ administrative expenses. Consequently, in fiscal year 2004, the agencies spent $132.5 million to manage their grazing programs and collected $17.5 million in receipts, leaving a gap of about $115 million. If the purpose of the fee were to recover expenditures and if each agency were to charge a fee that recovered its expenditures, BLM would have had to charge up to $7.64 per AUM, and the Forest Service would have had to charge up to $12.26 per AUM in 2004, according to our analysis of the agencies’ estimated expenditures and the number of AUMs billed (7.6 million AUMs for BLM and 6.1 million AUMs for the Forest Service). 
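The per-AUM break-even figures cited above follow from dividing each agency's grazing expenditures by the AUMs it billed. The sketch below is a back-of-envelope check: the agency-level expenditure split is inferred from the reported per-AUM results, since only the $132.5 million combined total appears here.

```python
def cost_recovery_fee(expenditures: float, aums_billed: float) -> float:
    """Fee per AUM needed to recover an agency's grazing expenditures."""
    return expenditures / aums_billed

# AUMs billed in 2004 are from the report (7.6 million for BLM and
# 6.1 million for the Forest Service); the expenditure split between
# the two agencies is inferred for illustration.
blm_fee = cost_recovery_fee(58.1e6, 7.6e6)    # about $7.64 per AUM
fs_fee = cost_recovery_fee(74.8e6, 6.1e6)     # about $12.26 per AUM
print(round(blm_fee, 2), round(fs_fee, 2))
```

The two inferred expenditure figures sum to roughly the $132.5 million the agencies reported spending, consistent with the per-AUM fees stated above.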
While many argue that fees for grazing on federal lands should recover the agencies' expenditures, some grazing advocates counter that the agencies' expenses are high and reflect inefficiencies, and that setting the fee to recover those expenses would only encourage inefficient practices. The primary purpose of the BLM and Forest Service fee formula is also not to achieve fair market value prices. Instead, the fee was designed to reflect fees charged by private ranchers by including the forage value index, while also adjusting the value to reflect the net costs of conducting ranching business. It reflects net costs by including the beef cattle price index and the producer price index (measures of changes in ranchers' income and production expenses, respectively). While the base price initially used in the formula represented what Congress and economists considered fair market value, the adjustments included in the formula have resulted in a fee that has not tracked private fees. Consequently, while the fee charged by BLM and the Forest Service fluctuated up and down, it decreased overall by about 40 percent, from $2.36 per AUM in 1980 for BLM and $2.41 per AUM for the Forest Service to $1.43 per AUM for both agencies in 2004. Private grazing fees increased by 78 percent over the same period, from $7.53 per AUM to $13.40 per AUM. The federal fee increased to $1.79 per AUM in 2005. (See fig. 2.) If the primary purpose of the formula were to produce a fee equal to market value, the fee would likely not be the same as that charged on private or state lands for two key reasons. First, because BLM and Forest Service permits and leases are not bid competitively, the fees associated with those permits and leases are not set in the market. In lieu of a market for BLM and Forest Service grazing, the agencies could estimate the value of their lands based on comparable properties.
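The fee trends cited above can be verified from the endpoint values alone; all of the numbers below come from figures stated in this report:

```python
def pct_change(start, end):
    """Percentage change from a starting price to an ending price."""
    return (end - start) / start * 100

# BLM fee fell from $2.36 per AUM (1980) to $1.43 per AUM (2004).
blm_change = pct_change(2.36, 1.43)       # about -39 percent, i.e., "about 40 percent" lower
# Private fees rose from $7.53 to $13.40 per AUM over the same period.
private_change = pct_change(7.53, 13.40)  # about +78 percent
print(f"BLM: {blm_change:.0f}%, private: {private_change:.0f}%")
```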
However, it is generally recognized that private lands, which are leased at market prices, are not often comparable to public lands because the private lands have better forage and sources of water. The quality of forage and availability of water on state lands are considered more comparable to that on federal lands because the federal government granted some of its lands to various states when they entered the Union. In addition to differences in the quality of soil, forage, and water resources, private grazing fees differ from fees for public lands because private landowners often provide services that are not provided on BLM and Forest Service lands. For example, private landowners may provide daily livestock care—watering, fencing, feeding, and veterinary care—as well as maintaining fences, corrals, and water tanks. In addition, lessees of private land can themselves lease the land to other users, such as hunters, and generate revenue. Moreover, public access to private lands is limited, whereas access to federal land is generally not limited. State agencies also limit access to their lands, a factor that makes their lands less comparable to BLM and Forest Service lands for purposes of setting fees. Second, market values are difficult to use for BLM and Forest Service permits and leases because the prices ranchers have paid for their private ranches often include the capitalized value of any associated federal grazing permits and leases—called “permit value”—and advocates state that ranchers have paid full market value for the grazing permits and leases, albeit not in the form of a payment to the government. Although Interior and USDA do not recognize grazing permits and leases issued by BLM and the Forest Service as a legal property right, the real estate market realizes the value of holding these permits and leases. 
As a result, it is generally recognized that while the federal government does not receive a market price for its permits and leases, ranchers have paid a market price for their federal permits or leases—by paying (1) grazing fees; (2) nonfee grazing costs, including the costs of operating on federal lands, such as protecting threatened and endangered species (i.e., limiting grazing area or time); and (3) the capitalized permit value. Should BLM and the Forest Service charge a grazing fee that reflects market values, the ranchers' investments could be reduced accordingly, which complicates the use of the market value of the permits and leases. Because of these difficulties in estimating and using market value, some grazing experts have suggested establishing a competitive bidding process for federal permits and leases, as has been done for the McGregor Range, an Air Force bombing range. BLM manages grazing on this range using competitive bidding to set prices. In 2004, BLM received fees ranging from $5.00 to $14.50 per AUM for several leases that it offered at auction. (See app. V for more details.) Experts acknowledge, however, that significant changes to the current grazing system would be needed to allow competition, with uncertain results. In particular, range experts and agency officials point out a potential increase in administrative activities and expenditures for items such as changing operators, start-up time, and law enforcement that could occur with greater BLM and Forest Service involvement in competitive bidding. In addition, some change in the preference system on BLM and Forest Service lands might be needed to allow competitive bidding. However, some states have implemented a form of competitive bidding while retaining preference. For example, New Mexico allows ranchers with preference to match the best offer received when a lease is put up for competitive bid.
Finally, range experts and agency officials point out that competitive bidding could, in fact, reduce the grazing receipts collected because some allotments, given their location and quality of resources, would attract less competition than others. Others stated that increased competition could reduce the economic opportunities for some smaller permittees and lessees. It is difficult to identify the full cost of grazing on federal lands. Many federal agencies have their own grazing programs, while others support grazing in carrying out their responsibilities. Nevertheless, an analysis of federal expenditures and receipts provided by the agencies demonstrates that BLM and the Forest Service are spending much more on grazing than they are generating in receipts. Moreover, the existence of permit value indicates that while ranchers may have paid full value for grazing privileges, the agencies have not captured these payments in their grazing fee. These shortfalls reflect legislative and executive branch policies to support local economies and ranching communities by keeping grazing fees low. BLM and the Forest Service are charging a fee that supports this purpose. The current fee for livestock grazing has not changed significantly since it was first established a quarter century ago, largely because of controversy over the purpose of the fee and the role of grazing in contributing to ranching economies and communities and in degrading rangeland ecosystems. Although a budgetary analysis such as the one we conducted does not consider economic, environmental, or societal costs and benefits, it does demonstrate the need to periodically reexamine programs to assess their relevance and relative priority for a changing society, including how much of a program's financing should be paid by those who benefit most directly. Taking a hard look at existing programs and carefully considering their goals and their financing is a challenging task.
However, faced with a growing and unsustainable fiscal imbalance, the government cannot accept all of its existing programs, policies, and activities as "givens." Now, as in the 1990s, tightened federal budgets and a persistent federal deficit create the need to examine federal programs that spend more funds than they generate in receipts and to determine whether the purposes of these programs warrant increasing user fees. Although other federal agencies' grazing programs are much smaller than BLM's and the Forest Service's, they demonstrate the application of competitive and market-based approaches to charging user fees for grazing programs and recovering some program expenditures. Depending on the approach taken to set and implement a grazing fee for lands managed by BLM and the Forest Service, the federal government could close the gap that exists between those programs' grazing expenditures and receipts. But any change in the current fee may necessitate that Congress reconsider the purpose of the fee and the policy trade-offs of different fees. In addition, an evaluation of the difficulties of implementing the chosen fee would need to be conducted in order to understand the consequences for the agencies' programs and expenditures and to deal fairly with such issues as preference and permit value. We provided USDA, Commerce, DOD, DOE, Interior, and Justice with a draft of this report for review and comment. Interior and the Forest Service provided written comments (see apps. VII and VIII). DOD did not provide official written comments, but the Air Force and Army provided technical comments, which we incorporated as appropriate. DOE also did not provide official written comments but provided technical comments, which we incorporated as appropriate. Commerce and Justice did not have comments on the draft report. In its comments, Interior neither agreed nor disagreed with the findings in the report.
In general, the department stated that the report accurately recognizes that differences in resource conditions and legal requirements can cause variations in livestock grazing fees and pointed out the difficulty in capturing the costs of grazing programs. However, Interior stated that the report did not sufficiently discuss significant indirect benefits from grazing to other BLM programs that are difficult to quantify. We do not agree with this point. We believe that the report presents the facts about BLM's grazing program as described in many different documents BLM provided to us and as discussed in multiple meetings. Interior also provided several specific comments clarifying the text of the report. These comments and our response can be found in appendix VII. In addition to its comments on BLM's grazing program, the department enclosed technical comments on the U.S. Fish and Wildlife Service and Reclamation programs, which we incorporated as appropriate. The Forest Service provided coordinated comments for USDA. The Forest Service neither agreed nor disagreed with the findings in the report. The agency stated that the report accurately recognizes that the Forest Service fee is set in accordance with an executive order that maintains the fee formula established in FLPMA, as amended by PRIA. Further, it stated that the report accurately recognizes that the fee is not related to the cost of Forest Service administration of the grazing program. In addition to these comments, the Farm Service Agency and the National Agricultural Statistics Service within USDA provided technical comments, which we included as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of this report until 30 days from the date of this letter.
At that time, we will send copies of this report to interested congressional committees; the Secretaries of Agriculture, Commerce, Defense, Energy, and the Interior; the Attorney General of the United States; the Administrator of the Environmental Protection Agency; the Director of the Office of Management and Budget; the directors of the 17 state land agencies; and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-3841 or nazzaror@gao.gov. Contact points for our Offices of Public Affairs and Congressional Relations may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IX. We provided information on the (1) extent of livestock grazing on, and program purposes for, land managed by 10 federal agencies; (2) amount spent in fiscal year 2004 by these agencies and other federal agencies that have grazing-related activities, to manage livestock grazing on public lands; (3) total receipts collected for grazing privileges by the 10 federal agencies with grazing programs and the amounts disbursed to counties, states, or the federal government; and (4) grazing fees charged by the 10 federal agencies, western states, and private ranchers, and the reasons for any differences among the fees. We performed the majority of our work at the 10 federal agencies that have programs to allow private ranchers to graze livestock on portions of the land they manage. These agencies were the Department of the Interior’s (Interior) Bureau of Land Management (BLM), National Park Service, U.S. Fish and Wildlife Service, and Bureau of Reclamation (Reclamation); the U.S. 
Department of Agriculture’s (USDA) Forest Service; the Department of Defense’s (DOD) Army, Army Corps of Engineers (Corps), Air Force and Navy; and the Department of Energy (DOE). We also performed work at other federal agencies that have grazing-related activities. These agencies are Interior’s U.S. Geological Survey (USGS) and Solicitor’s Office; USDA’s Agricultural Research Service; Animal and Plant Health Inspection Service, Cooperative State Research, Education and Extension Service, Farm Service Agency, National Agricultural Statistics Service, Risk Management Agency, Natural Resources Conservation Service, and Office of General Counsel; the Environmental Protection Agency; the Department of Commerce’s National Marine Fisheries Service; and the Department of Justice. To determine the purposes of livestock grazing programs managed by the 10 federal agencies, we reviewed authorizing legislation and agency policies and regulations, and we interviewed agency headquarters and field office officials. Through our review of legislation, policies, and regulations, we determined that we would not include Alaska in our analysis because it is treated differently under grazing law. We identified field offices to visit with the goal of visiting as many agencies as possible in an efficient manner. We visited at least one field office for every agency except for the Corps, Navy, and DOE. We visited BLM field offices in Medford, Oregon, and Las Cruces, New Mexico; a Forest Service office in Santa Fe, New Mexico; the National Park Service’s Dinosaur National Monument in Colorado and Utah; U.S. Fish and Wildlife Service’s Klamath Basin Wildlife Refuge Complex in northern California and southern Oregon; Reclamation’s Albuquerque Area Office in New Mexico; Cannon Air Force Base in Clovis, New Mexico; and Fort Hood Army Installation in Killeen, Texas. 
To determine the extent of grazing on land managed by the agencies, we obtained agency data for 2004 on acres and animal unit months (AUM). BLM maintains a centralized Rangeland Administration System that formally tracks and reports acres and AUMs on its lands as well as on other agencies’ lands (e.g., DOE’s Idaho National Laboratory and various Reclamation locations) where it manages grazing activity on behalf of these agencies. The Forest Service uses an information system, called INFRA, to centrally track and formally report acres, head months, and AUMs. To determine if the AUM and acreage data produced by BLM’s Rangeland Administration System and Forest Service’s INFRA system were sufficiently reliable for use in this report, we interviewed system managers about the processes used to manage the data in the systems and conducted a “walk-through” of the system with these managers. In addition, we tested the completeness and accuracy of a selection of AUM and acreage data using fiscal year 2004 system reports at the BLM field and Forest Service offices. We asked field office officials to provide us their 2004 report that specifically showed, by permit or lease, the number of AUMs authorized and billed and the fee charged. We reviewed all the files at agencies with smaller grazing programs—those with up to 25 permits or leases at an office—and selected 10 percent of files at the two agencies that had large grazing programs—250 and 500 allotment files per office. We then verified that the data in the systems were the same as data in the files by tracing the data through actual permit and lease documents, bills, and receipts showing that payment had been submitted. We determined—based on these reviews and, if necessary, follow-up interviews with local managers— that the data reported were reliable for purposes of this report. Unlike BLM and the Forest Service, the National Park Service, U.S. 
Fish and Wildlife Service, Reclamation, and DOD do not have similar management information systems that formally track and centrally report acres and AUM data on specific livestock grazing activities. For these agencies, we collaborated with agency headquarters and field office officials to design and test a data collection instrument tailored for each agency, which we sent to field offices. To design and test the data collection instruments, we visited several agencies’ field offices and followed the same process we used at BLM and the Forest Service to sample files, review relevant documents, track AUM data, and interview local officials to verify the completeness and accuracy of data that they could submit to us. We performed this work at the Dinosaur National Monument, Klamath Basin Wildlife Refuge Complex, Reclamation’s Albuquerque Area Office, Cannon Air Force Base, and Fort Hood Army Installation. To help ensure the reliability of the data we received from the agencies, we reviewed the data to determine whether they were complete and accurate. When we found data that were missing or appeared to be inaccurate, we called appropriate agency officials to discuss, and if necessary, correct the data. Based on these reviews and appropriate follow-up interviews, we determined that the data reported were sufficiently reliable for purposes of this report. To determine the expenditures the 10 federal agencies incurred in fiscal year 2004 to manage specific livestock grazing on federal lands they manage, total receipts collected for grazing privileges by these agencies, and the amounts disbursed to counties, states, or the federal government, we obtained agency expenditure, receipt, and disbursement data for fiscal year 2004. BLM maintains an Activity Based Costing System that centrally tracks and formally reports expenditures on livestock grazing activities, the receipts that grazing generates, and amounts disbursed. 
BLM officials used this system to identify the amount of direct and indirect expenditures the agency incurred for livestock grazing activities. The Forest Service does not have a cost-accounting system, but rather reports expenditures for items in its budget, called budget line items. The agency used expenditure reports for these line items, in addition to its WorkPlan system (which shows the forests' intended work plans at the beginning of a fiscal year), to estimate the amount of expenditures on grazing activities in fiscal year 2004. The Forest Service direct expenditures include expenditures from the Forest Service grazing line item, expenditures from its watershed and vegetation line item, and estimated expenditures from its General Management and other cost pools. Because the watershed and vegetation line item can be spent for all programs and not just the grazing program, the Forest Service allocated a portion of these expenditures—11 percent—using WorkPlan, which is a tool for planning and budgeting program work at the forest level. The Forest Service uses six cost pools to allocate indirect activities and expenditures: General Management, Public Communications, Ongoing Business Services, Common Services, Office of Workers' Compensation, and Unemployment Compensation Insurance. The General Management pool and some of the activities in the Common Services pool are considered direct or support costs, rather than indirect costs. These are included as direct expenditures. To estimate expenditures from its General Management and other cost pools, the agency attributed a share of the expenditures proportional to the amount of grazing and related watershed and vegetation funds put into the pools for the fiscal year. We did not validate the data provided by the agencies or test their financial management and accounting systems.
We did contact USDA’s and Interior’s Office of Inspector General and representatives of KPMG, a private contractor that annually audits the agencies’ financial statements, to determine if there was any reason we could not use expenditure data in this report. There were none. In addition, we reviewed the agencies’ internal controls over grazing receipts through our testing of the agencies’ grazing files and AUM data. Unlike BLM and the Forest Service, the National Park Service, U.S. Fish and Wildlife Service, Reclamation, and DOD services do not all formally track and centrally report specific livestock grazing expenditures, receipts, and disbursements. Using the same data collection instrument described above to obtain acres and AUM data from these agencies’ field units, we also requested their estimates of expenditures and receipts. In addition, we asked headquarters officials to query their financial management and accounting systems in an effort to extract specific receipt and disbursement data related to livestock grazing activities. When necessary, we conducted follow-up interviews with agency headquarters and field office officials to ensure that the data were reliable enough for use in this report. We did not validate these financial management and accounting systems. To identify livestock grazing expenditures that other federal agencies may incur to support livestock grazing, we first developed a list of agencies and activities that are conducted that are related to grazing on public lands. To develop this list, we reviewed reports about livestock grazing on public lands, interviewed BLM and Forest Service officials, and interviewed experts at the Society for Range Management, as well as the author of a recent study on the costs of the federal grazing program. 
We then contacted the agencies to confirm that the activities they conduct are related to grazing and are conducted on public lands; if the agencies conducted activities that support grazing on public lands, we then requested estimated expenditures for fiscal year 2004. To that end, we contacted officials at USGS; USDA's Agricultural Research Service, Animal and Plant Health Inspection Service, Cooperative State Research, Education, and Extension Service, Farm Service Agency, National Agricultural Statistics Service, Natural Resources Conservation Service, and Risk Management Agency; and the Environmental Protection Agency. We asked these officials to estimate, if possible, the expenditures they incur in support of livestock grazing activities. To determine agency expenditures on consultations for threatened and endangered species, we requested the data from the two agencies involved, the U.S. Fish and Wildlife Service and the National Marine Fisheries Service. To determine agency expenditures for litigation related to livestock grazing, we contacted the Department of Justice, Interior's Office of the Solicitor, and USDA's Office of General Counsel. Their representatives estimated the cost of their time devoted to livestock grazing cases in fiscal year 2004 and reported that no payments were made for attorneys' fees in the same period. The National Park Service, U.S. Fish and Wildlife Service, Reclamation, and the DOD services reported that they were not involved in any litigation related to livestock grazing in fiscal year 2004. To determine the fees charged in 2004 by the 10 federal agencies, western states, and private ranchers, and the reasons for any differences among the fees, we relied on several different sources. For the fees charged by BLM and the Forest Service, we contacted BLM and Forest Service officials, who provided us with the 2004 fee and an explanation of the formula used to calculate it.
We also discussed the formula and its components with the staff of the National Agricultural Statistics Service and reviewed historical studies of the formula and the fees it has produced. We gathered National Park Service, U.S. Fish and Wildlife Service, Reclamation, and DOD service fees using the data collection instrument described above and also gathered information on the methods used to establish the fees. For agencies that provided fee data as a per-acre price, we converted the fees to a per-AUM price by totaling the receipts and any offsets to receipts and dividing the total by the number of AUMs approved for use on that land. We reviewed agencies' discussion of their user fees in their Chief Financial Officers' Annual Reports, but we did not review the agencies' compliance with the Independent Offices Appropriation Act or OMB Circular A-25, which lay out conditions under which user fees can be charged. To determine the fees that the 17 western states charged ranchers in 2004 to graze on their state lands, and the basis for those fees, we conducted telephone interviews of program officials in the 17 states using a semistructured interview format. To determine the fees private ranchers charged in 2004 to graze on their private lands, we used the results reported by USDA's National Agricultural Statistics Service, which conducts a survey of, among other things, fees charged by private ranchers for livestock grazing on their private lands in the 17 western states. The agency's staff calculates average fees for each state and the average fees charged in different groups of Great Plains and western states: 9 Great Plains states, 11 western states, 16 western states, and 17 western states. We also interviewed National Agricultural Statistics Service officials about the agency's survey methodology for gathering data on private grazing leases and the calculation of the BLM and Forest Service fee components.
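The per-acre-to-per-AUM conversion described above can be sketched as a one-line calculation; the dollar and AUM inputs below are hypothetical, chosen only to illustrate the method:

```python
def fee_per_aum(receipts, offsets, aums_approved):
    """Convert per-acre fee data to a per-AUM price: total dollars
    (receipts plus any offsets to receipts) divided by the number of
    AUMs approved for use on that land."""
    if aums_approved <= 0:
        raise ValueError("AUMs approved must be positive")
    return (receipts + offsets) / aums_approved

# Hypothetical example: $12,000 in receipts plus $3,000 in offsets
# across 2,500 approved AUMs works out to $6.00 per AUM.
print(fee_per_aum(12_000, 3_000, 2_500))  # 6.0
```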
To identify additional factors that should be considered in evaluating federal grazing expenditures and fees, we conducted an extensive search of studies that go beyond a limited federal budgetary analysis of livestock grazing activities and attempted to identify social, environmental, and other economic costs and benefits that both advocates and opponents of grazing use to make their respective arguments. We also interviewed experts at New Mexico State University, Oregon State University, Colorado State University, and the University of Montana who have conducted relevant research to obtain their views of these various livestock grazing issues, as well as issues related to fees. We conducted our work between August 2004 and July 2005 in accordance with generally accepted government auditing standards. To place the budgetary evaluation presented in this report in a larger context, this appendix briefly discusses conflicting views on key effects of federal lands grazing: local economic development, rural community and quality-of-life values, and rangeland ecosystems and management. The purpose of the appendix is to present the conflicting views on grazing-related issues; as such, we did not verify the accuracy of the positions and statements presented by advocates and opponents of grazing. A comprehensive analysis of the effects should quantify and capture not only the budgetary expenditures and receipts discussed in this report but also the impact on regional and local economic development and the economic costs and benefits—which are often unquantified—to society. However, a comprehensive evaluation is not yet possible because, despite years of extensive research and evaluation, the exact nature of many of these effects is still unknown, unresolved, or unquantifiable.
For example, opponents of grazing believe that grazing diminishes ecosystem values by reducing biodiversity and disrupting wildlife habitats, the lost value of which is borne by the nation and future generations and which the federal budget and agencies' budgets cannot entirely capture. On the other hand, advocates of grazing believe that the government and the public benefit from livestock grazing because it reduces the federal government's cost for land management and contributes to preserving open space, both values that the federal budget does not capture. According to grazing advocates, ranching on federal land is critical to local economies, particularly in the western states, and many small towns across the West that depend on local ranchers' business would not survive without federal grazing. In these localities, many ranchers who rely on public lands could be driven out of ranching because, without access to public lands, their ranches would not be economically viable. In addition, studies have shown that grazing is beneficial to rural economies because it provides them with a more diverse economic base in conjunction with other compatible land uses, such as recreational activities. Advocates also note that while some economic studies indicate that grazing on federal land is of minimal economic importance, these studies consider ranchers' dependence on public forage only on an average annual basis and not on a seasonal basis. They point out that ranchers rely on forage from federal lands during certain parts of the year, particularly during the summer and fall grazing season, and that ranchers' dependence on federal lands becomes quite important when only the grazing season is taken into account. In contrast, opponents point to studies showing that, for many of the western states, federal lands provide only a small percentage of the total forage needed to support ranchers' herds and do not contribute significantly to local economic production and income.
For example, one study that examined the reliance of ranchers on federal land in 11 western states showed that only $1 of every $2,500 of income (0.04 percent) earned in those states is directly associated with grazing on federal lands. This minimal contribution also holds steady in more grazing-dependent counties, according to this study. Of the 102 such counties analyzed, only 11 were found to have more than 1 percent of total income associated with grazing on public lands. The budgetary evaluation of grazing on public lands does not reflect the contribution of grazing to the quality of life in rural communities or to individual ranchers' quality of life. Advocates point to the value of preserving the tradition and culture of rural ranching communities as an important contribution of grazing. These advocates believe that because federal land grazing at current rates provides the support ranchers need to stay in business, grazing prevents a growing trend toward urbanization and sprawl in rural areas. The development of ranch lands reduces the availability of open space for scenic pleasure and recreational opportunities, reduces wildlife habitat, and increases the infrastructure and tax burden on nearby communities. Further, federal managers point out that their support of ranchers and rural communities maintains a buffer around federal lands—for example, military lands—preventing development along these boundaries. Similarly, grazing advocates point out the importance of grazing to the quality of life of individual ranchers, which is another factor not captured by a budgetary analysis. Studies have documented the importance of quality of life (consumptive value) in ranchers' decisions to purchase or remain in business despite economic pressures.
These studies have compared the future earning potential of the land from ranching with the market values for ranches in many rural communities and found that ranchers have been willing to accept rates of return on their investment that are below market value, which indicates that the desire to own a ranch is not motivated entirely by profit, but also by the less tangible benefit to quality of life that the rural lifestyle offers. While the contribution of ranching to the quality of life and well-being of a segment of society is widely recognized, grazing opponents question the role of the government in protecting ranchers’ social or economic way of life at a cost to all taxpayers. In the opponents’ view, preserving the heritage of “western cowboys” by allowing them the use of public lands is a subsidy to the livestock industry. The opponents question the use of continuing subsidies, rather than a functioning free market, and question the choice of subsidizing one lifestyle or chosen profession over another— for example, teachers. Opponents also disagree with the argument that grazing subsidies are essential to preserving open spaces and stopping development. They point out that many factors, such as an individual rancher’s wealth and commitment to ranching as a way of life, will ultimately influence the decision to continue ranching. Population growth and demand for housing will widen the disparity in land values between grazing and development and put some ranchers—especially those facing financial pressures—in a position to sell. However, opponents note that the replacement of cows with condominiums is not a foregone conclusion of changes in grazing policy. Subdividing and developing ranch land is primarily driven by market demand for the land, and market conditions for subdividing ranch lands are far from uniform across the West. For example, it would not be economically feasible to develop lands in some remote areas of the West. 
However, acknowledging the reality of development of the ranch lands in some geographic areas, opponents believe that subsidized grazing on public lands is neither an efficient nor an effective means of preserving open spaces. They recommend other tools, such as zoning regulations or land purchases through conservation trusts, to more effectively protect the land from urban sprawl and development. According to grazing advocates, ranchers are the principal managers of federal land, and if they cease operation, federal agencies would have to pay others to manage these lands, thereby raising budgetary costs to the government. By grazing the land, ranchers help to maintain rangeland ecosystems—particularly those east of the Rocky Mountains—that developed historically and naturally with herbivory by wild animals such as buffalo, antelope, deer, and elk. According to advocates, grazing also helps to manage weeds, including invasive plant species, and control fires by preventing excessive biomass buildup or by reducing the intensity of fires that do start—expenses that would otherwise shift to federal agencies. For example, advocates maintain that sheep grazing reduces the need to use herbicides on the range because the sheep eat noxious plants that other animals avoid. Advocates also contend that ranchers provide a valuable service to federal agencies by reporting problems on public lands, such as fires and illegal activities, and assisting in search-and-rescue operations. Furthermore, grazing advocates assert that modern rangeland management facilitates the maintenance and health of the land because ranchers understand the science behind ranching and make decisions that preserve and improve the health of the rangeland, including wildlife habitat. In general, they point to the increased number of wildlife and game animals in recent years on the lands with ranch and water developments. 
For example, one study has shown that biodiversity of vegetation and animals is higher on rangelands managed for grazing than on small ranches. They say that water improvements made by ranchers are the reason behind enhanced wildlife habitat and numbers and contribute to lower maintenance costs by the agencies. To the contrary, grazing opponents argue that grazing has contributed to, and increased, the federal government’s land management costs. For example, they note, by eliminating grass and low-lying vegetation in ponderosa pine forests, grazing has contributed to increased density of conifer trees and shrubs and made these forests more prone to large, intense fires that are costly to fight. Grazing opponents also note that grazing contributes to the spread of invasive species, thereby increasing agencies’ costs for managing their rangelands. For example, opponents state that livestock transport seeds; weaken and remove native plants, such as grasses; disturb the soil; and help invasive species to take hold and grow. Grazing opponents also note that grazing in general, and overgrazing in particular, have harmed plants and wildlife on federal lands by exposing soils to erosion, disrupting normal wildlife behavior, and reducing biodiversity. For example, an environmental group states that grazing has contributed to the listing of 22 percent of federal threatened and endangered species. Furthermore, livestock can be detrimental to native wildlife because they can transmit diseases, compete for food, disrupt normal behavior patterns, or damage habitat. For example, because some invasive plants can better tolerate intensive grazing than most native plants, they can prosper and drive out other native plants. The U.S. Fish and Wildlife Service has argued that grazing can cause habitat degradation and disrupt normal behavior patterns of wildlife such as breeding, feeding, or sheltering. 
For example, livestock management practices, such as fencing rangelands, can create obstacles for many native wildlife species, limiting their movement in search of food and shelter. Similarly, livestock protection has played a large role in eliminating native predators, which are often killed by private ranchers or federal agencies to protect the livestock. Finally, the opponents note that livestock grazing is also a threat to water quality when, for example, the livestock trample stream banks, causing them to erode and increase sedimentation or spread infectious water-borne diseases to water supplies. This appendix provides detailed information on grazing permits and leases on lands managed by BLM and the Forest Service. The first section of this appendix provides information on acres available for grazing on lands the agencies manage, the AUMs approved for grazing, and the AUMs billed in fiscal year 2004 for BLM and grazing year 2004 for the Forest Service. The second section categorizes BLM and Forest Service permits and leases by size. This section provides a snapshot of the grazing that occurred on BLM and Forest Service lands in 2004. The acres of BLM and Forest Service land available for grazing each year can change, depending on the results of environmental assessments conducted on grazing allotments; and the amount of grazing that is allowed each year can change, depending on annual assessments of forage and range conditions. Both agencies measure the number of acres of their lands available for grazing by allotment each year, but the two agencies use different terms to measure the amount of grazing. 
BLM calls this amount “active” or “authorized,” and the Forest Service calls this amount “permitted.” Similarly, BLM refers to the amount of grazing that it bills for annually—which can vary from the amount it authorizes because of range or climate conditions—as “billed,” and the Forest Service refers to this amount of grazing as “authorized.” We use the term “AUMs Approved” to refer to the amounts of grazing authorized by BLM and permitted by the Forest Service and “AUMs Billed” to refer to the amount of grazing for which BLM billed ranchers and the amount of grazing authorized each year on Forest Service lands. Table 10 shows the acres, AUMs approved, and AUMs billed for BLM’s field offices in fiscal year 2004. Table 11 shows the acres available for grazing, approved AUMs, and billed AUMs in grazing year 2004 for Forest Service administrative offices and grasslands. The data on acres include acres in active and vacant allotments but not closed allotments, which are not available for grazing. The data on AUMs include data that the Forest Service calls “head months.” Unlike BLM, the Forest Service uses two methods to tally the amount of grazing that occurs—AUMs and head months. The agency uses the term AUM to refer to the amount of forage grazed by livestock, while it uses the term head months to refer to the number of livestock (head) that are grazed and that are subject to billing. We used the Forest Service head month data because they are equivalent to BLM’s data on AUMs, but we used the term AUM to simplify the comparison with BLM and other agencies’ grazing data. Because the number of AUMs per permit or lease can vary greatly, the number of AUMs controlled by permittees or lessees also varies greatly. Tables 12 through 16 show the number of BLM and Forest Service permits and leases, and AUMs, by permit size. 
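As a minimal illustration of these unit conventions, the sketch below treats cattle head months as equivalent to AUMs (as this appendix does) and applies the five-sheep-per-AUM conversion used elsewhere in this appendix for the Forest Service sheep data; the function names are hypothetical, not agency terminology.

```python
# Illustrative sketch of the AUM conversions used in this appendix.
# Function names are hypothetical, not agency terminology.

def cattle_head_months_to_aums(head_months: float) -> float:
    """Cattle head months are treated as equivalent to AUMs."""
    return head_months

def sheep_head_months_to_aums(head_months: float) -> float:
    """Five sheep grazed for one month consume 1 AUM of forage."""
    return head_months / 5.0

# Hypothetical example: a permit billed for 1,200 cattle head months
# plus a band of 500 sheep grazed for 3 months (1,500 sheep head months).
total_aums = cattle_head_months_to_aums(1200) + sheep_head_months_to_aums(1500)
print(total_aums)  # 1500.0
```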
When considering the data, note that multiple permits or leases may be contained on a single allotment, just as one permit or lease may span multiple allotments. Note also that several operators may share one permit or lease, just as one operator may possess multiple permits or leases; therefore, the number of permits and leases does not necessarily correlate to the total number of operators. Table 12 shows the size of BLM permits and leases, using approved AUMs in fiscal year 2004. The data do not include permits and leases with fewer than 2 AUMs. The Forest Service provided data on permit size for cattle and sheep in regions 1 through 6, those regions with lands in western states. Table 13 shows the data for cattle, which do not include horses or other livestock and do not include permits with fewer than 2 AUMs of grazing for cattle. Forest Service sheep permits are shown in table 14. For the purposes of conversion, five sheep equal 1 AUM. In addition to the sheep, an insignificant number of horses are included in the data because, in some cases, permittees may keep a horse for herding the sheep. For comparison purposes, the size of cattle and calf operations in the United States is shown in table 15. The size of beef cow operations is shown in table 16. Rangelands in the United States have been used for livestock grazing since the expansion and settlement of the western frontier. Ranchers have grazed livestock on lands managed by the Forest Service and its predecessor since the late 1890s and on lands managed by BLM and its predecessor since 1934. Historically, BLM and Forest Service fees were established to achieve different objectives—to recover administrative expenses and to reflect livestock prices, respectively—but the agencies began using the same approach to setting fees in 1969. 
Over the years, the agencies, as well as outside entities, have conducted numerous studies attempting to establish a grazing fee that meets the objectives of multiple parties. The current fee for lands managed by BLM and by the Forest Service in 16 western states is based on a formula that estimates ranchers’ ability to pay, and was established in 1978 based on studies conducted in the 1960s and 1970s. This appendix discusses the current fee, historical fees, and key grazing studies and their findings. In 2004, the grazing fee for lands managed by BLM and by the Forest Service in 16 western states was $1.43 per AUM—or the amount of forage needed to sustain a cow and her calf for 30 days. This fee is set annually according to a formula established in the Public Rangelands Improvement Act of 1978 (PRIA) and extended indefinitely by Executive Order 12548. The formula is:

Fee = $1.23 x (FVI + BCPI – PPI)/100

where

$1.23 = the base value, or the difference between the costs of conducting ranching business on private lands, including any grazing fees charged, and public lands, not including grazing fees. The costs were computed in a 1966 study that included 10,000 ranching businesses in the western states.

FVI = Forage Value Index, or the weighted average estimate of the annual rental charge per head per month for pasturing cattle on private rangelands in 11 western states (Arizona, California, Colorado, Idaho, Montana, New Mexico, Nevada, Oregon, Utah, Washington, and Wyoming) divided by $3.65 per head month (the private grazing land lease rate for the base period of 1964-68) and multiplied by 100.

BCPI = Beef Cattle Price Index, or the weighted average annual selling price for beef cattle (excluding calves) in the 11 western states divided by $22.04 per hundredweight (the beef cattle price per hundred pounds for the base period of 1964-68) and multiplied by 100. 
PPI = Prices Paid Index, for selected components from USDA’s National Agricultural Statistics Service’s Index of Prices Paid by Farmers for Goods and Services, adjusted by different weights (in parentheses) to reflect livestock production costs in the western states [fuels and energy (14.5), farm and motor supplies (12.0), autos and trucks (4.5), tractors and self- propelled machinery (4.5), other machinery (12.0), building and fencing materials (14.5), interest (6.0), farm wage rates (14.0), and farm services (cash rent) (18.0)]. PRIA limited the annual increase or decrease in the resulting fee to 25 percent. It also established the fee formula for a 7-year trial period and required that the effects of the fee be evaluated at the end of that period. Although the fee formula under PRIA expired in 1986, the use of the fee formula was extended indefinitely by Executive Order 12548. The executive order requires the Secretaries of the Interior and Agriculture to establish fees according to the PRIA formula, including the 25 percent limit on increases or decreases in the fee. In addition, the order established that the fee should not be lower than $1.35 per AUM. As shown in figure 3, the formula results have been limited by the PRIA and executive order constraints, but neither the formula results nor the PRIA fee has mirrored fees charged for grazing on private lands. According to different economic studies and our evaluation of the PRIA fee structure in 1991, the fee is kept low by including the BCPI and PPI, which are factors that take into account ranchers’ “ability to pay.” Figure 4 shows the value of each PRIA component from 1979 through 2004. Table 17 shows the data used in the previous two figures for easier reading of the numbers. 
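The fee calculation and its constraints can be expressed as a short computation. The sketch below is a minimal illustration, not agency code; the $1.23 base value, the 25 percent annual change limit from PRIA, and the $1.35 floor from Executive Order 12548 come from the description above, while the index values in the example are hypothetical.

```python
# Minimal sketch of the PRIA grazing fee calculation (hypothetical
# helper, not agency code). Indexes are expressed on the scale where
# 100 = the 1964-68 base period, as described above.

def pria_fee(fvi: float, bcpi: float, ppi: float, prior_fee: float) -> float:
    """Annual grazing fee per AUM under the PRIA formula."""
    fee = 1.23 * (fvi + bcpi - ppi) / 100.0

    # PRIA limits any annual increase or decrease to 25 percent.
    fee = min(fee, prior_fee * 1.25)
    fee = max(fee, prior_fee * 0.75)

    # Executive Order 12548: the fee may not fall below $1.35 per AUM.
    return round(max(fee, 1.35), 2)

# Hypothetical indexes that net to 100 reproduce the $1.23 base value,
# which the executive order then raises to the $1.35 floor:
print(pria_fee(fvi=300.0, bcpi=240.0, ppi=440.0, prior_fee=1.43))  # 1.35
```

With a prior-year fee of $1.43, even a large jump in the indexes is capped at a 25 percent increase, which is how the formula results shown in figure 3 can diverge from the PRIA fee actually charged.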
Grazing fees have been charged for lands managed by the Forest Service since 1906—9 years after grazing was authorized on forest reserves—and for lands now managed by BLM since 1936, 2 years after the enactment of the Taylor Grazing Act. Before 1906, livestock could graze on federally managed lands for free, and livestock operators objected to being charged. Originally, the fee charged by the Forest Service and BLM was $0.05 per AUM for cattle, but the fee increased by 1968 to $0.56 per AUM for Forest Service permits and $0.33 per AUM for BLM leases and permits. Until 1969, the approach used by the Forest Service and BLM for establishing grazing fees differed. The original Forest Service fee was based on the rental value of local, private grazing tracts, while the original BLM fee was based on the agency’s administrative expenses. Beginning in the 1920s and continuing through 1968, the Forest Service based its fee on beef and lamb prices, as determined through studies it conducted. BLM (and its predecessor) also conducted studies of its fee approach. In 1946, the year that BLM was created, one of these studies supported the use of administrative expenses as a basis for the fee. However, in 1951, BLM began increasing its fees, and in 1958, it shifted its approach to one that was similar to the Forest Service approach—that is, based on livestock prices. Throughout the 1960s, BLM charged fees that factored in livestock prices. For example, the 1958 fee increased from $0.19 per AUM to $0.22 per AUM in 1959 and 1960, and it decreased to $0.19 per AUM in 1961 and 1962, reflecting decreasing livestock prices. Since 1969, the Forest Service and BLM have used a uniform approach to establish a grazing fee. After a 1960 study conducted for the Bureau of the Budget—the predecessor of the OMB—by an interdepartmental grazing committee, the Bureau set a new fee schedule for the agencies to achieve fair market value for federal grazing permits and leases. 
An extensive survey in 1966 of the western livestock industry, called the Western Livestock Grazing Survey and Analysis, and a 1968 review of that survey data determined that a fair market value for federal grazing permits and leases would be $1.23 per AUM. The $1.23 per AUM value equalizes the costs of conducting business between private ranch lands and federal lands. It is based on the premise that the costs of conducting grazing activities on federal lands should be competitive and comparable to the costs on private land. Because the new fee, if imposed all at once, would have increased Forest Service fees by $0.72 per AUM and BLM fees by $0.90 per AUM, a 10-year phase-in period was scheduled. Before the new fee could be implemented, drought and continued debate caused several delays in the phase-in schedule, and in 1976, the Congress passed the Federal Land Policy and Management Act (FLPMA), which required the Secretaries of Agriculture and of the Interior to conduct a study to establish a fee that was equitable both to the United States and to holders of grazing permits and leases. The 1977 study, Study of Fees for Grazing Livestock on Federal Lands, written by a task force of Forest Service and BLM officials, evaluated several different formulas for setting a grazing fee. The goal was to establish a fee that achieved multiple objectives, including getting fair market value for the forage while also reflecting the value of grazing to the rancher. The fee was also to contain regular adjustments to account for changes in fair market value over time. On the basis of the 1977 study, Congress enacted PRIA with the task force’s recommended formula on a 7-year trial basis. The agencies studied the effectiveness of the formula after 7 years, as required in PRIA, and academic economists sought to establish better ways to set a fee, but the use of the formula was extended indefinitely by executive order and has remained unchanged. 
Two studies by the agencies, one in 1986 and its update in 1992, evaluated the components of the PRIA formula and its results. The reports identified technical issues with the formula, including the fact that the BCPI does not include prices for calves—which are produced on western lands—and does include fat cattle (cattle fattened on grain for slaughter), which are not produced on western lands. The reports also noted that the PPI does not include a cost-of-living component; components of farm origin (feed, feeder livestock, seed, and fertilizer); or taxes; these omissions increase the weight of factors that are affected by inflation, such as fuel costs. Finally, the reports identified the need to update the base value ($1.23 per AUM) to reflect current market values rather than 1960s data. Critics of the reports stated that the agencies did not evaluate the effectiveness of the PRIA formula; disagreed with the agencies’ appraisal of private lands and fees; and identified incorrect statistical indexing, such as the use of inflation factors instead of a livestock-relevant factor. They also stated that the agencies failed to recognize the different costs of operating on federal and private land. According to the critics, one of these costs is the value of permits and leases, which is included in the value of privately owned ranches. The livestock industry believes that this value should be included in the calculation of the $1.23 base value (subtracted out as a cost of doing business). In 1993, in response to a perceived need to increase fees to capture the economic value of forage, another Forest Service and BLM study examined the potential for an incentive-based grazing fee. 
The report identified the “grazing fee dilemma” as one in which the federal government is not receiving full market value for its forage, but as one in which ranchers are paying full market value by paying (1) the fee; (2) nonfee grazing costs (including costs for operating on federal lands, i.e., complying with federal requirements like those for endangered species habitat); and (3) investments in grazing permits and leases. According to this study, the only way to determine the fair market value of federal grazing permits and leases was through competitive bidding, which would have its own set of administrative expenses. In lieu of competitive bidding, according to this study, all methods of estimating fair market value resulted in fees somewhere between $3 and $5, and the base value of the formula should be negotiated at some price in that range. The report also stated that including BCPI and PPI in the grazing formula did not improve the ability of the PRIA formula to track market prices, as anticipated in 1977, and that FVI would adequately update the grazing fee. This study and report were used to inform efforts to reform grazing regulations in 1994. In the late 1980s, agricultural economists examined livestock prices and ranch revenue—the gross income from ranching—to assess the rate of return on investments in cattle and sheep ranches. The economists found that rates of return are relatively low compared with other investments, but that land value has increased and kept ranchers financially solvent. Furthermore, the net return in the ranching industry—the value of production minus costs—is often negative. This information was used to support federal legislation to change grazing fees in 1997. The legislation proposed to change the fee to equal the 12-year average of the total gross value of production for beef, multiplied by the 12-year average of the Treasury 6-month bill “new issue” rate, divided by 12. The proposal was not enacted. 
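The fee formula proposed in the 1997 legislation can be written as a one-line calculation. The sketch below is a hypothetical illustration, since the bill was not enacted and the report gives no worked figures; the input values and units are assumptions for the example only.

```python
# Sketch of the grazing fee formula proposed in the 1997 legislation
# (not enacted). Input values below are hypothetical.

def proposed_1997_fee(avg_gross_value_of_production: float,
                      avg_treasury_6mo_rate: float) -> float:
    """12-year average of the total gross value of production for beef,
    multiplied by the 12-year average Treasury 6-month bill 'new issue'
    rate, divided by 12."""
    return avg_gross_value_of_production * avg_treasury_6mo_rate / 12.0

# e.g., a $350 average gross value and a 5 percent average bill rate:
print(round(proposed_1997_fee(350.0, 0.05), 2))  # 1.46
```

Dividing by 12 converts the annual return implied by the Treasury rate into a monthly charge, consistent with the per-AUM (per-month) basis of the existing fee.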
This appendix illustrates the different grazing fees used by federal agencies other than BLM and the Forest Service. It describes the specific fees charged at two Air Force bases—one managed by the Air Force and the other managed by BLM—an Army base, a national monument, a national refuge, and a Reclamation project. Melrose Air Force Range, located in eastern New Mexico, is a more than 66,000-acre site used by the Air Force to train pilots. It consists of an 8,800-acre target area and 57,000 acres of surrounding land that act as a buffer. The land is divided into 13 grazing areas, each of which has fencing and a water supply provided by a system of pipelines and water tanks. The target area lands were acquired from local ranchers in the 1950s, and the remaining area was acquired in the 1980s. Because the lands were acquired from local ranchers, the Air Force granted a special waiver in March 2002 to allow noncompetitive leasing to the former owners. Air Force policy allows waivers of competition under certain conditions, including offers of first lease of land to former owners. In fiscal year 2002, when many of the range’s leases were renewed, the fee charged for grazing was $1.60 per acre of land (about $5.30 per AUM). The waiver of competition contained a condition that the lease fee was to be based on a market rate determined by real property specialists. To establish a market-based grazing fee, the Air Force real estate staff developed comparable lease information for other grazing land in the vicinity and set an equivalent price. One source used for pricing information was a local agricultural land appraiser, and the other was a Web site, identified by the local BLM office, that contained lease rates for the state. The prices remain the same for the 5-year term of the lease, at the end of which they will be reestablished. 
In mid-2003 and all of 2004, Cannon Air Force Base halted grazing on Melrose Range because of drought conditions that affected much of New Mexico and the southwestern United States. The ranchers received credits for the months that their cattle did not graze. McGregor Range in southern Otero County, New Mexico, is a 694,981-acre area that contains a bombing range used by the Air Force to train pilots, who practice bombing targets within the range. The land within McGregor Range has mixed ownership and management, including 608,385 acres (87 percent) of public land managed by BLM but withdrawn from public use, 71,083 acres (about 10 percent) owned in fee title by the Army, and 17,864 acres (3 percent) managed by the Forest Service. In 1999, the Congress enacted the Military Lands Withdrawal Act, renewing the withdrawal of public lands comprising the McGregor Range for military use but requiring BLM to plan and manage use of the lands in accordance with the principles of multiple use and sustained yield required by FLPMA. While accommodating the military’s continued use of the range, BLM manages other activities on the range, including livestock grazing, habitat management, fire prevention and control, and recreation, such as hunting. BLM’s Las Cruces Field Office in New Mexico administers livestock grazing on 271,000 acres of land. The area is divided into 14 grazing units available for grazing contract, primarily for cattle. In contrast to the fee charged on other BLM and Forest Service lands, BLM manages livestock grazing permits on McGregor Range using competitive bidding to establish its grazing fee. BLM sets a minimum bid and then holds an annual public auction, where all bidders meet and openly submit their offers. As a result, in September 2004, BLM received winning bids ranging from $5.00 to $14.50 per AUM to graze cattle on designated grazing units for the 9-month grazing season ending in June 2005. 
BLM expects the McGregor Range grazing program to be self-sustaining through competitive bidding for grazing units. BLM staff for McGregor Range consist of one rangeland management specialist, one range technician, and one maintenance worker. Revenues from the grazing leases allow BLM employees to monitor the number of cattle on the range and manage roads, fences, corrals, and water pipelines. The livestock owners manage and provide care for the cattle, including salt, minerals, and veterinary services. According to BLM officials, additional services provided on the range by BLM result in a higher minimum bid, and BLM is able to attract higher bids compared to other livestock grazing areas. Fort Hood, located in central Texas, is a 217,000-acre Army installation, the majority of which is used for military training activities, including tank and other armored vehicle training exercises. The Army allows a certain level of grazing on about 197,000 acres of the installation, having determined that grazing would not interfere with the installation’s primary training mission. The majority of the installation’s lands were acquired from private landowners. Some of the original landowners formed a group, called the Central Texas Cattlemen’s Association, which has continued leasing the land since 1954. In 2005, upon lease renewal, the Assistant Secretary of the Army (Installations and Environment) determined to offer the group a noncompetitive lease, provided that the installation obtain a fair market value for the lease. The Corps—the Army’s leasing agent—had recommended that the lease be competitively bid, but it also acknowledged that a transition to competitive leasing may be needed. 
The Army determined that while it had no legal obligation to continue leasing to the group, the relationship with the neighboring ranchers contributed to the Army’s ability to sustain its mission, discharge its environmental stewardship responsibilities, and maintain its standing in the community. In 2005, the Army renegotiated a lease with the Central Texas Cattlemen’s Association, charging a price of $4.67 per AUM ($56 per animal unit, per year), plus the installation’s administrative and management expenses. The Army agreed to adjust the number of animal units based on a new forage assessment and an evaluation of training intensity and the consequent effects on forage. The Army also agreed to conduct a new appraisal that considers factors that are unique to managing grazing on a military installation, such as lack of fencing, presence of endangered species, and restricted access to the installation. Although a land appraisal conducted in 2004 determined the price of the new lease to be $7.83 per AUM, Army officials agreed with the Association to discount this value by 40 percent for April 1, 2005, through August 31, 2005, because the appraisal did not explicitly consider the military-unique circumstances that, according to Army officials, lead to higher grazing costs on Army lands. The 40-percent figure was based on a figure used in a 1996 appraisal, although the U.S. Army Audit Agency questioned the adjustment in a 2001 audit report. The Army received a new appraisal on August 12, 2005, with a price of $5.66 per AUM ($68 per animal unit, per year) when adjusted for military-unique circumstances. The Army will use this new amount as the basis of the fee for the remainder of the 5-year lease period. 
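The paired Fort Hood figures imply that an annual per-animal-unit price converts to a monthly per-AUM rate at 12 AUMs per animal unit per year; that relationship is inferred from the report's numbers, not stated by the Army. A quick check of the arithmetic, as a hedged sketch:

```python
# Checking the Fort Hood fee arithmetic. The 12-AUMs-per-animal-unit-
# year relationship is inferred from the paired figures in the report.

def annual_au_price_to_aum_rate(annual_price: float) -> float:
    """Convert an annual per-animal-unit price to a monthly per-AUM rate."""
    return annual_price / 12.0

print(round(annual_au_price_to_aum_rate(56.0), 2))  # 4.67 (the lease rate)
print(round(annual_au_price_to_aum_rate(68.0), 2))  # 5.67 (report shows 5.66)

# The 2004 appraisal of $7.83 per AUM, discounted 40 percent for
# military-unique circumstances, lands near the $4.67 lease rate:
print(round(7.83 * (1 - 0.40), 2))  # 4.7
```

The $68 figure rounds to $5.67 per AUM; the report's $5.66 suggests the Army truncated rather than rounded. The discounted appraisal ($4.70) only approximates the $4.67 actually charged, which was anchored to the $56 annual rate.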
In addition to these agreements, the Cattlemen’s Association agreed to pay $102,000 for estimated administrative expenditures owed under the new lease and to reimburse actual expenditures when the Army presents evidence of them at the end of the lease year. Army staff estimated their 2005 expenditures to be $285,000. Dinosaur National Monument, located in northwestern Colorado and northeastern Utah, was created to protect a large deposit of dinosaur fossils and later expanded to protect the river corridors of the Green and Yampa rivers. The monument, which occupies 210,000 acres of desert habitat, permits grazing on monument lands by ranchers who have historically held grazing rights. Several ranchers with grazing rights own land within the boundaries of the monument, called inholdings, while several other ranchers with grazing rights own land adjacent to the monument. In fiscal year 2004, monument staff authorized 1,794 AUMs on 67,120 acres using seven special use permits. In 2004, the monument charged $1.43 per AUM—the price for grazing on BLM lands. National Park Service regulations specific to the monument direct that the grazing fees at the monument shall be the same as those approved for BLM. The National Park Service is statutorily authorized to recover the costs of administering special use permits; however, a monument official said that the monument has never charged such a fee because of the more specific regulations that determine the monument’s fee. The U.S. Fish and Wildlife Service’s Klamath Basin National Wildlife Refuge Complex is part of the wetland and lake system of the Klamath Basin of northern California and southern Oregon and provides habitat for numerous birds along the Pacific flyway during spring and fall migrations. In 1905, Reclamation began to convert wetlands in the basin into agricultural lands. 
The refuge complex comprises six refuges that were established between 1908 and 1978 to conserve wetlands as a preserve and breeding ground for birds and animals. The refuge is also managed to allow appropriate agricultural uses of the land. Klamath Basin refuge managers authorize grazing on 17,046 acres of the basin to allow adjacent ranchers access to forage on refuge lands and to reduce certain grasses, thereby improving the habitat of the birds that use the refuges. In fiscal year 2004, the refuge charged fixed amounts ranging from $5.00 to $6.55 per AUM for grazing on three federal refuges in the Klamath Basin complex—Clear Lake, Lower Klamath, and Upper Klamath. U.S. Fish and Wildlife Service regulations require that fees charged for the grant of privileges and for the sale of all products taken from refuge areas, including forage, be equivalent to the fees charged by private owners in the vicinity of the refuge. Refuge officials said that the fees were negotiated in the 1980s and have remained unchanged. However, they stated that the fees are appropriate because the refuges receive benefits from grazing for wildlife habitat and forage and permittees must meet specific limitations on their use of refuge lands. For example, in one case involving the Clear Lake National Wildlife Refuge, when water levels decrease significantly and expose Native American archaeological sites, one rancher incurs significant expenditures (e.g., temporary fencing, temporary water sources, and a herder) to keep cattle away. Fresno Reservoir, located in north-central Montana, is part of Reclamation’s Milk River Project, which provides irrigation water to about 121,000 acres of land. Reclamation acquired excess land surrounding Fresno Reservoir when it built the Fresno Dam; the reservoir was originally planned to be higher and would have flooded more land. As a result, Reclamation allows grazing on the strip of land surrounding the reservoir. 
The area office administers grazing on over 24,000 acres of land near Fresno Reservoir and allows grazing on over 27,000 acres of Reclamation land within the greater Milk River Project that is managed by two irrigation districts. Revenue from grazing either goes into the Reclamation Fund or is credited to divisions within the Milk River Project. In fiscal year 2004, the Montana Area Office charged between $8.25 and $25.10 per AUM for numerous grazing permits and leases within the Milk River Project. To establish these fees, the area office used three market-based methods: competitive, limited competitive, and negotiated. For all permits and leases, the area office set a minimum bid based on the market value for permits and leases in the area and then discounted the rate for factors such as lack of fencing on Reclamation lands. The area office then offered the majority of project permits and leases for competitive bid using a sealed bid process. For parcels with limited access, the area office limited competition to the adjacent landowners, giving them equal opportunity to bid on the permits and leases. Much of the land within the Milk River Project is surrounded by private land, and therefore the Reclamation land has limited public access. For a few permits and leases, the area office used what it called a negotiated method to establish the grazing fee. In these cases, in which only one rancher has access to a site, the area office offered each permit or lease to the rancher at the minimum bid, allowing the rancher to accept or reject the bid. As this appendix discusses, the 17 western states vary considerably in the fees charged for state lands and the methods used to set those fees. These states’ land offices manage more than 46 million acres of trust lands, of which more than 37 million acres were grazed in fiscal year 2004, bringing in grazing revenues of more than $40.7 million. 
Upon statehood, most western states, as well as several other states throughout the nation, received lands from the federal government to be held in trust to generate revenue for public education. The Land Ordinance of 1785 initiated a program to reserve certain lands within each western township to support public schools in that township. In 1848, the federal government doubled the lands granted to western states, and it did so again by 1910, with the accession of Utah, Arizona, and New Mexico to statehood. According to many state officials that we interviewed, many state trust lands are comparable in range condition, productivity, and land value to federal lands. For example, in some states, such as Wyoming and Oklahoma, state lands are intermingled with or adjacent to federal lands; thus the native characteristics of the lands are similar. In some cases, however, federal and state lands are not comparable. For example, in Oregon much of the federal land is forested, while much of the state land is rangeland. Generally, the states charge a fee per AUM. In fiscal year 2004, the western states charged grazing fees ranging from a low of $1.35 per AUM for some lands in California to $80 per AUM in parts of Montana. As shown in table 18, the majority of the western states use a market or market-based approach to set grazing fees. Specifically, six states (Montana, Nebraska, New Mexico, North Dakota, Oklahoma, and South Dakota) offer their leases to the highest bidder through a competitive process, and six states (Arizona, California, Colorado, Texas, Washington, and Wyoming) use market-based approaches that rely on regional market rates, land appraisals, or formulas that adjust the market price for grazing by factors that account for differences between state and private lands and livestock market conditions. 
Three states (Idaho, Oregon, and Utah) use formulas that do not start with a market price for private lands, but instead use either a base fee, adjusted for livestock market and other factors, or a fixed percentage of livestock production value. Two states, Nevada and Kansas, allow some grazing on lands managed by other state agencies, but they do not allow grazing on state trust lands and are therefore not included in this appendix. The states provided details about their approaches to setting grazing fees, as well as information on their lands and revenues collected. Arizona: In Arizona, the annual rental rate for grazing land is required to be the true value rental rate determined by the Arizona State Land Commissioner based on the recommendations of the grazing land valuation commission. In fiscal year 2004, the Arizona State Land Department charged $2.23 per AUM for grazing on lands that it manages. In 1996, the department appraised the true value of forage on trust land using the market and income approaches. According to Arizona officials, the appraised value is adjusted yearly by a factor equal to the ratio of the new to the old 5-year average price of beef, as compiled by USDA’s National Agricultural Statistics Service. Upon renewal, if multiple applications are filed for a lease, the current lessee can match competing bids. The department manages more than 9.3 million acres of land, of which more than 8.3 million acres were allocated for grazing in fiscal year 2004. Total grazing receipts in fiscal year 2004 were about $2.2 million. California: Upon receiving an application to lease lands, the California State Lands Commission is to appraise the lands and fix the annual rent; the total amount of the rental should not be in excess of the fair market value of the lands. In fiscal year 2004, the commission charged a range of fees, from $1.35 to $12.50 per AUM, for grazing on the lands that it manages. 
The commission establishes the grazing fees by calculating an average rate based on the rates charged by county agriculture commissioners or assessors and agricultural extension offices. If the total grazing fee for a lease is less than $500, as is often the case, a minimum rental fee of $500 per year is applied. The commission manages about 470,000 acres of surface land, of which almost 13,000 acres were allocated for grazing in fiscal year 2004. Total grazing receipts in fiscal year 2004 were about $8,000. Colorado: The Colorado State Board of Land Commissioners is to include lease rates that will promote sound stewardship and land management practices, long-term agricultural productivity, and community stability. In 2004, the state board charged between $6.65 and $8.91 per AUM for grazing on lands that it manages, depending on the region. The state board sets grazing fees on the basis of a 2004 statewide survey of private lease rates. The grazing fee is calculated for each region based on the average rate identified by the survey, then reduced by 35 percent to account for differences, such as fencing or water, between private and state lands. Each year since 2001, the state board has determined whether the fee should be adjusted up or down by 3 percent, depending on the Beef Price Index. The state board manages about 3 million acres of state land, of which about 2.4 million acres were allocated for grazing in 2004. Total grazing receipts in fiscal year 2004 were about $4.7 million. Idaho: The Idaho State Board of Land Commissioners may lease any portion of the state land at a rental amount fixed and determined by the board. In 2004, the Idaho Department of Lands charged $5.15 per AUM for grazing on the lands that it manages. The board sets the grazing fee using a formula based on livestock market factors. 
The formula establishes the forage value for a given year based on four factors: the (1) forage value index for 11 western states; (2) beef cattle price index for 11 western states; (3) prices paid index for 11 western states; and (4) Idaho forage value index. The formula is then applied to a base value of $1.70, which was established in 1993 by the board. If the department receives more than one application for a lease, then it auctions the lease. The department manages about 2.4 million acres of land, of which about 1.9 million were allocated for grazing in fiscal year 2004. Total grazing receipts in fiscal year 2004 were about $1.6 million. Montana: The Trust Land Management Division of the Montana Department of Natural Resources and Conservation must lease tracts to the highest bidder unless the Board of Land Commissioners determines that the bid is not in the state’s best interest, and the board may not accept a bid that is below full market value. The division used competitive bidding to collect between $5.48 and $80.00 per AUM for grazing on the lands that it manages in fiscal year 2004. If no bids are received, then the division issues the lease or permit at the minimum rate, which was $5.48 per AUM in fiscal year 2004, set by a fee formula. The formula establishes the minimum fee by multiplying the average price per pound for beef cattle in Montana by a multiplier of 7.54. The division manages about 5.1 million acres of land, of which more than 4.2 million acres were allocated for grazing in fiscal year 2004. Total grazing receipts in fiscal year 2004 were about $5.5 million. Nebraska: In Nebraska, all school land is subject to lease at fair market rental as determined by the Board of Educational Lands and Funds. In fiscal year 2004, the board used competitive bidding to collect between $16 and $28 per AUM for grazing on the lands that it manages. The board sets minimum grazing fees by geographic area. 
It uses a formula that multiplies the available AUMs by private sector rates, and then adjusts the resulting per-acre rents downward to reflect fence and water improvements, which the lessees must provide. The board uses three data sources to determine private sector rates: (1) verified private sector rental contracts collected by its employees, (2) a questionnaire that the board sends to professional farm and ranch managers who have mandatory fiduciary responsibility to the landowners they represent, and (3) an annual study conducted by the University of Nebraska. The board gives the private contracts the most weight when determining the grazing fee. If more than one qualified bidder is interested in the lease, it is sold to the party bidding the highest cash bonus at auction. The board manages more than 1.4 million acres, of which about 1.2 million acres were allocated for grazing in fiscal year 2004. Total grazing receipts in fiscal year 2004 were about $10 million. New Mexico: In New Mexico, the Commissioner of Public Lands is to make rules and regulations for the control, management, disposition, lease, and sale of state lands. In fiscal year 2004, the New Mexico State Land Office charged a minimum of $4.22 per AUM for grazing on lands that it manages, and collected between $0.71 and $10.15 per acre, based on competitive bidding. Absent a competitive bid, the state land office sets an annual grazing fee using a formula that multiplies a base value of $0.0474 by the carrying capacity of the land, the acreage, and the Economic Variable Index. This index is the ratio of the value of a state land office adjustment factor for that year to the value of that same adjustment factor calculated for the base year, 1987. The state land office manages about 9 million acres, of which about 8.7 million acres were allocated for grazing in fiscal year 2004. Total grazing receipts in fiscal year 2004 were about $7.6 million. 
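The New Mexico fee calculation described above is simple multiplication of four factors. The sketch below illustrates how those factors combine; the carrying capacity, acreage, and index inputs are hypothetical, chosen only for illustration, since the report does not give per-lease figures.

```python
def new_mexico_grazing_rent(carrying_capacity, acreage, economic_variable_index):
    """Annual grazing rent under the New Mexico formula: a base value of
    $0.0474 multiplied by the land's carrying capacity, its acreage, and
    the Economic Variable Index (the ratio of the current year's
    adjustment factor to its value in the 1987 base year)."""
    BASE_VALUE = 0.0474
    return BASE_VALUE * carrying_capacity * acreage * economic_variable_index

# Hypothetical lease: carrying capacity of 8, 640 acres, and an index of
# 2.0 -- illustrative values only, not drawn from the report.
rent = new_mexico_grazing_rent(8, 640, 2.0)
print(f"${rent:,.2f} per year (${rent / 640:.2f} per acre)")
```

At these illustrative inputs, the per-acre result falls within the $0.71 to $10.15 per acre that the state land office collected in fiscal year 2004.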
North Dakota: In North Dakota, the Board of University and School Lands is required to set the minimum rental for uncultivated and cultivated lands; it sets this minimum for public auction using a procedure, promulgated in 1989, called “the fair market value method.” In fiscal year 2004, the North Dakota State Land Department collected between $1.73 and $19.69 per acre, based on competitive bidding at public auction, on grazing lands that it manages. The department accepts bids over a minimum fee that is set for each tract based on a county-by-county survey completed annually by USDA’s National Agricultural Statistics Service. The department manages about 710,000 acres, of which about 690,000 acres were allocated for grazing in fiscal year 2004. The department does not know the total revenue related to grazing collected in fiscal year 2004 because it does not separate grazing and cropland revenues. Oklahoma: In Oklahoma, rentals are required to be determined by public auction. In 2004, the Oklahoma Commissioners of the Land Office used competitive bidding to collect between $7 and $16 per AUM for grazing on lands that it manages. The land office sets a minimum grazing fee based on appraisals, and the grazing leases are then auctioned and awarded to the highest bidder. The land office manages about 745,000 acres, of which about 500,000 were allocated for grazing in 2004. The land office does not know the total revenue related to grazing collected in fiscal year 2004 because it does not separate grazing and cropland revenues. Oregon: The Oregon Department of State Lands may lease common school grazing lands subject to terms and conditions it sets or that are otherwise legislated. In 2004, the department charged $4.32 per AUM for grazing on lands that it manages, using a formula that considers livestock production factors. 
The formula multiplies the (1) animal gain per month, fixed at 30 pounds; (2) marketable calf crop, fixed at 80 percent; (3) the state share of the calf crop, fixed at 20 percent; and (4) average statewide calf sales price for the preceding year, from USDA’s Oregon agricultural price data. This annual rental is determined by multiplying the AUM rental rate by the average annual base rate forage capacity in AUMs of each leasehold and should be at least $100. The department is currently reconsidering Oregon’s grazing fee formula and is comparing the formula with the grazing fee formulas in surrounding states. The department manages almost 1.6 million acres, of which about 640,000 acres were allocated for grazing in 2004. Total grazing receipts in fiscal year 2004 were about $300,000. South Dakota: In South Dakota, the Commissioner of School and Public Lands is to establish the minimum annual rental rate per acre, which is the rate at which bidding starts. In 2004, the South Dakota Office of School and Public Lands used competitive bidding to collect between $3 and $56 per acre on lands that it manages. The commissioner of the office sets a minimum grazing fee, $9 per AUM in 2004, using a formula that multiplies the nonweighted 5-year average price per pound of all calves sold in South Dakota by 425 pounds, the average calf weight. The number is then divided by 12 months and multiplied by a percentage set by the commissioner, 25 percent in 2004. Once the minimum fee per AUM is established, the office divides the fee by the land’s annual carrying capacity in order to establish a minimum per acre opening bid. The office manages about 770,000 acres, of which about 750,000 acres were allocated for grazing in 2004. Total grazing receipts in fiscal year 2004 were about $2.25 million. Texas: The Texas General Land Office is to award leases to the highest responsible bidder. 
In fiscal year 2004, the land office charged between $4.16 and $12.50 per AUM for grazing on lands that it manages. For the most part, grazing fees are based on fair market value within the region. Staff members within the land office conduct on-site evaluations of state lands to assess the value of the lands and forage as a basis for the grazing fee, taking into consideration productivity, range condition, improvements, and location, among other factors. For those state lands without public access, the grazing fees may be negotiated with the adjacent landowner based on the appraised rate. The land office manages almost 750,000 acres, of which almost 550,000 acres were allocated for grazing in fiscal year 2004. Total grazing receipts in fiscal year 2004 were about $1.2 million. Utah: The Director of the Utah School and Institutional Trust Lands Administration is required to base the grazing fee on the fair market value of the permit. In fiscal year 2004, the Utah School and Institutional Trust Lands Board of Trustees used a formula to charge $1.43 or $2.35 per AUM for grazing on lands that it manages. The board initially used the federal fee as the base rate for the state fee, but it now establishes the state fee by adjusting the previous year’s fee up or down, based on the 7-year trend of local prices for cattle, sheep, wool, and hay. The fee on state trust lands ($2.25 in fiscal year 2004, plus a 10-cent fee for weed and insect control) is typically about 60 to 90 cents more than the federal grazing fee. When a permit is up for renewal, ranchers or other interested parties, in addition to the current lessee, can submit bonus bids on the permit, but the current lessee has the right to match the bonus bid. On lands gained through land exchanges with the federal government, the federal grazing fee applies: $1.43 per AUM in fiscal year 2004. 
The Utah School and Institutional Trust Lands Administration is proposing to increase the Utah fees over the next 3 to 5 years using a two-fee structure that would raise the fee to $3.80 per AUM on trust lands that are intermingled with BLM lands and to $7 per AUM on other trust lands. The board manages about 3.5 million acres of land, of which about 3 million acres were allocated for grazing in fiscal year 2004. Total grazing receipts in fiscal year 2004 were about $440,000. Washington: The Washington State Department of Natural Resources has responsibility for issuing rules for the grazing of livestock and is to charge such fees as it deems adequate and advisable. In 2004, the department charged $5.41 per AUM for range permits and $7.76 per AUM for grazing leases on lands that it manages. Range permits provide only the right to forage over a large area of land for a limited period of time each year, whereas grazing leases provide full leasehold rights, including control of the land. The fee for range permits is set by a formula that considers several factors, including average livestock weight gain and livestock prices. The fee for grazing leases is based on a 5-year rolling average of private fees, adjusted downward to account for higher operating costs on state lands, since the state provides no fences or other on-site services. The department manages about 3 million acres of trust lands, of which almost 850,000 acres were allocated for grazing in 2004. Total grazing receipts from range permits and grazing leases in fiscal year 2004 were almost $650,000. Wyoming: In Wyoming, the rental of any lease awarded is to be based on an economic analysis and must reflect at least the fair market value for the same or similar use of the land based upon a formula adopted by the Board of Land Commissioners. 
In fiscal year 2004, the Wyoming Office of State Lands and Investments charged $4.13 per AUM for grazing on lands that it manages. The grazing fee is established by a formula that multiplies the average private land lease rate per AUM for the 5 years preceding the current year, as estimated by the Wyoming Agricultural Statistics Service, by the 5-year weighted average parity ratio for beef cattle, as established by the National Agricultural Statistics Service, to adjust for changing resource conditions, market demand, and industry viability. The rate is then discounted by 20 percent to reflect lessee contributions. If the office receives an application for a lease at a higher amount, then the present lessee has the right to match the bid. The office manages about 3.6 million acres, of which about 3.5 million acres are used for grazing, including hay land. Total grazing receipts in fiscal year 2004 were almost $4.2 million. The following are GAO’s comments on the Department of the Interior’s letter dated September 6, 2005. 1. We disagree. The information in the report accurately and sufficiently reflects the information provided by BLM in many different documents and during multiple meetings with rangeland management officials regarding the benefits from the grazing program to local economies and ranchers. However, the information provided by BLM in these many meetings and documents did not refer to any indirect benefits that accrue to other BLM programs from the grazing program. While Interior’s letter states that such significant indirect benefits exist, it does not provide any detail on the nature of these benefits; and therefore, we have not made any modifications to the report. 2. We changed the text to add the definition of a water base. 3. We met with attorneys and staff from BLM and Interior’s Office of the Solicitor on August 4, 2005, and have removed the footnote to which Interior refers in its comments. 4. 
In this section, we are not discussing the purpose of the fee or the grazing fee formula. Rather, we are observing that the fee formula includes factors that incorporate ranchers’ ability to pay (BCPI and PPI). We agree that other factors, such as access to public lands, enable ranchers to stay in production and therefore clarified the language accordingly. 5. We disagree that a comparison of alternative fee structures is useless. It is useful to explicitly and periodically examine the implications of different policy choices as they relate to grazing fees and to consider alternative fee options. Our discussion of the McGregor Range is in the context of a broader discussion of competitive bidding and fees on BLM and Forest Service lands. That discussion clearly and carefully recognizes the impediments to establishing such a system. In particular, we recognize that such a system would only be established if the purpose of the program and fee were different from those that currently exist. BLM provided text to clarify the mixed ownership of McGregor Range, which we included in appendix V. In addition to the contact named above, Andrea Brown, Susan Iott, Mehrzad Nadji, Tony Padilla, Lesley Rinner, Carol Herrnstadt Shulman, Pam Tumler, and Amy Webbink made significant contributions to this report. In addition, Denise Fantone, Barry Hill, Miguel Lujan, Anne Rhodes-Kline, and Jack Warner made important contributions to the methodologies used in this report. Large Grazing Permits. GAO/RCED-93-190R (Suppl.). Washington, D.C.: July 16, 1993. Large Grazing Permits. GAO/RCED-93-190R. Washington, D.C.: June 25, 1993. Rangeland Management: Profile of the Forest Service’s Grazing Allotments and Permittees. GAO/RCED-93-141FS. Washington, D.C.: April 28, 1993. Rangeland Management: BLM’s Range Improvement Project Data Base Is Incomplete and Inaccurate. GAO/RCED-93-92. Washington, D.C.: April 5, 1993. 
Rangeland Management: Profile of the Bureau of Land Management’s Grazing Allotments and Permits. GAO/RCED-92-213FS. Washington, D.C.: June 10, 1992. Rangeland Management: Results of Recent Work Addressing the Performance of Land Management Agencies. GAO/T-RCED-92-60. Washington, D.C.: May 12, 1992. Rangeland Management: Assessment of Nevada Consulting Firm’s Critique of Three GAO Reports. GAO/RCED-92-178R. Washington, D.C.: May 4, 1992. Grazing Fees: BLM’s Allocation of Revenues to Montana Appears Accurate. GAO/RCED-92-95. Washington, D.C.: March 11, 1992. Rangeland Management: Interior’s Monitoring Has Fallen Short of Agency Requirements. GAO/RCED-92-51. Washington, D.C.: February 24, 1992. Rangeland Management: BLM’s Hot Desert Grazing Program Merits Reconsideration. GAO/RCED-92-12. Washington, D.C.: November 26, 1991. Rangeland Management: Comparison of Rangeland Condition Reports. GAO/RCED-91-191. Washington, D.C.: July 18, 1991. Rangeland Management: Current Formula Keeps Grazing Fees Low. GAO/RCED-91-185BR. Washington, D.C.: June 11, 1991. Rangeland Management: Forest Service Not Performing Needed Monitoring of Grazing Allotments. GAO/RCED-91-148. Washington, D.C.: May 16, 1991. Rangeland Management: BLM Efforts to Prevent Unauthorized Livestock Grazing Need Strengthening. GAO/RCED-91-17. Washington, D.C.: December 7, 1990. Rangeland Management: Improvements Needed in Federal Wild Horse Program. GAO/RCED-90-110. Washington, D.C.: August 20, 1990. Management of the Public Lands by the Bureau of Land Management and the U.S. Forest Service. GAO/T-RCED-90-24. Washington, D.C.: February 6, 1990. Change in Approach Needed to Improve the Bureau of Land Management’s Oversight of Public Lands. GAO/T-RCED-89-23. Washington, D.C.: April 11, 1989. Management of Public Rangelands by the Bureau of Land Management. GAO/T-RCED-88-58. Washington, D.C.: August 2, 1988. Public Rangelands: Some Riparian Areas Restored but Widespread Improvement Will Be Slow. GAO/RCED-88-105. 
Washington, D.C.: June 30, 1988. Rangeland Management: More Emphasis Needed on Declining and Overstocked Grazing Allotments. GAO/RCED-88-80. Washington, D.C.: June 10, 1988. Rangeland Management: Profiles of Federal Grazing Program Permittees. GAO/RCED-86-203FS. Washington, D.C.: August 12, 1986. Rangeland Management: Grazing Lease Arrangements of Bureau of Land Management Permittees. GAO/RCED-86-168BR. Washington, D.C.: May 30, 1986. Public Rangeland Improvement—A Slow, Costly Process in Need of Alternate Funding. GAO/RCED-83-23. Washington, D.C.: October 14, 1982. User Fees: DOD Fees for Providing Information Not Current and Consistent. GAO-02-34. Washington, D.C.: October 12, 2001. Federal User Fees: Some Agencies Do Not Comply with Review Requirements. GAO/GGD-98-161. Washington, D.C.: June 30, 1998. Federal User Fees: Budgetary Treatment, Status, and Emerging Management Issues. GAO/AIMD-98-11. Washington, D.C.: December 19, 1997.
Ranchers pay a fee to graze their livestock on federal land. Grazing occurs primarily on federal land in the western states that is managed by 10 federal agencies. Generally, the fee is based on animal unit months (AUM)--the amount of forage that a cow and her calf can eat in 1 month. For most federal land, the fee per AUM is established by a formula. Advocates argue that grazing uses federal land productively and that the grazing fee is fair. Opponents argue that grazing damages public resources and that grazing fees are too low. GAO was asked to determine the (1) extent of, and purposes for, grazing in fiscal year 2004 on lands 10 federal agencies manage; (2) amount federal agencies spent in fiscal year 2004 to manage grazing; (3) total grazing receipts the 10 agencies collected in fiscal year 2004 and amounts disbursed; and (4) fees charged in 2004 by the 10 agencies, western states, and ranchers, and reasons for any differences. In commenting on a draft of this report, the Department of the Interior and the Forest Service neither agreed nor disagreed with the findings. The Forest Service stated that the report accurately described the purpose of the grazing fee. The Army and Air Force and the Department of Energy provided technical comments, which we incorporated as appropriate. The departments of Commerce and of Justice responded that they did not have comments. The 10 federal agencies managed more than 22.6 million AUMs on about 235 million acres of federal lands for grazing and land management in fiscal year 2004. Of this total, the Department of the Interior's Bureau of Land Management (BLM) and the U.S. Department of Agriculture's Forest Service managed more than 98 percent of the lands used for grazing. The agencies manage their grazing programs under different authorities and for different purposes. 
For BLM lands and western Forest Service lands, grazing is a major program; the eight other agencies generally use grazing as a tool to achieve their primary land management goals. In fiscal year 2004, federal agencies spent a total of at least $144 million to manage grazing: the 10 federal agencies spent at least $135.9 million, with the Forest Service and BLM accounting for the majority, and other federal agencies with grazing-related activities, such as pest control, spent at least $8.4 million. The 10 federal agencies' grazing fees generated about $21 million in fiscal year 2004--less than one-sixth of the expenditures to manage grazing. Of that amount, the agencies distributed about $5.7 million to states and counties in which grazing occurred, returned about $3.8 million to the Treasury, and deposited at least $11.7 million in separate Treasury accounts to help pay for agency programs, among other things. The amounts each agency distributed varied, depending on the agencies' differing authorities. Fees charged in 2004 by the 10 federal agencies, as well as state land agencies and private ranchers, vary widely. The grazing fee BLM and the Forest Service charge, which was $1.43 per AUM in 2004, is established by formula and is generally much lower than the fees charged by the other federal agencies, states, and private ranchers. The other agencies, states, and ranchers generally established fees to obtain the market value of the forage. The formula used to calculate the BLM and Forest Service grazing fee incorporates ranchers' ability to pay; therefore, the current purpose of the fee is not primarily to recover the agencies' expenditures or to capture the fair market value of forage. As a result, BLM's and the Forest Service's grazing receipts fell short of their expenditures on grazing in fiscal year 2004 by almost $115 million. 
The BLM and Forest Service fee also decreased by 40 percent from 1980 to 2004, while grazing fees charged by private ranchers increased by 78 percent over the same period. If the purpose of the fee were to recover expenditures, BLM and the Forest Service would have had to charge $7.64 and $12.26 per AUM, respectively; alternatively, if the purpose were to obtain fair market value, the agencies' fees would vary depending on the market. Differences in resources and legal requirements can cause fees to vary; however, the approaches used by other agencies could close the gap between expenditures and receipts or more closely align BLM and Forest Service fees with market prices. The purpose of the grazing fee is, ultimately, for the Congress to determine.
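The cost-recovery arithmetic behind break-even figures such as these is straightforward: a break-even fee is total grazing expenditures divided by AUMs grazed, and the shortfall is expenditures minus receipts at the fee actually charged. A minimal sketch follows; the expenditure and AUM inputs are hypothetical, since the summary does not break out per-agency figures.

```python
def break_even_fee(expenditures, aums):
    """Fee per AUM that would recover grazing expenditures from receipts."""
    return expenditures / aums

def receipts_shortfall(fee_charged, expenditures, aums):
    """Gap between expenditures and receipts at the fee actually charged."""
    return expenditures - fee_charged * aums

# Hypothetical agency: $60 million in grazing expenditures and 8 million
# AUMs, charging the 2004 BLM/Forest Service fee of $1.43 per AUM.
print(f"break-even fee: ${break_even_fee(60e6, 8e6):.2f} per AUM")
print(f"shortfall: ${receipts_shortfall(1.43, 60e6, 8e6):,.0f}")
```

The same two functions reproduce the report's logic at any scale: the lower the fee charged relative to the break-even fee, the larger the gap that must be covered by appropriations.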
VA’s mission is to promote the health, welfare, and dignity of all veterans in recognition of their service to the nation by ensuring that they receive medical care, benefits, social support, and lasting memorials. Over time, the use of IT has become increasingly crucial to the department’s efforts to provide such benefits and services. For example, the department relies on its systems for medical information and records for veterans, as well as for processing benefit claims, including compensation and pension and education benefits. In reporting on VA’s IT management over the past several years, we have highlighted challenges that the department has faced in achieving its “One VA” vision, including that information systems and services were highly decentralized and that its administrations controlled a majority of the IT budget. For example, we noted that, according to an October 2005 memorandum from the former CIO to the Secretary of Veterans Affairs, the CIO had direct control over only 3 percent of the department’s IT budget and 6 percent of the department’s IT personnel. In addition, in the department’s fiscal year 2006 IT budget request, the Veterans Health Administration was identified to receive 88 percent of the requested funding, while the department was identified to receive only 4 percent. We have previously pointed out that, given the department’s large IT funding and decentralized management structure, it was crucial for the CIO to ensure that well-established and integrated processes for leading, managing, and controlling investments were followed throughout the department. Further, a contractor’s assessment of VA’s IT organizational alignment, issued in February 2005, noted a lack of control over how and when money was spent. The assessment found that project managers within the administrations were able to shift money as they wanted to build and operate individual projects. 
In addition, according to the assessment, the focus of department-level management was only on reporting expenditures to the Office of Management and Budget and Congress, rather than on managing these expenditures within the department. The department officially began its initiative to provide the CIO with greater authority over the department’s IT in October 2005. At that time, the Secretary of Veterans Affairs issued an executive decision memorandum that granted approval for the development of a new centralized management structure for the department. According to VA, its goals in moving to centralized management included having better overall fiscal discipline over the budget. In February 2007, the Secretary approved the department’s new management structure. In this new structure, the Assistant Secretary for Information and Technology serves as VA’s CIO and is supported by a principal deputy assistant secretary and five deputy assistant secretaries—senior leadership positions created to assist the CIO in overseeing functions such as cyber security, IT portfolio management, and systems development and operations. In April 2007, the Secretary approved a governance plan that is intended to enable the Office of Information and Technology, under the leadership of the CIO, to centralize its decision making. The plan describes the relationship between IT and departmental governance and the approach the department intends to take to enhance governance and realize more cost-effective use of IT resources and assets. The department also made permanent the transfer of its entire IT workforce under the CIO, consisting of approximately 6,000 personnel from the administrations. In June 2007, we reported on the department’s plans for realigning the management of its IT program and establishing centralized control of its IT budget within the Office of Information and Technology. 
We pointed out that the department’s realignment plans included elements of several factors that we identified as critical to a successful transition, but that additional actions could increase assurance that the realignment would be completed successfully. Specifically, we reported that the department had ensured commitment from its top leadership and that, among other critical actions, it was establishing a governance structure to manage resources. However, at that time, VA had not updated its strategic plan to reflect the new organization. In addition, we noted that the department had planned to take action by July 2008 to create the necessary management processes to realize a centralized IT management structure. In testimony before the House Veterans’ Affairs Committee last September, however, we pointed out that the department had not kept pace with its schedule for implementing the new management processes. As part of its IT realignment, VA has taken important steps toward a more disciplined approach to ensuring oversight of and accountability for the department’s IT budget and resources. Within the new centralized management structure, the CIO is responsible for ensuring that there are adequate controls over the department’s IT budget and for overseeing capital planning and execution. These responsibilities are consistent with the Clinger-Cohen Act of 1996, which requires federal agencies to develop processes for the selection, control, and evaluation of major systems initiatives. 
In this regard, the department has (1) designated organizations with specific roles and responsibilities for controlling the budget to report directly to the CIO; (2) implemented an IT governance structure that assigns budget oversight responsibilities to specific governance boards; (3) finalized an IT strategic plan to guide, manage, and implement its operations and investments; (4) completed multi-year budget guidance to improve management of its IT; and (5) initiated the implementation of critical management processes. However, while VA has taken these important steps toward establishing control of the department’s IT, it remains too early to assess their overall impact because most of the actions taken have only recently become operational or have not yet been fully implemented. Thus, their effectiveness in ensuring accountability for the resources and budget has not yet been clearly established. As one important step, two deputy assistant secretaries under the CIO have been assigned responsibility for managing and controlling different aspects of the IT budget. Specifically, the Deputy Assistant Secretary for Information Technology Enterprise Strategy, Policy, Plans, and Programs is responsible for development of the budget and the Deputy Assistant Secretary for Information Technology Resource Management is responsible for overseeing budget execution, which includes tracking actual expenditures against the budget. Initially, the deputy assistant secretaries have served as a conduit for information to be used by the governance boards. As a second step, the department has established and activated three governance boards to facilitate budget oversight and management of its investments. 
The Business Needs and Investment Board; the Planning, Architecture, Technology and Services Board; and the Information Technology Leadership Board have begun providing oversight to ensure that investments align with the department’s strategic plan and that business and budget requirements for ongoing and new initiatives meet user demands. One of the main functions of the boards is to designate funding according to the needs and requirements of the administrations and staff offices. Each board meets monthly, and sometimes more frequently, as the need arises during the budget development phase. The first involvement of the boards in VA’s budget process began with their participation in formulating the fiscal year 2009 budget. As part of the budget formulation process, in May 2007 the Business Needs and Investment Board conducted its first meeting in which it evaluated the list of business projects being proposed in the budget using the department’s Exhibit 300s for fiscal year 2009, and made departmentwide allocation recommendations. Then in June, these recommendations were passed on to the Planning, Architecture, Technology, and Services Board, which proposed a new structure for the fiscal year 2009 budget request. The recommended structure was to provide visibility to important initiatives and enable better communication of performance results and outcomes. In late June, based on input from the aforementioned boards, the Information Technology Leadership Board made recommendations to department decision makers for funding the major categories of IT projects. In July 2007, following its work on the fiscal year 2009 budget formulation, the boards then began monitoring fiscal year 2008 budget execution. 
However, according to Office of Information and Technology officials, with the governance boards’ first involvement in budget oversight having only recently begun (in May 2007), and with their activities to date being primarily focused on formulation of the fiscal year 2009 budget and execution of the fiscal year 2008 budget, none of the boards has yet been involved in all stages of the budget formulation and execution processes. Thus, they have not yet fully established their effectiveness in helping to ensure overall accountability for the department’s IT appropriations. In addition, the Office of Information and Technology has not yet standardized the criteria that the boards are to use in reviewing, selecting, and assessing investments. The criteria are planned to be completed by the end of fiscal year 2008 and to be used as part of the fiscal year 2010 budget discussions. Office of Information and Technology officials stated that, in response to operational experience with the 2009 budget formulation and 2008 budget execution, the department plans to further enhance the governance structure. For example, the Office of Information and Technology found that the boards’ responsibilities needed to be more clearly defined in the IT governance plan to avoid confusion in roles. That is, one board (the Business Needs and Investment Board) was involved in the budget formulation for fiscal year 2009, but budget formulation is also the responsibility of the Deputy Assistant Secretary for Information Technology Resource Management, who is not a member of this board. According to the Principal Deputy Assistant Secretary for Information and Technology, the department is planning to update its governance plan by September 2008 to include more specificity on the role of the governance boards in the department’s budget formulation process. Such an update could further improve the structure's effectiveness.
In addition, as part of improving the governance strategy, the department has set targets by which the Planning, Architecture, Technology, and Services Board is to review and make departmentwide recommendations for VA’s portfolio of investments. These targets call for the board to review major IT projects included in the fiscal year budgets. Specifically, the board is expected to review 10 percent for fiscal year 2008, 50 percent for fiscal year 2009, and 100 percent for fiscal year 2011. As a third step in establishing oversight, in December 2007, VA finalized an IT strategic plan to guide, manage, and implement its operations and investments. This plan (for fiscal years 2006-2011) aligns Office of Information and Technology goals, priorities, and initiatives with the priorities of the Secretary of Veterans Affairs, as identified in the VA strategic plan for fiscal years 2006-2011. In addition, within the plan, the IT strategic goals are aligned with the CIO’s IT priorities, as well as with specific initiatives and performance measures. This alignment frames the outcomes that IT executives and managers are expected to meet when delivering services and solutions to veterans and their dependents. Further, the plan includes a performance accountability matrix that highlights the alignment of the goals, priorities, initiatives, and performance measures, and an expanded version of the matrix designates specific entities within the Office of Information and Technology that are accountable for implementation of each initiative. The matrix also establishes goals and time lines through fiscal year 2011, which should enable VA to track progress, make midcourse corrections, and sustain momentum toward the realignment. As we previously reported, it is essential to establish and track implementation goals and timelines in order to pinpoint performance shortfalls and gaps and to suggest midcourse corrections.
As a fourth step, the department has completed multi-year budget guidance to improve management of its IT portfolio. In December 2007, the CIO disseminated this guidance for the fiscal years 2010 through 2012 budgets. The purpose of the guidance is to provide general direction for developing comprehensive multi-year IT planning proposals for centralized review and action. The process called for project managers to submit standardized concept papers and other review documentation in December 2007 for review in the January to March 2008 time frame, to decide which projects would be included in the fiscal year 2010 portfolio of IT projects. The new process is to add rigor and uniformity to the department’s investment approach and allow the investments to be consistently evaluated for alignment with the department’s strategic planning and priorities and the enterprise architecture. According to VA officials, this planning approach is expected to allow for reviewing proposals across the department and for identifying opportunities to maximize investments in IT. Nevertheless, although the multi-year programming guidance holds promise for obtaining better information for portfolio management, the guidance has not been fully implemented because it is applicable to future budgets (for fiscal years 2010 through 2012). As a result, it is too early to determine VA’s effectiveness in implementing this guidance, and ultimately, its impact on the department’s IT portfolio management. Finally, the department has begun developing new management processes to establish the CIO’s control over the IT budget. The department’s December 2007 IT strategic plan identifies three processes as high priorities for establishing the foundation of the budget functions: project management, portfolio management, and service level agreements.
However, while the department had originally stated that its new management processes would be implemented by July 2008, the IT strategic plan indicates that key elements of these processes are not expected to be completed until at least fiscal year 2011. Specifically, the plan states that the project and portfolio management processes are to be completed by fiscal year 2011, and does not assign a completion date for the service level agreement process. As our previous report noted, it is crucial for the CIO to ensure that well-established and integrated processes are in place for leading, managing, and controlling VA’s IT resources. The absence of such processes increases the risk that the department will not achieve a solid and sustainable management structure that ensures effective IT accountability and oversight. Appendix I provides a timeline of the various actions that the department has undertaken and planned for the realignment. In summary, while the department has made progress with implementing its centralized IT management approach, effective completion of its realignment and implementation of its improved processes are essential to ensuring that VA has a solid and sustainable approach to managing its IT investments. Because most of the actions taken by VA have only recently become operational, it is too early to assess their overall impact. Until the department carries out its plans to add rigor and uniformity to its investment approach and establishes a comprehensive set of improved management processes, the department may not achieve a sustainable and effective approach to managing its IT investments. Mr. Chairman and members of the Subcommittee, this concludes my statement. I would be pleased to respond to any questions that you may have at this time. For more information about this testimony, please contact Valerie C. Melvin at (202) 512-6304 or by e-mail at melvinv@gao.gov.
Key contributors to this testimony were Barbara Oliver, Assistant Director; Nancy Glover; David Hong; Scott Pettis; and J. Michael Resser.
The use of information technology (IT) is crucial to the Department of Veterans Affairs' (VA) mission to promote the health, welfare, and dignity of all veterans in recognition of their service to the nation. In this regard, the department's fiscal year 2009 budget proposal includes about $2.4 billion to support IT development, operations, and maintenance. VA has, however, experienced challenges in managing its IT projects and initiatives, including cost overruns, schedule slippages, and performance problems. In an effort to confront these challenges, the department is undertaking a realignment to centralize its IT management structure. This testimony summarizes the department's actions to realign its management structure to provide greater authority and accountability over its IT budget and resources and the impact of these actions to date. In developing this testimony, GAO reviewed previous work on the department's realignment and related budget issues, analyzed pertinent documentation, and interviewed VA officials to determine the current status and impact of the department's efforts to centralize the management of its IT budget and operations. As part of its IT realignment, VA has taken important steps toward a more disciplined approach to ensuring oversight of and accountability for the department's IT budget and resources. For example, the department's chief information officer (CIO) now has responsibility for ensuring that there are controls over the budget and for overseeing all capital planning and execution, and has designated leadership to assist in overseeing functions such as portfolio management and IT operations. In addition, the department has established and activated three governance boards to facilitate budget oversight and management of its investments. 
Further, VA has approved an IT strategic plan that aligns with priorities identified in the department's strategic plan and has provided multi-year budget guidance to achieve a more disciplined approach for future budget formulation and execution. While these steps are critical to establishing control of the department's IT, it remains too early to assess their overall impact because most of the actions taken have only recently become operational or have not been fully implemented. Thus, their effectiveness in ensuring accountability for the resources and budget has not yet been clearly established. For example, according to Office of Information and Technology officials, the governance boards' first involvement in budget oversight only recently began (in May 2007) with activities to date focused primarily on formulation of the fiscal year 2009 budget and on execution of the fiscal year 2008 budget. Thus, none of the boards has yet been involved in all aspects of the budget formulation and execution processes and, as a result, their ability to help ensure overall accountability for the department's IT appropriations has not yet been fully established. In addition, because the multi-year programming guidance is applicable to future budgets (for fiscal years 2010 through 2012), it is too early to determine VA's effectiveness in implementing this guidance. Further, VA is in the initial stages of developing management processes that are critical to centralizing its control over the budget. However, while the department had originally stated that the processes would be implemented by July 2008, it now indicates that implementation across the department will not be completed until at least 2011. Until VA fully institutes its oversight measures and management processes, it risks not realizing their contributions to, and impact on, improved IT oversight and accountability within the department.
The Competition in Contracting Act of 1984 (CICA), 41 U.S.C. section 253, and the implementing FAR section 6.302 require full and open competition for government contracts except in a limited number of statutorily prescribed situations. One situation in which agencies may use other than full and open competition occurs when the agency’s need is of such unusual and compelling urgency that the government would be seriously injured unless the agency is permitted to limit the number of sources from which it solicits proposals. Even when an unusual and compelling urgency exists, the agency is required to request offers from as many potential sources as is practicable under the circumstances. 41 U.S.C. section 253(e); FAR section 6.302-2(c)(2). This means that an agency may limit a procurement to one firm only when the agency reasonably believes that only that firm can perform the work in the available time. Based on our investigation, we believe there was insufficient urgency to limit competition and that the sole-source contract to Sato & Associates was not proper. The Treasury OIG violated the applicable statute and regulation by failing to request offers from as many potential sources as was practicable. Ms. Lau knew of three other former IGs who had performed similar management reviews. Indeed, Mr. Sato hired two of the former IGs to assist him with the Treasury OIG review. Further, the cost of that review, over $90,700, appears artificially high. After Mr. Sato submitted a similarly priced proposal to the Department of the Interior, that department conducted a full and open competition and awarded him a similar contract at a final cost of about $28,900. Prior to being confirmed as Treasury IG on October 7, 1994, Ms. Lau decided that a management review of the OIG would help her meet a number of challenges in her new job. In November 1994, Ms. Lau contacted Mr. Sato to request that he conduct the management review. According to Ms. Lau, she first met Mr.
Sato when she was a regional official and Mr. Sato a national official of the Association of Government Accountants; a professional relationship developed over the years through functions related to that association. Mr. Sato had written to the White House Personnel Office in May 1993 recommending Ms. Lau for an appointment to an IG position. In November 1994, Ms. Lau talked with senior OIG managers about a management review and advised them that she knew to whom she wanted to award a contract. In early December 1994, she contacted Treasury’s PSD to request assistance in awarding a management review contract. The contracting officer provided her with an explanation of the requirements to justify a sole-source contract. Thereafter, Ms. Lau told PSD that she wanted Sato & Associates to do the work. The Treasury contracting officer subsequently prepared a Justification for Other Than Full and Open Competition, also known as the justification and approval (J&A) document. On December 12, 1994, PSD approved the J&A, authorizing a sole-source award to Sato & Associates. When we asked the contracting officer why she did not attempt to identify other individuals or companies that could perform the contract, she stated that Ms. Lau had told her that Mr. Sato “had unique capabilities which would preclude the award of a management studies contract to anyone else.” On January 9, 1995, Treasury’s PSD awarded a contract at the request of the Treasury OIG to Sato & Associates to perform a management study of the Treasury OIG. The contract specified that the contractor was to produce a report within 13 weeks, which was to focus on the most efficient methods of improving the organization and functioning of the operations of the OIG. Specific areas to be reviewed included office management procedures and practice, staffing, correspondence, automation, and personnel management. The contract was awarded without full and open competition on the basis of unusual and compelling urgency. 
The J&A for the Sato contract provided that “[t]he Government would be injured if the Inspector General is unable to quickly assess any needs for management reform and make any required changes that would ensure that she receives the appropriate staff support for the implementation of her policies.” According to the contracting officer, when she questioned Ms. Lau about the justification for the Sato contract and whether an urgent need existed, Ms. Lau stated that she did not want to divulge too much of “the internal goings-on” in the Inspector General’s Office to the contracting officer. Ms. Lau merely assured the contracting officer that the need was urgent. In explaining her sense of urgency to us, Ms. Lau stated: “I was aware that the office had some major challenges to meet, that we needed to marshal the resources to do the financial audits required by the Government Management Reform Act. That we had some major work to do in terms of identifying the resources to do so. In addition, as the newly appointed head of the Office of Inspector General, I had a 120 day period before I would be able to make any major changes or reassignments of senior executives, and that I wanted to do that as early as possible. I knew I was going into an office with some issues that were getting scrutiny from Congress as well as others. I believed that I needed to have a trusted and experienced group of professionals come in to assist me to do that.
I definitely felt that there was a compelling and urgent need, if you want to use that terminology, because I wanted to ensure that I had, for example, some of the major changes that were necessary to meet the CFO audit by the time the next cycle came around, which in Government fiscal years, the cycle ends September 30th, and so the financial audits that would be required under that would have to be planned and conducted within that time frame.” Other than full and open competition is permitted when the agency has an unusual and compelling urgency such that full competition would seriously injure the government’s interest. We recognize that the challenges Ms. Lau believed she faced and her express desire to make management changes and develop strategies to deal with various audit requirements as soon as possible after taking office provide some support for the OIG’s urgency determination. On the other hand, we are not aware of facts establishing that Ms. Lau’s ability to perform her duties would have been seriously impaired had the procurement of a consultant to perform the management study been delayed by a few months in order to obtain full and open competition. On balance, we believe that there was insufficient urgency to limit competition. It is clear, however, that irrespective of whether it would have been proper to limit competition, issuance of a sole-source contract to Sato & Associates was not proper. As discussed above, unusual and compelling urgency does not relieve an agency from the obligation to seek competition. An agency is required to request offers from as many potential sources as is practicable under the circumstances. It may limit the procurement to only one firm if it reasonably believes that only that firm can perform the work in the available time. 41 U.S.C. section 253(c)(1). The J&A stated that Sato & Associates had a predominant capability to meet the Department of Treasury’s needs. However, Ms.
Lau stated to us that she knew at the time that former Inspectors General Charles Dempsey, Brian Hyland, and Richard Kusserow had been awarded contracts for management reviews. We interviewed two of the three former Inspectors General—Messrs. Dempsey and Hyland—that Ms. Lau knew had done management reviews. Both stated that they could have met the IG’s urgent time frame to perform the contract. In fact, they were hired by Mr. Sato to work on the Treasury OIG contract, performing as consultants. We are aware of no reason why it was impractical for the agency to have requested offers from at least the three other known sources for the work Ms. Lau needed. Nor are we aware of any reason why Sato & Associates was the only firm that could have performed that work in the available time. In fact, Mr. Sato reported to us that he had never performed a management review, while, as Ms. Lau knew, Messrs. Dempsey, Hyland, and Kusserow had done so. Consequently, we conclude that the agency acted in violation of 41 U.S.C. section 253(e) and FAR section 6.302-2(c)(2) by failing to request offers from other potential sources. The contract to Sato & Associates was awarded at a firm fixed price of $88,566, which included estimated travel and per diem costs of $15,296. The contract also contained an unpriced time-and-materials option to assist in implementing recommendations made in the contract’s final report. A second modification to the contract exercised that option and raised the projected cost an estimated $24,760, for a total estimated contract cost of $113,326. The actual amount billed to the government by Mr. Sato for the fixed-price contract and the time-and-materials option totaled $90,776. Federal procurement policy seeks to ensure that the government pays fair and reasonable prices for the supplies and services procured by relying on the competitive marketplace wherever practical. 
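The contract figures cited above can be reconciled with simple arithmetic. The following sketch (in Python; the dollar amounts come from the text, while the variable names are our own) checks that the fixed price plus the exercised option matches the total estimated contract cost, and compares the amount actually billed with the final cost of the similar Interior contract discussed in this testimony.

```python
# Sanity check of the Sato & Associates contract figures cited in the testimony.
# All dollar amounts are taken from the text; variable names are illustrative.

fixed_price = 88_566      # firm fixed price, including travel and per diem
travel_per_diem = 15_296  # estimated travel and per diem (part of the fixed price)
option = 24_760           # estimated cost added by the time-and-materials option

total_estimated = fixed_price + option
assert total_estimated == 113_326  # matches the total estimated contract cost cited

billed = 90_776           # actual amount billed for the fixed-price work and option
interior_final = 28_920   # final cost of the comparable Interior contract

# Ratio of the Treasury billing to the Interior contract's final cost.
ratio = billed / interior_final
print(round(ratio, 2))    # prints 3.14
```

The ratio of roughly 3.1 is consistent with the testimony's characterization of a nearly threefold higher cost for the Treasury contract over the Interior contract.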
We believe that the lack of competition for the award of the Treasury OIG management study may have been the reason for an artificially high price on the Sato & Associates contract. On February 25, 1995, Mr. Sato submitted an unsolicited proposal for $91,012 to the Department of the Interior’s OIG for a contract similar to his Treasury contract. Rather than award a contract to Mr. Sato based on this proposal, the Department of the Interior conducted a full and open competition. In June 1995, Interior awarded a management study contract to Sato & Associates for approximately $62,000 less than the offer in Mr. Sato’s unsolicited proposal. The contract’s final cost was $28,920. Our review of both management study contracts shows that they are similar and that any dissimilarity does not explain a nearly threefold higher cost of the Treasury contract over the Interior contract. The Treasury and Interior contracts contained three identical objectives that the contractor was to focus on in conducting the review and making recommendations. They were to

“a. improve the day-to-day management of the Office of Inspector General

“b. optimize management techniques and approaches

“c. enhance the efficiency . . . productivity of the . . . [OIG].”

The proposals and final reports submitted by the contractor were substantially the same for both jobs. Mr. Sato’s final report for Treasury included 30 recommendations; his Interior report had 26 recommendations. Eighteen of the recommendations in both reports were substantially the same. Messrs. Dempsey and Hyland worked with Mr. Sato on both the Interior OIG and Treasury OIG contracts. Mr. Hyland stated to us that the scope of work on the Interior contract was basically the same as that on the Treasury contract. According to Mr. Dempsey, although he conducted more interviews at Treasury than at Interior, the Treasury contract was worth no more than $40,000, adding that he and Mr.
Hyland could have done “this job in 60 days at $40,000.” Ms. Lau told us that prior to her October 1994 confirmation she had learned that the OIG suffered from severe morale and diversity problems. In the spring of 1995, she requested that the Office of Personnel Management (OPM) conduct a workplace effectiveness study of the OIG. The purpose of the resulting OPM report was to provide the OIG with the necessary information on employee attitudes to assist it in its efforts to remove obstacles to workplace effectiveness. When Ms. Lau made that request, she had anticipated contracting with OPM to develop an implementation plan based on the problems identified in the initial study. However, in April 1995, OPM explained that it was unable to do any follow-on work because of reorganization and downsizing. Instead, in July 1995, OPM provided Treasury OIG a list of 12 consultants who were capable of doing the follow-on work. On July 12, 1995, Ms. Lau’s staff gave her a list of 14 possible consultants to perform the follow-on work—OPM’s list of 12 and 2 others with whom the staff were familiar. Ms. Lau reviewed the list, added two names, and instructed her special assistant to invite bids from at least the six individuals she had identified on the list. On August 17, 1995, OPM conducted a preliminary briefing with senior OIG staff concerning the nature of the OIG problems. Thereafter, Ms. Lau told PSD that an urgent need existed to hire a contractor to perform the follow-on work. She wanted the contract awarded before the annual OIG managers’ meeting scheduled for September 14, 1995, to prove to her managers that she intended to fix the problems identified in the OPM study. (The final report was furnished to the OIG on September 30, 1995; it reported that the OIG suffered from a lack of communication with its employees, severe diversity problems, and a lack of employee trust in management.) OIG staff followed up with the six consultants identified by Ms. Lau.
The staff were unable to contact one consultant, and another consultant could not provide a preliminary proposal by August 30, 1995. With respect to the remaining four consultants, OIG staff met with each one to orally describe the agency’s needs and request written proposals. Following receipt of the proposals and oral presentations by the offerors, two OIG officials selected Kathie M. Libby, doing business as KLS, a consultant from OPM’s list, as the successful contractor. Although one OIG official told us that the evaluation criteria used for evaluating the proposals were based on the OPM recommendations, the other OIG official involved in the selection stated that the selection was based only on a “gut instinct” that KLS would provide a “good fit” with OIG and could do the work. Ms. Lau concurred with the selection. On September 12, 1995, a time-and-materials contract was awarded to KLS. The original term of the contract was from date of award (Sept. 12, 1995) to September 30, 1996. The contract, among other things, called for the contractor to attend the September 14, 1995, OIG conference; review and analyze the OPM survey results; and provide assistance to managers and staff on reaching the goals identified by OPM in its study. It was expected that in the beginning stages of contract performance, KLS would meet with OIG employees weekly, if not daily. Given the complexity of the issues and the desire for lasting improvements, OIG anticipated that KLS’s services would be required for as long as 1 year, although it was anticipated that the services would be on an “on-call” basis during the final stages of the contract. The agency justified limiting the competition on the basis of unusual and compelling urgency. 
The J&A provided as follows: “It is imperative that the services begin no later than September 11, 1995, in order to have the consultants provide a briefing to managers attending the September 14, 1995, OIG managers conference.” This determination reflected Ms. Lau’s concern that while similar management studies had been conducted in the past, historically there had been no follow-through on the studies’ recommendations. It also reflected her desire to show the OIG managers continuity between the OPM survey results and the follow-up work. To that end, the J&A noted that it was imperative that the employees view the change process to be implemented by the consultants as an on-going process rather than a series of “finger in the dike” actions.

Based on the results of our investigation, we conclude that the decision to limit the competition was not reasonable. As explained previously, other than full and open competition is permitted when the agency has an unusual and compelling urgency such that full competition would seriously injure the government’s interest. The agency’s urgency determination was based upon Ms. Lau’s desire to have a management consultant provide a briefing at a management conference to be held a few days after contract award. The KLS consultants did attend the management conference, but they were present for the limited purpose of introducing themselves to the OIG staff and informing them that KLS would work with them to implement the OPM study recommendations. Little else was possible since, although OIG staff had received preliminary results from the OPM study in August 1995, Ms. Libby informed us that it was not until mid-October 1995, well after the OIG management conference, that the KLS consultants received the study results and began work on the contract.

We recognize the importance of Ms. Lau’s desire for her managers to know that she intended to implement the OPM study recommendations. However, we do not believe Ms.
Lau’s ability to convey that message at the management conference and to correct the problems identified in the OPM study would have been seriously impaired had the announcement of the actual consultant been delayed by a few months in order to conduct a full and open competition. Following discussion at the conference of the OPM study, Ms. Lau could have announced that the agency was going to employ a contractor with expertise in the field to perform follow-on work on the OPM study and that the acquisition process would begin as soon as practicable. The announcement of her plans, an expeditious initiation of the acquisition process, and notification of her staff about the contract award should have been sufficient to assure her employees that Ms. Lau was serious about addressing the diversity and morale problems.

When first awarded, the KLS contract had an estimated level of effort of $85,850. The original term of the contract was 1 year. By November 1, 1996, four modifications had increased the contract price to $345,050 (see table 1). Modification 5 extended the contract through September 30, 1997, at no additional cost.

Federal procurement law requires that an agency conduct a separate procurement when it wishes to acquire services that are beyond the scope of an existing contract. A matter exceeds the scope of the contract when it is materially different from the original contract for which the competition was held. The question of whether a material difference exists is resolved by considering such factors as the extent of any changes in the type of work, the performance period, and costs between the contract as awarded and as modified, as well as whether potential bidders reasonably would have anticipated the modification. In our view, the largest modification (Modification 4) materially deviated from the original contract’s scope of work and should have been the subject of a separate procurement action.
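The price growth summarized in table 1 can be checked arithmetically. The sketch below uses only figures quoted in this report; because the Modification 3 amount is stated only in table 1, it is derived here as the residual of the other known amounts.

```python
# Back-of-the-envelope check of the KLS contract figures cited in this
# report. The amounts for Modifications 1, 2, and 4 are quoted in the
# text; Modification 3's amount appears only in table 1, so it is
# inferred here as the residual.
original_award = 85_850   # estimated level of effort at award
mod_1 = 30_800            # headquarters and field briefings
mod_2 = 78_400            # additional steering groups and training hours
mod_4 = 148_600           # performance appraisal work, 6-month extension
final_price = 345_050     # contract price as of November 1, 1996

mod_3 = final_price - (original_award + mod_1 + mod_2 + mod_4)
print(mod_3)                                   # 1400 (travel and materials)
print(round(final_price / original_award, 1))  # 4.0, i.e., roughly fourfold
```

The result is consistent with the roughly fourfold increase in the contract’s total price noted elsewhere in this report.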
Modification 4 increased the contract price by $148,600 and extended the contract period of performance by 6 months. About half of the work under this modification was the same type of work that had been performed under the original contract; however, the other half was beyond the contract’s scope of work and would not reasonably have been anticipated by potential bidders. It involved revising the OIG’s performance appraisal system. Although the OPM study referenced employee concerns with the OIG performance appraisal system, nothing in the contract called for the contractor to work with OIG to modify that system. Ms. Libby herself stated that Modification 4 significantly changed the original scope and contract requirements and that she was surprised competition was not held for this work. In our view, this modification was beyond the contract’s scope of work and would not have been appropriate even if the OIG could have justified its urgency determination for the original procurement.

In addition to legal improprieties in the manner in which the agency awarded and assigned tasks under the contract, we found a pattern of careless management in the procurement process and in oversight of performance under the contract. We believe such careless management could have contributed to an increased cost for the work performed under the contract.

Good procurement planning is essential to identifying an agency’s needs in a timely manner and contributes to ensuring that the agency receives a reasonable price for the work. Little or no procurement planning took place prior to making the award here. Although proposals were solicited to do follow-on work relating to recommendations from an OPM study on diversity and workplace morale, the OIG had not received the OPM study and had only been briefed on the preliminary findings at the time of the solicitation.
The OIG therefore did not have sufficient information to adequately identify its needs and clearly articulate a set of goals for the change process to be implemented. Furthermore, OIG did not prepare a written solicitation, including a statement of work. One important purpose of a written statement of work is to communicate the government’s requirements to prospective contractors by describing its needs and establishing time frames for deliverables. The OIG instead relied upon oral communications and failed to effectively communicate with the consultants from whom it solicited proposals. Had the OIG waited until it received the OPM report, carefully analyzed OPM’s recommendations, determined what it needed, and adequately communicated these needs in a written solicitation, we believe the OIG would have received a better proposal initially, and one that might have been at a lower overall price.

In this regard, Ms. Libby explained to us that the OIG had not specifically identified its needs to her and that she had misunderstood the work to be performed as explained in her initial telephone conversation with the OIG. Her proposal was based on her belief that the OIG already had management task forces or employee groups studying what changes were needed to address the issues raised in the OPM study and that KLS was to serve only in an advisory capacity to those working groups. However, soon after conducting her initial briefings, she learned that this was not the case and that the work that needed to be done was different from what she believed when she presented her proposal. As a result, shortly after she began work, Ms. Libby informed OIG that more work was necessary under the contract than she had originally envisioned. This led to the first three modifications under the contract. Modification 1 was issued soon after the contract was awarded.
It called for KLS to design and conduct briefings with OIG staff both in headquarters and in the field, adding $30,800 to the costs of the original contract. Modification 2 also increased the level of effort, adding $78,400 to the contract. According to a memorandum from the contracting officer, this modification was necessary because KLS’s technical proposal had suggested the establishment of one steering group whereas additional groups were needed. The modification also significantly increased the training hours to be expended by KLS. Modification 3 resulted from the need to increase the amount of “other direct costs” to allow for travel and material costs for KLS to contribute to the 1996 OIG managers’ conference.

Although each of these three modifications was within the scope of work contemplated by the initial contract, this increased work was apparently necessary because OIG had not adequately determined its requirements at the beginning of the procurement process and conveyed them to KLS. Had the agency adequately planned for the procurement and identified its needs, this work could have been included in the original contract and the modifications would not have been required. Similarly, had the OIG properly analyzed the OPM recommendations, it could have determined whether revision of the performance appraisal system should have been included in the scope of the original contract or the work procured separately—thus eliminating Modification 4. Furthermore, had the OIG determined the nature of the work involved in revising the performance appraisal system, specific deliverables and time frames for that work could have been established.
None of this was done in Modification 4, which merely stated that the modification was “to complete change process transition to include establishing a permanent self-sustaining advisory team, work with in-house committees on complex systems changes, and to establish procedures which will withstand changes in senior management personnel.” An OIG official told us that revision to the performance appraisal process had been on-going for 2 years and that the revisions to the system had still not been completed as of June 1997.

We also identified management deficiencies in oversight of the work performed under the contract. In several instances, KLS performed and billed for work that was not included in the contract statement of work. As stated previously, pursuant to Modification 4, KLS was authorized to make revisions to the OIG performance appraisal system. However, prior to this modification, one of KLS’s employees performed this type of service, working with employee groups to address generic critical job elements and standards, rating levels, and an incentive award system to complement the performance appraisal system. Furthermore, the OIG official responsible for authorizing payment for work performed under the contract told us that she did not verify that any work had been performed under the contract prior to authorizing payment. She also told us that she did not determine whether documentation for hotel and transportation costs claimed by KLS had been received even though she authorized payment for these travel expenses.

Allegations concerning IG Lau’s trips to California suggested that she had used these trips, at taxpayers’ expense, to visit her mother, a resident of the San Francisco Bay area. A review of Ms. Lau’s travel vouchers revealed that she had made 22 trips between September 1994 and February 1997 (30 months)—5 to California, of which 3 included stops in San Francisco. During the three trips that included San Francisco, Ms. Lau took a total of 9 days off.
During these 9 days, she charged no per diem or expense to Treasury. Her travel to California, including the San Francisco area, was scheduled for work-related reasons. See table 2.

We conducted our investigation from May 13 to October 8, 1997, in Washington, D.C., and Seattle, Washington. We interviewed Treasury officials, including current and former OIG officials, and contractors and staff involved in the two procurements discussed in this report. We reviewed pertinent government regulations, OIG contract files, OIG contracting policies and procedures, and Interior OIG documents concerning Sato & Associates’ review of its operation. We also reviewed Ms. Lau’s financial disclosure statements, travel vouchers, and telephone logs. Finally, we reviewed prior GAO contracting decisions relevant to the subject of our investigation.

As arranged with your office, unless you announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of the report to interested congressional committees; the Secretary of the Treasury; and the Inspector General, Department of the Treasury. We will also make copies available to others on request. If you have any questions concerning our investigation, please contact me or Assistant Director Barney Gomez on (202) 512-6722. Major contributors are listed in appendix I.

Aldo A. Benejam, Senior Attorney
Barry L. Shillito, Senior Attorney
Pursuant to a congressional request, GAO reviewed the award of a sole-source contract to Sato & Associates for a management study of the Department of the Treasury's Office of Inspector General (OIG) and of a consulting services contract to Kathie M. Libby, doing business as KLS, using other than full and open competition. GAO also reviewed the nature and purpose of trips to California made by Treasury Inspector General (IG) Valerie Lau since her appointment. GAO noted that: (1) shortly after her confirmation as Inspector General, Ms. Lau notified the Treasury Procurement Services Division (PSD) that she wanted Sato to perform a management review; (2) PSD awarded a sole-source management study contract to Sato on the basis of unusual and compelling urgency; (3) although Ms. Lau stated that the need to limit competition was urgent because of the need to make reassignments in the senior executive ranks and to marshal the resources needed to conduct audits, there was insufficient urgency to limit competition; (4) the price of Sato's contract for the Treasury OIG effort appears to be artificially high, in light of the fact that the firm performed a similar review of the Department of the Interior OIG for approximately $62,000 less; (5) in September 1995 PSD awarded a time-and-materials, consulting services contract to Libby to review and analyze an Office of Personnel Management (OPM) report on morale and diversity problems in the OIG office and assist OIG managers and staff concerning goals identified in the OPM study; (6) the contract was awarded on the basis of unusual and compelling urgency following limited competition; (7) the justification for limiting competition was not reasonable, since Ms. 
Lau could still have conveyed to her managers that the problems identified in the OPM study would be addressed, and corrected those problems, had the consultant selection been delayed a few months to obtain full and open competition; (8) the largest modification made to the KLS contract was outside the scope of the contract and should have been obtained through a separate, competitive procurement; (9) GAO identified a pattern of careless management in the procurement process and in oversight of performance under the KLS contract; (10) OIG failed to fully understand and articulate its needs, resulting in a fourfold increase in the contract's total price and a 1-year extension to the period of performance; (11) OIG paid for work that was not authorized, and payments were made without verification that work had been done and without determining that travel and transportation cost documents had been received; and (12) all five of Ms. Lau's trips to California made between September 1994 and February 1997 were scheduled for work-related reasons.
Section 482 of Title 10 of the United States Code requires DOD to report quarterly to Congress on military readiness. The report is due to Congress not later than 45 days after the end of each calendar-year quarter (i.e., by May 15, August 14, November 14, and February 14 of each year). Congress first mandated the report in 1996 to enhance its oversight of military readiness, requiring that DOD describe each readiness problem and deficiency, the key indicators and other relevant information related to these problems and deficiencies, and planned remedial actions. DOD submitted its first quarterly report in May 1996.

Since that time, Congress has imposed additional reporting requirements. Specifically, in 1997, the initial reporting requirement was expanded to require DOD to include additional reporting elements in the quarterly reports. Examples of these additional reporting elements include historical and projected personnel trends, training operations tempo, and equipment availability. In 2008, an additional reporting element was added to require the inclusion of an assessment of the readiness of the National Guard. For a listing of the 26 reporting elements currently required by section 482, see table 1.

Since DOD provided its first quarterly readiness report in May 1996, DOD and the services have invested significant resources in upgrading the systems they use to collect and report readiness information. As a result, the Office of the Secretary of Defense, the Joint Staff, the combatant commands, and the services have added numerous new readiness reporting capabilities, such as the capacity to assess the ability of U.S. forces to meet mission requirements in specific operational plans. In addition, the services have also refined their respective service-specific metrics to enhance their ability to measure the readiness of their forces.
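The statutory due dates cited above follow mechanically from the 45-day rule; a minimal sketch verifying them (calendar year 2013 is used for illustration):

```python
from datetime import date, timedelta

# Section 482 reports are due not later than 45 days after the end of
# each calendar-year quarter; verify the due dates cited above.
quarter_ends = [date(2013, 3, 31), date(2013, 6, 30),
                date(2013, 9, 30), date(2013, 12, 31)]
due_dates = [q + timedelta(days=45) for q in quarter_ends]
print([d.strftime("%B %d") for d in due_dates])
# ['May 15', 'August 14', 'November 14', 'February 14']
```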
The Quarterly Readiness Report to Congress is a classified report that includes a summary of the contents of the report and multiple classified annexes that provide the required information. The report is typically hundreds of pages long. For example, the July through September 2012 Quarterly Readiness Report to Congress totaled 443 pages, and the January through March 2013 report was 497 pages long.

The Office of the Under Secretary of Defense for Personnel and Readiness assembles and produces the quarterly report to Congress. To do so, it compiles information from multiple DOD organizations, including the Joint Staff and the military services, adds its own information, such as a summary of overall readiness status, and prepares a draft report. It then sends the draft report to DOD components to review for accuracy, and coordinates any comments. Once finalized, Office of the Under Secretary of Defense for Personnel and Readiness officials provide the report to the congressional defense committees (see figure 1).

We have previously examined the extent to which DOD’s quarterly readiness reports met section 482 reporting elements, and found that DOD’s reports lacked detail or, in some cases, information required by law. For example:

In 1998, we reported that DOD’s quarterly readiness reports did not discuss the precise nature of identified readiness deficiencies, and that information on planned remedial actions could be more complete and detailed to include specifics on timelines and funding requirements.

In 2003, we reported that DOD’s quarterly reports still contained broad statements of readiness issues and remedial actions, which were not supported by detailed examples. We also identified gaps in the extent to which DOD addressed the required reporting elements. For example, DOD was not reporting on borrowed manpower, personnel morale, and training funding.

In both reports, we recommended actions DOD could take to improve its readiness reporting.
Since our 2003 review, DOD has made adjustments and expanded its readiness reporting to Congress in some areas. In its quarterly readiness reports that covered the period from April 2012 through March 2013, DOD addressed most of the 26 reporting elements required by section 482 but partially addressed some elements and did not address some other elements. We found that, for the areas that were addressed or partially addressed, the services submitted different amounts and types of information because the Office of the Secretary of Defense has not provided guidance on the information to be included in the quarterly reports. Further, we found that information may exist in the department on some of the reporting elements DOD did not address, but that DOD has not analyzed alternative information that it could provide to meet the required reporting elements.

DOD’s four quarterly readiness reports that cover the period from April 1, 2012, through March 31, 2013, mostly addressed the 26 required reporting elements. In analyzing the three reports that covered the period from April 1 through December 31, 2012, we found that DOD addressed 17 elements, partially addressed 3 elements, and did not address 6 elements. In the January 1 through March 31, 2013, report, DOD’s reporting remained the same except that it addressed an additional element that had not previously been addressed. As a result, our analysis for this report showed it addressed 18 elements, partially addressed 3 elements, and did not address 5 elements. Figure 2 summarizes our assessment of the extent to which DOD’s quarterly reports addressed the section 482 reporting elements.

We assessed elements as being addressed when the information provided in the report was relevant to the reporting elements set out in section 482. For example, for training unit readiness and proficiency, each of the services provided its current and historical training readiness ratings.
Similarly, for recruit quality, each of the services provided high school graduation rates of recruits.

For some of the elements, DOD reported information that was incomplete or inconsistent across the services. Specifically, as shown below, for the three required reporting elements that DOD partially addressed, the information was incomplete, with only some services providing information on personnel stability, training operations tempo, and deployed equipment:

Personnel stability: The Air Force, Marine Corps, and Navy provided information on retention rates, but the Army did not provide any information on this element.

Training operations tempo: The Marine Corps and Navy provided information on the pace of training operations, but the Army and Air Force did not provide any information on this element.

Deployed equipment: The Navy provided information on the number of ships deployed, but the other three services did not provide any information on this element.

Further, in instances when the services reported information on a required element, they sometimes did so inconsistently, with varying amounts and types of information. For example:

The Air Force and Marine Corps both reported information on the age of certain equipment items, but they did not report the same amount and type of information. The Air Force reported the average age of equipment by broad types of aircraft (e.g., fighters, bombers), while the Marine Corps reported average age of specific aircraft (e.g., F/A-18, MV-22), as well as the age of its oldest equipment on hand, expected service life, and any impact of recapitalization initiatives on extending the expected service life of the equipment.

The services all reported information on training commitments and deployments, but did not report the same amounts and types of information. First, the services used different timeframes when providing information on training commitments and deployments.
The Army provided planned training events for fiscal years 2012 through 2018, the Air Force and Marine Corps provided planned training events for fiscal years 2012 through 2014, and the Navy did not provide any information on planned training events in the future. Second, the Air Force and the Navy provided information on the number of training events executed over the past two years, while the Army and Marine Corps did not.

We found that the services have submitted different amounts and types of information to meet reporting elements because the Office of the Secretary of Defense has not provided guidance on the information to be included in the quarterly reports. Service officials told us they have received informal feedback from the Office of the Secretary of Defense regarding the data and charts they submit for inclusion in the quarterly readiness reports. For example, they have received informal suggestions for changes to how the readiness information is presented. However, service officials explained that they have not received clear guidance or instructions on the type and amount of information to present. As a result, the services have used their own judgment on the scope and content of readiness information they provide to meet the required reporting elements. Because the services report different types and amounts of information and DOD has not clarified what information should be reported to best address the required elements, the users of the report may not be getting a complete or consistent picture of the key indicators that relate to certain elements.

For its three quarterly readiness reports that covered the period from April 1 through December 31, 2012, DOD did not provide any information on 6 of the 26 required elements, although in its January through March 2013 report DOD did provide information on 1 previously unaddressed element, specifically planned remedial actions.
The required elements that remain unaddressed are personnel serving outside their specialty or grade, personnel morale, training funding, borrowed manpower, and the condition of nonpacing items. We found instances where information may exist within the department for some of these elements DOD did not report on. For example:

Extent to which personnel are serving in positions outside their specialty or grade: The Navy internally reports fit and fill rates, which compare personnel available by pay grade and Navy skill code against the positions that need to be filled. Such information could potentially provide insight into the extent to which the Navy fills positions using personnel outside of their specialty or grade.

Personnel morale: We found multiple data sources that provide information related to this required reporting element. For example, DOD’s Defense Manpower Data Center conducts a series of Web-based surveys called Status of Forces surveys, which include measures of job satisfaction, retention decision factors, and perceived readiness. Also, DOD’s Morale, Welfare, and Recreation Customer Satisfaction Surveys regularly provide information on retention decision indicators. Finally, the Office of Personnel Management conducts a regular survey on federal employees’ perceptions of their agencies called the Federal Employee Viewpoint Survey; the results of this survey are summarized in an Office of Personnel Management report, and provide insights into overall job satisfaction and morale at the department level.

Training funding: DOD’s fiscal year 2014 budget request contained various types of information on training funding. For example, the request includes funding for recruit training, specialized skills training, and training support in the Marine Corps and similar information for the other services.
Borrowed manpower: We found that the Army now requires commanders to report on the readiness impacts of borrowed military manpower in internal monthly readiness reports. Specifically, on a quarterly basis, beginning no later than June 15, 2013, senior leaders will brief the Secretary of the Army on borrowed manpower with a focus on training and readiness impacts.

For the condition of nonpacing items element, officials from the Office of the Under Secretary of Defense for Personnel and Readiness noted that there is not a joint definition of nonpacing items across the services. The Army defines pacing items as major weapon systems, aircraft, and other equipment items that are central to the organization’s ability to perform its core functions/designed capabilities, but service officials reported that they do not collect any information related to nonpacing items.

As noted previously, section 482 requires that DOD address all 26 reporting elements in its quarterly readiness reports to Congress. When asked why DOD did not provide information on certain required reporting elements, officials from the Office of the Under Secretary of Defense for Personnel and Readiness cited an analysis included in the implementation plan for its readiness report to Congress in 1998. This analysis concluded that DOD could not provide the required data at that time because, among other reasons, it lacked the metrics to capture the required data. In the 1998 implementation plan, DOD noted that addressing the section 482 reporting elements was an iterative process, recognizing that the type and quality of readiness information was likely to evolve over time as improvements to DOD’s readiness reporting and assessment systems came to fruition. DOD stated that it intended to continue to review and update or modify the readiness information as necessary to improve the report’s utility in displaying readiness.
However, since it issued its initial implementation plan, DOD has not analyzed alternative information, such as Navy fit and fill rates or satisfaction survey results, which it could provide to meet the required reporting elements. DOD officials told us they intend to review the required reporting elements to determine the extent to which they can address some of the elements that they have consistently not reported on and, if they still cannot address the elements, to possibly request congressional modifications on the required content of the reports. However, they said that they had not yet begun or set a specific timetable for this review. Without analyzing alternative information it could provide to meet the required reporting elements, DOD risks continuing to provide incomplete information to Congress, which could hamper its oversight of DOD readiness.

DOD has taken steps to improve the information in its Quarterly Readiness Reports to Congress over time. However, we found several areas where additional contextual information, such as benchmarks or goals, and clear linkages between reported information and readiness ratings, would provide decision makers a more complete picture of DOD’s readiness.

Over time, based on its own initiative and specific congressional requests for information, DOD has added information to its reports. For example, in 2001, it added data on cannibalizations—specifically the rates at which the services are removing serviceable parts from one piece of equipment and installing them in another. This information was added in response to a requirement in the 2001 National Defense Authorization Act that the readiness reporting system measure “cannibalization of parts, supplies, and equipment.” In 2006, DOD added capability-based assessment data from the Defense Readiness Reporting System and detailed information on operational plan assessments. Operational plan assessments gauge combatant commands’ ability to successfully execute key plans and provide insight into the impact of sourcing and logistics shortfalls and readiness deficiencies on military risk. In 2009, it added brigade and regimental combat team deployment information.

In compiling its January through March 2013 Quarterly Readiness Report to Congress, DOD made several structural changes to expand its reporting on overall readiness. Specifically, the Office of the Secretary of Defense added narrative information and other sections, and made more explicit linkages between resource needs and readiness deficiencies in order to convey a clearer picture of the department’s readiness status and concerns. In that report, DOD added:

Narrative information detailing the impact of readiness deficiencies on overall readiness.

Discussions of how the military services’ fiscal year 2014 budgets support their long-term readiness goals.

Examples of remedial actions to improve service readiness.

A section highlighting significant changes from the previous quarter.

Office of the Secretary of Defense officials told us that they plan to sustain these changes in future quarterly readiness reports to Congress.

We found several areas where adding contextual information to the quarterly readiness reports, such as benchmarks or goals, and clearer linkages between reported information and readiness ratings, would provide Congress with a more comprehensive and understandable report. Federal internal control standards state that decision makers need complete and relevant information to manage risks. This includes providing pertinent information that is identified and distributed in an understandable form.
In some instances, the services report significant amounts of quantitative data, but do not always include information on benchmarks or goals that would enable the reader to distinguish between acceptable and unacceptable levels in the data reported. For example, when responding to the required reporting element on equipment that is not mission capable:

- The Marine Corps and Air Force report mission capable rates for all of their equipment, but do not provide information on related goals, such as the percentage of each item’s inventory that should be kept at various mission capability levels.
- The Navy reports on the number of ships that are operating with a mechanical or systems failure. While the Navy explains that this may or may not impact the mission capability of the vessel, it does not provide what it considers an acceptable benchmark for the number of ships that operate with these failures or the number of failures on each ship.

In the absence of benchmarks or goals, the reader cannot assess the significance of any reported information because it is not clear whether the data indicate a problem or the extent of the problem. In other instances, the services have not fully explained the voluminous data presented on the required reporting elements or set the context for how it may or may not be connected to the information DOD provides in the report on unit equipment, training, and personnel readiness ratings and overall readiness. For example:

- The services provide detailed mission capable rate charts and supporting data for dozens of aircraft, ground equipment, and other weapons systems. For the January through March 2013 readiness report, the services collectively provided 130 pages of charts, data, and other information on their mission capable equipment rates; this accounted for over 25 percent of the entire quarterly report. However, the services do not explain the extent to which these mission capable rates are, or are not, linked to equipment readiness ratings or overall readiness that is also presented in the quarterly reports.
- In the area of training, the Navy provides data showing the number of training exercises completed over the past two years, but does not provide any explanation regarding how this information affects training readiness ratings that are also presented in the quarterly reports.
- In the area of logistics, although the Army and the Air Force provide depot maintenance backlogs, they do not explain the effect the backlogs have on unit readiness that is also discussed in the report. Specifically, those services do not explain whether units’ readiness is affected or could be affected in the future because maintenance was not accomplished when needed.

Without providing additional contextual information, such as benchmarks and clearer linkages, it is unclear how, if at all, the various data on the required elements affected unit and overall readiness. To oversee DOD’s efforts to maintain a trained and ready force, and make decisions about related resource needs, congressional decision makers need relevant, accurate, and timely readiness information on the status of the military forces. DOD continues to address many of the required reporting elements in its quarterly readiness reports to Congress and has periodically revised the content of the information it presents, which is an important step to making the reports more useful. However, as reflected in its more recent reports for 2012 and 2013, DOD has not always reported or fully reported on some elements, and sometimes presents detailed readiness data without sufficient context on how this information relates to or affects the information it provides on overall readiness or readiness in specific resource areas, such as equipment, personnel, and training.
Without further analyzing whether information is available within the department to address the elements that it is not currently addressing, DOD cannot be sure that it has the information it needs to enhance the quality of its reporting or present options to the Congress for adjusting reporting requirements. Furthermore, unless DOD provides guidance to the services on the amount and types of information to be included in the quarterly reports, including requirements to provide contextual information such as criteria or benchmarks for distinguishing between acceptable and unacceptable levels in the data reported, DOD is likely to continue to be limited in its ability to provide Congress with complete, consistent, and useful information.

To improve the information available to Congress in its quarterly readiness reports, we recommend that the Secretary of Defense direct the Office of the Under Secretary of Defense for Personnel and Readiness to take the following three actions:

- Analyze alternative sources of information within DOD that it could provide to meet required reporting elements that DOD has not addressed in past reports;
- Issue guidance to the services on the type and amount of information to be included in their submissions for the quarterly readiness report; and
- Incorporate contextual information in the quarterly readiness reports such as clear linkages between reported information on the required elements and readiness ratings, and benchmarks for assessing provided data to enable the reader to distinguish between acceptable and unacceptable levels in the data reported.

In written comments on a draft of this report, DOD concurred with two recommendations and partially concurred with one recommendation. DOD’s comments are reprinted in their entirety in appendix II. DOD provided technical comments during the course of the engagement, and these were incorporated as appropriate.
In its overall comments, DOD noted that its goal is to provide the most accurate and factual representation of readiness to Congress through its quarterly reports and that its ability to accomplish this relies upon our recommendations, which should facilitate improvements. DOD stated that our recommendations will be incorporated in the ongoing process of producing the quarterly readiness reports and will hopefully improve the ability to interpret the product while assisting the services in relaying their readiness concerns. DOD also provided detailed comments on each of our recommendations. DOD partially concurred with our recommendation that the Secretary of Defense direct the Office of the Under Secretary of Defense for Personnel and Readiness to analyze alternative sources of information within DOD that it could provide to meet required reporting elements that DOD has not addressed in past reports. DOD stated the iterative process that is used to improve quarterly readiness reports to Congress will continue to seek alternative sources of information that could provide a more holistic picture of readiness across the force and that improvements in reporting capabilities and adjustments to reported readiness information should be available to provide all of the information required by section 482 of Title 10. DOD noted that it provides information on one required element, training funding, within its annual budget requests. DOD stated it will investigate ways to incorporate surrogate methods of reporting in future reports. DOD concurred with our recommendation that the Secretary of Defense direct the Office of the Under Secretary of Defense for Personnel and Readiness to issue guidance to the services on the type and amount of information to be included in their submissions for the quarterly readiness report.
DOD stated that it will continue to issue guidance to the individual services regarding types and amounts of information that may improve the readiness analysis and advance the comparative nature of separate services. DOD stated that the individual services may use distinct measures to determine specific levels of their readiness and the ability to compare these measures may not be possible or occur quarterly. Where feasible, DOD stated it will continue to attempt to align information and improve the clarity of readiness throughout the department. DOD concurred with our recommendation that the Secretary of Defense direct the Office of the Under Secretary of Defense for Personnel and Readiness to incorporate contextual information in the quarterly readiness reports such as clear linkages between reported information on the required elements and readiness ratings, and benchmarks for assessing provided data to enable the reader to distinguish between acceptable and unacceptable levels in the data reported. DOD stated that a concerted effort is made to continuously improve the quality of analysis as well as assist with the explanation of linkages between raw data and readiness. DOD stated that this effort is tempered with the need to reduce the volume of information and provide sound examination of the effects of this data on the force. DOD noted a succinct version of readiness is provided in the executive summary included in recent reports. DOD also noted that a longer narrative supplement will continue to be provided in an attempt to enhance the clarity of the linkages and judgment of acceptability regarding the reported readiness across the force. We are sending copies of this report to the Secretary of Defense, the Under Secretary of Defense for Personnel and Readiness, the Secretary of the Air Force, the Secretary of the Army, the Secretary of the Navy, the Commandant of the Marine Corps, and appropriate congressional committees.
In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9619 or pickups@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. To determine the extent to which the Department of Defense (DOD) addressed required reporting elements in its quarterly readiness reports to Congress, we reviewed legislation governing DOD readiness reporting, including provisions in Title 10, and interviewed DOD officials. We analyzed the four most recent Quarterly Readiness Reports to Congress that covered the period from April 1, 2012 through March 31, 2013 and compared the reported readiness information in these reports to the Title 10 requirements to identify any trends, gaps, or reporting inconsistencies. Specifically, we developed an evaluation tool based on Title 10 section 482 reporting requirements to assess the extent to which the April through June 2012, July through September 2012, October through December 2012, and January through March 2013 Quarterly Readiness Reports to Congress addressed these elements. Using scorecard methodologies, two GAO analysts independently evaluated the quarterly readiness reports against the elements specified in section 482. The analysts rated compliance for each element as “addressed” or “not addressed.” After the two analysts completed their independent analyses, they compared the two sets of observations and discussed and reconciled any differences.
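The scorecard approach described above, in which two analysts rate each required element independently and then reconcile any differences, can be sketched in a few lines of code. This is an illustrative model only, not GAO's actual evaluation tool; the element names and ratings below are hypothetical placeholders.

```python
# Illustrative sketch of a two-rater scorecard with a reconciliation step.
# Not GAO's actual tool; element names and ratings are hypothetical.

RATINGS = {"addressed", "not addressed"}

def reconcile(rater_a, rater_b):
    """Merge two independent scorecards: ratings both analysts agree on
    are accepted; disagreements are flagged for discussion."""
    assert rater_a.keys() == rater_b.keys(), "both raters must score every element"
    agreed, to_discuss = {}, []
    for element, rating_a in rater_a.items():
        rating_b = rater_b[element]
        assert rating_a in RATINGS and rating_b in RATINGS
        if rating_a == rating_b:
            agreed[element] = rating_a
        else:
            to_discuss.append((element, rating_a, rating_b))
    return agreed, to_discuss

# Hypothetical independent ratings of three reporting elements
analyst_1 = {
    "readiness deficiencies": "addressed",
    "borrowed manpower": "not addressed",
    "training funding": "not addressed",
}
analyst_2 = {
    "readiness deficiencies": "addressed",
    "borrowed manpower": "addressed",
    "training funding": "not addressed",
}

agreed, to_discuss = reconcile(analyst_1, analyst_2)
```

Here `agreed` holds the elements both analysts rated identically, while `to_discuss` flags the one disagreement for the joint reconciliation step described above.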
We also interviewed officials from the Office of the Under Secretary of Defense for Personnel and Readiness, the Joint Staff Readiness Division, and each of the military services and obtained additional information and the officials’ views of our assessments, as well as explanations of why certain items were not addressed or not fully addressed. To determine what additional information, if any, could make the reports more useful, we reviewed the types of readiness information DOD uses internally to manage readiness contained in documents such as the Joint Force Readiness Review and various service-specific readiness products, and compared their formatting and contents to the four reports identified above. We reviewed the content of these reports in the context of federal internal control standards, which state that decision makers need complete and relevant information to manage risks. This includes pertinent information that is identified and distributed in an understandable form. We interviewed officials from the Office of the Under Secretary of Defense for Personnel and Readiness, the Joint Staff Readiness Division, and each of the military services and discussed the procedures for compiling and submitting readiness information for inclusion in the quarterly readiness reports, changes in the reports over time, and the Office of the Secretary of Defense’s process for compiling the full report. We also identified adjustments DOD has made to its reports, including changes the Office of the Under Secretary of Defense for Personnel and Readiness made in preparing the January through March 2013 report, and the underlying reasons for these adjustments, as well as obtained the views of officials as to opportunities to improve the current reporting. We conducted this performance audit from August 2012 through July 2013 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Michael Ferren, Assistant Director; Richard Burkard; Randy Neice; Amie Steele; Shana Wallace; Chris Watson; and Erik Wilkins-McKee made key contributions to this report.
Congress and DOD need relevant, accurate, and timely readiness information to make informed decisions about the use of military forces, and related resource needs. To that end, Congress requires DOD to submit a quarterly readiness report addressing various elements related to overall readiness, personnel, training, and equipment. A committee report accompanying the National Defense Authorization Act for Fiscal Year 2013 mandated that GAO report on the type of readiness information available to Congress and DOD decision makers and the reported readiness of U.S. forces. In May 2013, GAO provided a classified report on readiness trends of DOD forces. For this report, GAO evaluated (1) the extent to which DOD addressed required reporting elements in its quarterly readiness reports to Congress, and (2) what additional information, if any, could make the reports more useful. GAO analyzed various readiness reports and supporting documentation, and interviewed cognizant officials.

In its quarterly readiness reports that covered the period from April 2012 through March 2013, the Department of Defense (DOD) addressed most but not all required reporting elements. Section 482 of Title 10 of the U.S. Code requires DOD to report on 26 elements including readiness deficiencies, remedial actions, and data specific to the military services in the areas of personnel, training, and equipment. In analyzing DOD's reports, GAO found that DOD addressed 18 of the 26 elements, partially addressed 3 elements, and did not report on 5 elements. For the elements partially addressed—personnel stability, training operations tempo, and deployed equipment—reporting was incomplete because some services reported information and others did not report. When all the services reported on an element, they at times did so inconsistently, with varying amounts and types of information.
For example, the services all reported information on training commitments and deployments, but used different timeframes when providing information on planned training events in the future. The services reported differently because DOD has not provided guidance on the information to be reported. For the elements that DOD did not address, including borrowed manpower and training funding, GAO found that information may exist in the department but is not being reported to Congress. For example, the Army now requires commanders to report monthly on the readiness impacts of borrowed military manpower and DOD's budget requests include data on training funding. However, DOD has not taken steps to analyze whether this information could be used to meet the related reporting element. Without issuing guidance on the type and amount of information to be included by each service and analyzing alternative information it could provide to meet the required elements, DOD risks continuing to provide inconsistent and incomplete information to Congress. DOD has taken steps to improve its quarterly readiness reports to Congress, but additional contextual information would provide decision makers a more complete picture of DOD's readiness. Over time, based on its own initiative and congressional requests, DOD has added information to its reports, such as on operational plan assessments. In its most recent report, DOD added narrative information detailing the impact of readiness deficiencies on overall readiness and a discussion of how the services' budgets support their long-term readiness goals. Federal internal control standards state that decision makers need complete and relevant information to manage risks, and GAO found several areas where DOD could provide Congress with more comprehensive and understandable information if it added some additional context to its reports. 
For example, in some instances, the services report significant amounts of quantitative data, but do not include information on benchmarks or goals that would enable the reader to determine whether the data indicate a problem or the extent of the problem. For example, the Marine Corps and Air Force report mission capable rates for their specific equipment items, but do not provide information on related goals, such as the percentage of the inventory that should be kept at various capability levels. In other instances, the services have not fully explained any connections between the voluminous data they report on the required elements and the information DOD provides in the report on unit and overall readiness ratings. Without providing additional contextual information, DOD's quarterly reports may not provide clear information necessary for congressional oversight and funding decisions. GAO recommends that DOD analyze alternative sources of information that could be used to meet the required reporting elements, issue guidance on the type and amount of information to be included by each service, and incorporate contextual information to improve the clarity and usefulness of reported information. DOD generally agreed with the recommendations.
The E-Gov Act was enacted into law on December 17, 2002. The act’s provisions add to a variety of previously established statutory requirements regarding federal information and IT management, such as the Paperwork Reduction Act, which also prescribes responsibilities within OMB for overseeing information and IT management in the federal government. Appendix II provides further details on the statutory framework for federal information and IT management. Even before passage of the E-Gov Act, OMB was working on e-government issues, primarily through its Office of Information and Regulatory Affairs (OIRA) and through the activities of the Associate Director for Information Technology and E-Government (the predecessor position to the current Administrator of the Office of Electronic Government). In February 2002, OMB issued its first E-Government Strategy and designated 24 high-profile initiatives to lead the government’s transformation to e-government. Title I of the E-Government Act established the Office of Electronic Government within OMB, to be headed by an Administrator. The Administrator’s responsibilities include assisting the Director in carrying out the act and other e-government initiatives, including promoting innovative use of IT by agencies, overseeing the E-Government Fund, and leading the activities of the federal Chief Information Officers Council; working with the OIRA Administrator in setting strategic direction for e-government under relevant laws, including the Paperwork Reduction Act and the Clinger-Cohen Act; and working with the OIRA Administrator and other OMB offices to oversee implementation of e-government under the act and other laws, including the Paperwork Reduction Act, relating to IT management, enterprise architecture, information security, privacy, access, dissemination, preservation, accessibility of IT for persons with disabilities, and other areas of e-government. 
Title II of the E-Gov Act contains 16 sections that include a range of provisions aimed at promoting electronic government services and increasing citizen access to and participation in government. The sections of Title II address such topics as maintaining and promoting a federal Internet portal to make government information more accessible to the public, protecting the privacy of personal information, establishing a framework for use of electronic signatures for secure transactions with government, and providing online access to documents filed electronically with federal courts. Appendix I contains a complete list of the Title II sections included in our review. Overall, OMB and federal agencies have made progress implementing Titles I and II of the E-Gov Act. In April 2003, OMB established the Office of E-Government (also known as the Office of E-Government and Information Technology) and designated its Assistant Director for IT and E-Government as its Administrator. Also in April 2003, OMB issued its second E-Government Strategy, which laid out its approach to implementing the E-Gov Act. In August 2003, OMB issued guidance to agencies on implementing the act, and in March 2004, it issued its first annual report to Congress on implementation of the act. In its report to Congress, OMB summarized individual agency e-gov reports, described actions taken to address the act’s provisions, and provided details of the operation of the E-Government Fund. As shown in table 1, OMB and designated federal agencies have taken steps to implement the provisions of most of the major sections of Titles I and II of the E-Gov Act that we reviewed. Specifically, apart from general requirements applicable to all agencies, OMB and designated agencies have already implemented the provisions of 7 of the 18 major sections, have actions in progress to address provisions of another 7 sections, and have not fully addressed provisions of the remaining 4 sections. 
Each of these 18 sections includes many specific provisions, such as developing and issuing guidance and policies, conducting studies, initiating pilot projects, and establishing specific programs and working groups. Appendix III contains details of the specific provisions in each of these sections and their current implementation status. OMB and designated federal agencies are taking actions to implement the provisions of the act in most cases; however, the act’s requirements have not always been fully addressed. In several cases, actions taken do not satisfy the requirements of the act, or no significant action has been taken. In most cases, OMB and designated federal agencies have taken responsive action to address the act’s requirements with statutory deadlines, although these have not always been completed within stipulated time frames. For example, OMB established the Interagency Committee on Government Information in June 2003, within the deadline prescribed by the act. The committee is to develop recommendations on the categorization of government information and public access to electronic information. In another example, as required by section 211, GSA developed and issued procedures for the acquisition of IT by state and local governments through Federal Supply Schedules, which previously had been available only to federal agencies. Although the act required that the procedures be issued by January 17, 2003, GSA did not finalize the new procedures until May 2004. The agency had issued a proposed rule to implement the procedures on January 23, 2003, and an interim rule on May 7, 2003. In one case, OMB has not taken fully responsive action to address the requirements of the act. Specifically, OMB did not ensure that a study on using IT to enhance crisis preparedness and response was conducted that addresses the content specified by the act. 
Section 214 of the act required, within 90 days of enactment, that OMB ensure that this study is conducted, and it specifies the content of the study. For example, the study was required to address a research and implementation strategy for the effective use of IT in crisis response and consequence management. OMB was further required to report on findings and recommendations from this study within 2 years of the study’s initiation. According to DHS officials, a study conducted by the MITRE Corporation for Project SAFECOM fulfills this requirement. However, the MITRE study—which was chiefly an assessment of a Web tool for disseminating information about solutions to the problem of interoperability among first responders’ communications systems—did not address the content specified by the act. For example, the study did not include research regarding use of IT to enhance crisis preparedness, nor did it include a research and implementation strategy for more effective use of IT in crisis response and consequence management. Until the required elements of the study are addressed, OMB may not be able to make a fully informed response to the act’s requirement that it report on findings and recommendations for improving the use of IT in coordinating and facilitating information on disaster preparedness, response, and recovery. In another case, GSA has not taken fully responsive action to address the requirements of the act. Specifically, Section 215 required the Administrator of GSA to contract with the National Academy of Sciences (NAS) by March 17, 2003, to conduct a study on disparities in Internet access for online government services. GSA was to submit a report to Congress on the findings, conclusions, and recommendations of the study by December 2004.
GSA officials reported that they were unable to request funds as part of the fiscal year 2003 or 2004 budget cycles because the act passed in December 2002, after fiscal year 2003 had begun and the deadline for fiscal year 2004 agency budget submissions (August 2002) had passed. Although GSA officials did not provide any information regarding their actions for fiscal year 2005, they reported that the agency had requested the funds authorized in the act for the fiscal year 2006 budget cycle and was working on compiling an interim study based on existing research on disparities in access to the Internet. This compilation report is expected to be completed by December 2004 and submitted to Congress in OMB’s annual report on implementation of the act. For those provisions with future deadlines, OMB and agencies have taken action to implement the act. For example, under section 207 of the act, by December 2004, the Interagency Committee on Government Information must submit recommendations to OMB and to the Archivist of the United States on the categorization of government information and how to apply the Federal Records Act to information on the Internet and other electronic records. The committee structure, work plans, and interim products show progress toward meeting this deadline. As another example, under section 205 of the act, federal courts are required to establish Web sites by April 2005 that provide information such as location, contact information, and local and individual rules. By April 2007, these sites must also provide access to documents that are filed electronically. In June 2004, officials from the Administrative Office of the Courts reported that all 198 federal courts had established Web sites, 10 months before the April 2005 deadline. 
Court officials also reported that the individual court Web sites were making progress providing the information stipulated in the act and that 128 of the courts already allowed access to documents filed electronically, in advance of the April 2007 deadline. As with the provisions specifying deadlines, in most cases where deadlines are not specified, OMB and federal agencies have either fully implemented the provisions or demonstrated positive action toward implementation. For example, in May 2003, the E-Gov Administrator issued a memorandum detailing procedures for requesting funds from the E-Government Fund, although the act did not specify a deadline for this action. As stipulated by the act, the E-Government Fund is to be used to support projects that enable the federal government to expand its ability to conduct activities electronically. Similarly, section 208 requires the Director of OMB to develop policies and guidelines for agencies on the conduct of privacy impact assessments but does not stipulate a deadline. In September 2003, OMB issued guidance for implementing the privacy provisions of the E-Government Act, including guidance on conducting privacy impact assessments. In two instances in which statutory deadlines were not specified, OMB’s actions have not yet fully addressed the act’s requirements. Specifically: OMB has not established a program to satisfy the requirements in section 101 (44 U.S.C. 3605), which requires the Administrator to establish and promote a governmentwide program to encourage contractor innovation and excellence in facilitating the development and enhancement of electronic government services and processes. OMB officials reported that no program had been established specifically to satisfy the requirements of 44 U.S.C. 3605. 
The OIRA Information Policy and Technology (IPT) Branch Chief and other OMB officials stated that they believed the mandated program was not necessary because the functions of such a program were being accomplished through other ongoing OMB initiatives, such as the SmartBuy initiative, the Federal Business Opportunities (FedBizOpps) Web portal, and the recently inaugurated “lines of business” initiatives. Specifically, the officials stated that a recently issued request for information (RFI) for several of the lines of business initiatives addressed the act’s requirement that, under the stipulated program, announcements be issued seeking unique and innovative solutions. However, while OMB’s recent RFI represents one example of an announcement seeking innovative solutions, it does not represent a commitment to issuing such announcements and promoting innovative solutions on an ongoing basis. In contrast, establishing a dedicated program—as stipulated by the act—would represent such a commitment. Until OMB establishes such a program, it is at risk of not fully meeting the objective of this section to encourage contractor innovation and excellence in facilitating the development and enhancement of electronic government services and processes. OMB has not yet taken sufficient action to ensure the development and maintenance of a repository and Web site of information about research and development funded by the federal government, as required by section 207 of the act. In its fiscal year 2003 report to Congress, OMB reported that an analysis had been conducted of the National Science Foundation’s “Research and Development in the United States” database system and that the system was closely aligned with the act’s requirements. However, OMB also said it had not yet determined whether the National Science Foundation’s system would serve as the repository required by the act. 
Until OMB decides on a specific course of action, it may not fully meet the objective of section 207 to improve the methods by which government information, including information on the Internet, is organized, preserved, and made accessible to the public. In most cases, OMB and designated federal agencies have made progress in addressing the specific requirements of the E-Government Act of 2002. OMB and federal agencies made efforts to implement provisions before the expiration of statutory deadlines that have now passed, and they are also taking positive steps toward implementing provisions without deadlines or with deadlines in the future. Despite the overall progress, in several cases, actions taken do not satisfy the requirements of the act, or no significant action has been taken. In one case—the requirement to conduct a study on disparities in access to the Internet—the responsible agency, GSA, is taking steps to address the act’s requirements, even though a statutory deadline has already passed. In other cases, OMB has either taken actions that are related to the act’s provisions but do not fully address them, or it has not yet made key decisions that would allow actions to take place. Specifically, OMB has not ensured that a study on using IT to enhance crisis preparedness and response has been conducted that addresses the content specified by the act, established a required program to encourage contractor innovation and excellence in facilitating the development and enhancement of electronic government services and processes, or ensured the development and maintenance of a required repository and Web site of information about research and development funded by the federal government.
Until these issues are addressed, the government is at risk of not fully achieving the objective of the E-Government Act to promote better use of the Internet and other information technologies to improve government services to citizens, internal government operations, and opportunities for citizen participation in government. To ensure the successful implementation of the E-Government Act and its goal of promoting better use of the Internet and other information technologies to improve government services to citizens, internal government operations, and opportunities for citizen participation in government, we recommend that the Director, OMB, direct the Administrator of the Office of E-Government to carry out the following three actions: ensure that the report to Congress regarding the study on enhancement of crisis response required under section 214 addresses the content specified by the act; establish and promote a governmentwide program, as prescribed by 44 U.S.C. 3605, to encourage contractor innovation and excellence in facilitating the development and enhancement of electronic government services and processes; and ensure the development and maintenance of a governmentwide repository and Web site that integrates information about research and development funded by the federal government. We received oral comments on a draft of this report from representatives of OMB's Offices of Information and Regulatory Affairs, E-Government, and General Counsel. We also received oral comments from representatives of DHS's Science and Technology Directorate and GSA's Office of Governmentwide Policy. These representatives generally agreed with the content of our draft report and our recommendations and provided technical comments, which have been incorporated where appropriate. 
GSA officials also provided updated information regarding the status of the required actions under the community technology centers provision of the act (section 213), which has been incorporated in the report. Regarding our recommendation that OMB ensure that its report to Congress regarding the study on enhancement of crisis response addresses the content specified by the act (section 214), OMB officials agreed that the study conducted by Project SAFECOM did not address the requirements of the act. OMB officials stated that a new study would be initiated to meet the requirements of the act. Regarding our recommendation that OMB establish and promote a governmentwide program, as prescribed by 44 U.S.C. 3605, to encourage contractor innovation and excellence in facilitating the development and enhancement of electronic government services and processes, OMB officials reiterated their position that OMB’s ongoing activities address the substance of the required program and that establishing a separate new program could introduce delay. The officials stated that a recently issued RFI for several of the recently inaugurated “lines of business” initiatives is an example of an announcement seeking innovative solutions, as required by the act. We made changes to the report to reflect that the RFI partially addressed the act’s requirements. However, while the RFI represents one example of an announcement seeking innovative solutions, it does not represent a commitment to issuing such announcements and promoting innovative solutions on an ongoing basis. In contrast, establishing a dedicated program—as stipulated by the act—would represent such a commitment. Until OMB establishes such a program, it is at risk of not fully meeting the objective of this section to encourage contractor innovation and excellence in facilitating the development and enhancement of electronic government services and processes. 
Unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will provide copies of this report to the Director of OMB, the GSA Administrator, and the Secretary of Homeland Security. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you have any questions about this report, please contact me at (202) 512-6240 or John de Ferrari, Assistant Director, at (202) 512-6335. We can also be reached by e-mail at koontzl@gao.gov and deferrarij@gao.gov, respectively. Other key contributors to this report included Barbara Collier, Sandra Kerr, David F. Plocher, and Jamie Pressman. Our objective was to assess the implementation status of major provisions of the E-Government Act of 2002, Titles I and II. Titles I and II of the act contain numerous provisions that cover a wide range of activities across the federal government aimed at promoting electronic government. Because it was not feasible to conduct in-depth assessments of all the provisions of Titles I and II for this engagement, we conducted a high-level review of the implementation status of major provisions of the act, determining whether actions have been taken or are under way to address their major provisions. Listed below are the sections included in this review, as agreed. 
Title I, Section 101—Management and Promotion of Electronic Government Services
Title II, Section 203—Compatibility of Executive Agency Methods for Use and Acceptance of Electronic Signatures
Title II, Section 204—Federal Internet Portal
Title II, Section 205—Federal Courts
Title II, Section 206—Regulatory Agencies
Title II, Section 207—Accessibility, Usability, and Preservation of Government Information
Title II, Section 208—Privacy Provisions
Title II, Section 209—Federal IT Workforce Development
Title II, Section 211—Authorization for Acquisition of Information Technology by State and Local Governments through Federal Supply Schedules
Title II, Section 212—Integrated Reporting Study and Pilot Projects
Title II, Section 213—Community Technology Centers
Title II, Section 214—Enhancing Crisis Management through Advanced Information Technology
Title II, Section 215—Disparities in Access to the Internet
Title II, Section 216—Common Protocols for Geographic Information Systems
We did not include section 201, which provides definitions, or section 202, which prescribes general requirements applying to all major federal agencies, in our review. Similarly, for the sections we reviewed, we did not assess the implementation of general requirements applying to all federal agencies, such as sections 203(b), which addresses agency implementation of electronic signatures; 207(e)(4), which requires annual agency reporting on accessibility, usability, and preservation of government information; 207(f)(2), stipulating agency requirements for making government information available on the Internet or by other means; 207(g)(2), requiring agencies to provide information for the repository on federal research and development; 208(b)(1), which stipulates agency requirements related to privacy impact assessments; and 209(b)(2) and (4), stipulating requirements for agency information technology training programs. For section 206, we assessed governmentwide implementation by reviewing the status of the e-Rulemaking initiative. 
Finally, we did not assess section 210, which concerns share-in-savings contracts, since this section mandates a separate, more in-depth GAO review on the implementation and effects of this provision at a future date. To assess the implementation status of the major provisions, we interviewed cognizant officials from the Office of Management and Budget (OMB), the General Services Administration (GSA), and other agencies that have specific responsibilities under Title II. For several sections, the act requires specific actions, such as the initiation of pilot projects, establishment of interagency workgroups or committees, development and issuance of guidance/policies, conduct of a study, or issuance of reports. The majority of these actions include statutory deadlines for completion. For provisions with deadlines that have passed, we determined whether the requirement had been met. For provisions with deadlines that had not yet expired or that had no explicit deadline attached, we obtained information on actions taken and progress made to date. We analyzed relevant documentation, including OMB's fiscal year 2003 report to Congress on implementation status of the E-Gov Act. We determined the implementation status of the major provisions by comparing the information we obtained to the requirements established in the act. We assessed the overall status of the major sections according to the following three categories:
1. Implemented. A section was assessed as implemented if the responsible agency had completed responsive actions to address each of the section's requirements that we reviewed.
2. In progress. We assessed status as "in progress" if responsive action was under way to address each of the section's requirements, even if statutory deadlines had not been fully met.
3. Not fully addressed. 
We assessed a section’s status as not fully addressed when an agency had taken actions that did not meet the requirements specified in the act or had not taken action on requirements with imminent or expired deadlines. Our work was conducted from April 2004 to September 2004, in accordance with generally accepted government auditing standards. For more than 20 years before the enactment of the E-Government Act, the management of federal information and information technology (IT) was governed by a number of issue-specific laws and one law that coordinates across those issue areas. Examples of the issue-specific laws are the Privacy Act, which governs the protection of personal privacy in government records; the Freedom of Information Act, which provides for public access to government information; and the Clinger-Cohen Act, which applies investment control concepts to IT management. The coordinating law is the Paperwork Reduction Act (PRA). Like the E-Government Act, the Paperwork Reduction Act gives management responsibilities to agencies and oversight responsibilities to the Office of Management and Budget (OMB). The PRA, as first enacted in 1980 and as significantly revised in 1995, established the concept of “information resources management” (IRM) to coordinate information and IT management functions throughout the information life cycle, from collection through disposition. 
The PRA established the OMB Office of Information and Regulatory Affairs (OIRA) for governmentwide oversight and stated that the Administrator of OIRA should "serve as principal adviser to the Director on Federal information resources management policy." Under the PRA IRM umbrella, OIRA is responsible for overseeing information collection and the control of paperwork, including review of agency information collection proposals; statistical policy and coordination; records management, including oversight of compliance with the Federal Records Act; privacy, including oversight of compliance with the Privacy Act; information security, including oversight of compliance with the Federal Information Security Management Act; information disclosure, including oversight of compliance with the Freedom of Information Act; and information technology, including oversight of the Clinger-Cohen Act and promoting the use of information technology "to improve the productivity, efficiency, and effectiveness of Federal programs, including through dissemination of public information and the reduction of information collection burdens on the public." The E-Government Act of 2002 added to OMB's statutory PRA duties with requirements to promote "electronic government," defined as government use of Web-based Internet applications and other information technologies to enhance access to and delivery of government information and services and to improve government operations. To oversee these electronic government activities, the E-Government Act created the OMB Office of Electronic Government, to be headed by an Administrator. The E-Gov Administrator's responsibilities include assisting the Director in carrying out the act and other e-government initiatives and working with the OIRA Administrator and other OMB offices to oversee implementation of e-government under the E-Government Act and other laws, including the PRA. 
Both the OIRA Administrator and the E-Government Administrator report to the OMB Deputy Director for Management, who exercises all functions of the OMB Director with regard to information policy and other management functions under 31 U.S.C. 503(b), as enacted by the Chief Financial Officers Act of 1990 (Pub. L. 101-576, Nov. 15, 1990). Titles I and II of the E-Government Act of 2002 include provisions covering a wide range of activities across the federal government aimed at promoting electronic government. The Office of Management and Budget (OMB) and other federal agencies likewise have a variety of activities under way that address these provisions. This appendix summarizes the status of implementation of the act’s requirements that we reviewed. As noted in appendix I, we did not review all sections of Titles I and II, nor did we review the implementation of general requirements applying to all federal agencies. Section 3602 of Title 44 of the U.S. Code establishes the Office of Electronic Government (E-Government) within OMB, which is to be headed by a presidentially appointed Administrator. The Administrator is required to assist both the OMB Director and the Deputy Director for Management, as well as work with the Administrator of the Office of Information and Regulatory Affairs (OIRA) in setting strategic direction for and assisting in implementing electronic government. In addition, the Director is to ensure that there are adequate resources in OMB to carry out its functions under the act. OMB has taken responsive action to address the requirements of this section. The Office of E-Government was established on April 17, 2003, with an Administrator appointed on the same day. OMB officials stated that this office, working closely with the OIRA Administrator and OIRA’s IPT Branch, has taken steps to carry out the functions specified in the act. 
For example, to set strategic direction for electronic government, OMB issued an E-Government Strategy in April 2003. OMB officials said they plan to issue an update to the E-Government Strategy during the fall of 2004. OMB had been working on electronic government issues before the E-Gov Act was passed and the Office of E-Government officially established. For example, OMB issued its first E-Government Strategy in February 2002, which designated a number of high-profile initiatives to lead the government's transformation to e-government. This work was performed through OIRA's IPT Branch and supervised by the Associate Director for Information Technology and E-Government (a position that was the predecessor to the current E-Government Administrator position). OMB officials reported that under the current organizational structure, the E-Government Administrator works collaboratively with the OIRA Administrator (primarily through working with OIRA's IPT Branch) to carry out the requirements of the act. OMB officials cited the agency's E-Government Strategies, along with its oversight of the e-government initiatives, as examples of setting strategic direction for e-government. Regarding resources for carrying out the functions of the act, OMB officials reported that as of June 14, 2004, the Office of E-Government consisted of eight full-time positions, including the Administrator, Deputy Administrator, Special Assistant, Chief Architect, and four Portfolio Managers. In addition, four employees on detail from other agencies provide further assistance. Finally, there are 12 employees in OIRA's IPT Branch who also support the activities of the Office of E-Gov. Accordingly, the IPT Branch Chief reports both to the Administrator of the Office of E-Government and to the Administrator of OIRA. 
Section 3603 of Title 44 codifies the establishment, structure, and responsibilities of the Chief Information Officers (CIO) Council, which was established on July 16, 1996, by Executive Order 13011. The CIO Council’s responsibilities include developing recommendations for information and information technology (IT) management policies, procedures, and standards; sharing management best practices; and assessing and addressing the needs of the federal government’s IT workforce. The CIO Council has taken responsive action to address the requirements of this section of the act. Membership on the CIO Council includes CIOs from federal executive agencies, the OMB Deputy Director for Management, the E-Government Administrator, and the OIRA Administrator. The E-Government Administrator is to lead the council on behalf of the Deputy Director for Management, who serves as the council chair. According to its strategic plan for fiscal year 2004, the CIO Council’s structure and activities are aligned with the applicable provisions of the E-Gov Act. (Fig. 1 shows the organization of the CIO Council.) For example, the Best Practices Committee has published recommendations and experiences on the CIO Council’s Web site (www.cio.gov) and contributed to the development of resources such as its report on Lessons Learned on Information Technology Performance Management, which is also available on the Web site. In addition, the Architecture and Infrastructure Committee has provided models for a component-based architecture, which assists agencies in identifying opportunities to share information resources. Furthermore, the Workforce and Human Capital for IT Committee is working with the Office of Personnel Management (OPM) to address issues regarding recruitment and development of the federal IT workforce. 
Section 3604 of Title 44 establishes the E-Government Fund, which is to be used to support projects that enable the federal government to expand its ability to conduct activities electronically. The Director of OMB, assisted by the E-Government Administrator, approves which projects will receive support from the E-Government Fund. The E-Government Administrator is required to establish procedures for accepting and reviewing proposals for funding. In addition, the Director of OMB is required to report on the operation of the fund in OMB’s annual report to Congress on the implementation status of the E-Government Act. GSA is responsible for administration of the fund and is required to submit to Congress a notification of how the funds are to be allocated to projects approved by OMB. Table 2 summarizes the actions required by this provision. OMB has taken responsive action to address the requirements of this section. In May 2003, the agency issued a memorandum detailing procedures for requesting funds from the E-Government Fund. The memorandum establishes a process for submitting proposals and details the process by which OMB will review proposals. In March 2004, OMB submitted its first annual report to Congress on implementation of the E-Government Act. As required by 44 U.S.C. 3604, this report detailed the operations of the E-Government Fund for fiscal years 2002 to 2003. Also, in accordance with its responsibilities in administering the fund, GSA submitted notifications and descriptions to Congress on how the e-gov funds were to be allocated and spent for the approved projects. Table 3 summarizes the projects funded for fiscal years 2002 to 2004, as reported by GSA’s notifications to Congress. In fiscal years 2003 and 2004, the amount requested by OMB for the fund was close to the amount authorized by the act, yet in the fiscal year 2005 budget, $5 million was requested although $100 million was authorized. 
An OMB official stated that OMB requested significantly less than what was authorized by the act because it was seeking authority in fiscal year 2005 to allow surplus receipts in the General Supply Fund to be spent on e-government projects. Section 3605 of Title 44 requires the Administrator of the Office of E-Government to establish and promote a governmentwide program to encourage contractor innovation and excellence in facilitating the development and enhancement of e-government services and processes. Under this program, the E-Government Administrator is required to issue announcements seeking innovative solutions as well as convene a multiagency technical assistance team to screen proposals. The E-Government Administrator is to either consider the screened proposals for funding from the E-Government Fund or forward the proposals to the appropriate executive agencies. Table 4 summarizes the actions required by this provision. OMB has not fully addressed the requirements of this section of the act. OMB officials reported that no program had been established specifically to satisfy the requirements of 44 U.S.C. 3605. The OIRA IPT Branch Chief and other OMB officials stated that they believed the mandated program was not necessary because the functions of such a program were being accomplished through other ongoing OMB initiatives, such as the SmartBuy initiative, the Federal Business Opportunities (FedBizOpps) Web portal, and the recently inaugurated “lines of business” initiatives. Specifically, the officials stated that a recently issued request for information (RFI) for several of the lines of business initiatives addressed the act’s requirement that, under the stipulated program, announcements be issued seeking unique and innovative solutions. 
However, while OMB’s recent RFI represents one example of an announcement seeking innovative solutions, it does not represent a commitment to issuing such announcements and promoting innovative solutions on an ongoing basis. In contrast, establishing a dedicated program—as stipulated by the act— would represent such a commitment. Until OMB establishes such a program, it is at risk of not fully meeting the objective of this section to encourage contractor innovation and excellence in facilitating the development and enhancement of electronic government services and processes. Section 3606 of Title 44 requires the Director of OMB to develop an annual e-government status report and submit it to Congress (see table 5). The report is required to summarize information reported by agencies, describe compliance with other goals and provisions of the act, and detail the operation of the E-Government Fund. OMB has taken responsive action to address the requirements of this section. The agency submitted its first annual E-Government Act status report to Congress in March 2004. The report was based on individual agency e-government reports submitted to OMB in December 2003 and supplemented by fiscal year 2005 agency budget submissions, as appropriate. OMB’s e-government status report contained the required elements described above. Section 203 of the E-Government Act addresses implementation of electronic signatures to enable secure electronic transactions with the government. The provision in this section that we reviewed directs the GSA Administrator, supported by the Director of OMB, to establish a framework that allows for efficient interoperability among executive agencies when using electronic signatures, including processing of digital signatures. Table 6 summarizes the actions required by this provision. 
GSA, with the assistance of OMB and the National Institute of Standards and Technology (NIST), has responsive actions under way to address the requirements of this section. In December 2003, the Director of OMB issued guidance on electronic authentication to assist agencies in determining their authentication needs for electronic transactions, including the use of electronic signatures. The guidance directs agencies to conduct e-authentication risk assessments on electronic transactions to ensure a consistent approach across government. As a follow-up to OMB’s guidance, in June 2004, NIST issued technical guidance on requirements for electronic transactions requiring authentication. OMB reported in its fiscal year 2003 e-government report to Congress that the activities of the e-Authentication initiative, managed by GSA, begin to meet the requirements of section 203 in establishing a framework to allow interoperability. The e-Authentication initiative is intended to minimize the burden on businesses, the public, and government when obtaining Internet services by providing a secure infrastructure for online transactions. The initiative is currently focused on setting a framework of policies and standards for agencies to use in procuring commercial products to meet their authentication needs. In July 2004, the initiative released documentation on its technical approach, which is based on an architectural framework that allows multiple protocols and federation schemes to be supported over time. The technical approach includes provisions for the use of electronic signatures when conducting electronic transactions. Section 204 of the E-Government Act requires the Director of OMB to work with the GSA Administrator to maintain and promote an integrated Internet-based system that provides the public with access to government information and services (see table 7). 
To the extent practicable, the federal Internet portal is to be designed and operated according to specific criteria; for example, the portal is to provide information and services directed to key groups (e.g., citizens, businesses, other governments), endeavor to make Internet-based services relevant to a given citizen activity available from a single point, integrate information according to function or topic, and consolidate access to federal information with Internet-based information and services provided by state, local, and tribal governments. GSA has taken responsive action to address the requirements of this section. As indicated in OMB’s fiscal year 2003 report to Congress, FirstGov.gov serves as the federal Internet portal prescribed under section 204. FirstGov.gov was launched in September 2000 as an interagency initiative, managed by GSA and supported and assisted by OMB and federal agencies. With this support and assistance, GSA established the portal to provide the public with access to U.S. government information and services, and GSA has maintained and promoted it since that time. The portal’s design and operation generally adhere to the criteria established by section 204. For example, one of the ways the portal organizes its content is by key group, including citizens, businesses, nonprofits, federal employees, and other governments (state, local, and tribal). FirstGov.gov also organizes content according to online services rather than organization; this allows the public to conduct business with the government via the Internet without having to know how the government is organized. According to the FirstGov.gov program manager, many citizens do not know what services are federal versus state or local, and so FirstGov.gov searches not only federal Web sites, but also state sites. 
In addition, through its browse feature, FirstGov.gov links to state, tribal, and local government home pages, as well as state services such as departments of motor vehicles and state lottery pages. Table 8 provides usage statistics for FirstGov.gov. GSA has several activities under way to promote the portal, including a nationwide television public service advertising campaign that began in June 2003 to educate citizens on how to find and use the information on FirstGov.gov. GSA officials estimate that the campaign has aired in 62 percent of the nation's television markets. In June 2004, GSA's Office of Citizen Services and Communications launched a public service advertising campaign to encourage citizens to take advantage of federal information and services through FirstGov.gov and 1-800-FED-INFO. The campaign includes a television public service announcement, prerecorded radio messages, and print advertisements for magazines and newspapers. Section 205 of the E-Government Act promotes public Internet access to federal court information. By April 2005, individual courts are required to establish and maintain Web sites to provide public access to specific types of information, such as location and contact information, court rules, case docket information, and opinions. In addition, the courts are required to make any documents filed electronically available to the public by April 2007. Privacy and security rules are to be established by the Supreme Court to protect electronically filed documents; however, the Judicial Conference may issue interim rules until the Supreme Court issues final rules. Finally, individual courts may defer compliance with the requirements of section 205 by submitting a notification to the Administrative Office of the Courts. Table 9 summarizes the actions required by this provision. 
The federal courts have made progress establishing individual Web sites for the circuit, district, appellate, and bankruptcy courts, as required by this section of the act. Officials from the Administrative Office of the Courts reported that as of June 2004, all 198 courts had established individual Web sites, 10 months before the April 2005 deadline. Court officials further reported that individual courts were making progress providing the information stipulated in the act on their Web sites. Court officials reported that as of August 2004, 128 of the 198 courts had provided public Internet access to their electronic filings, in advance of the April 2007 deadline. In addition, court officials reported that district and bankruptcy courts will provide public Internet access to electronic filings by September 2005, and appellate courts will provide such access by 2006. To address privacy and security concerns, in September 2003, the Judicial Conference adopted a policy permitting remote public access to criminal case file documents to be the same as public access at courthouses, with a requirement that filers remove personal data identifiers from documents filed electronically or on paper. While the act does not specify a deadline for the Supreme Court to issue final rules to protect privacy and security, federal court officials expect the Supreme Court to prescribe such rules by 2007. As required by the act, 1 year after the final rules take effect, and then every 2 years thereafter, the Judicial Conference will be responsible for submitting a report to Congress on the adequacy of the final rules to protect privacy and security. To date, no notifications deferring compliance with the requirements of section 205 have been submitted to the Administrative Office of the Courts. 
However, the Judicial Conference submitted a report to Congress, dated April 2, 2004, noting that because the statutory deadlines for the establishment of the individual courts’ Web sites and access to electronic filings (April 2005 and April 2007, respectively) have not passed, there are no notifications to report. Section 206 of the E-Government Act is aimed at enhancing public participation in government by electronic means and improving performance in the development and issuance of agency regulations through the use of information technology. This section, in part, calls for agencies, to the extent practicable, to accept submissions electronically (e.g., comments submitted on proposed rules) and to make electronic dockets—the full set of material related to a rule—publicly available online. The Director of OMB is charged with establishing a timetable for agencies to implement these requirements in its first annual report to Congress on implementation of the act. Table 10 summarizes the actions required by this provision. OMB and the Environmental Protection Agency (EPA) have actions under way to address the rulemaking requirements of this section. OMB designated the e-Rulemaking initiative, managed by EPA, as the vehicle for addressing these requirements of section 206. In January 2003, www.regulations.gov was launched, which enables citizens and businesses to search for and respond electronically to proposed rules open for comment in the Federal Register. The ability to search full rulemaking dockets—the complete set of publicly available material (i.e., economic analyses, models, etc.) associated with a proposed rule—is not yet available; its availability is contingent on the development of a governmentwide electronic docket system. 
In its fiscal year 2003 report to Congress, OMB established a goal of completing migrations to the common federal docket management system by September 2005, with agencies beginning migrations to the central system in September 2004. According to the e-Rulemaking director, this timetable is contingent on funding. The director stated that an operational version of the electronic docketing application would be ready by September 2005.

Section 207 of the E-Government Act requires the Director of OMB to establish an Interagency Committee on Government Information (ICGI) to develop recommendations on the categorization of government information and public access to electronic information. The Director of OMB is to issue guidance for agency Web sites and establish a public domain directory of federal government Web sites. Further, OMB is required to ensure the development and maintenance of a governmentwide repository and Web site that integrates information about research and development funded by the federal government. The ICGI is to submit recommendations to the Director of OMB on policies to improve reporting and dissemination of information related to research performed by federal agencies and federally funded development centers. Table 11 summarizes the actions required by this provision.

Although OMB and the ICGI have taken steps toward complying with many of the provisions of this section, no significant action has been taken on one of them. Among the steps toward compliance is OMB's establishment of the ICGI on June 17, 2003; the committee consists of representatives from the National Archives and Records Administration, representatives of agency CIOs, and other relevant officers from the executive branch. The ICGI consists of an Executive Committee under the auspices of the CIO Council, as well as four working groups: Categorization of Government Information, Electronic Records Policy, Web Content Management, and E-Gov Act Access.
The E-Gov Act Access working group was tasked with addressing the requirements of section 213 (community technology centers) and section 215 (disparities in access to the Internet). The Executive Committee is co-chaired by OIRA's IPT Branch Chief and the Department of Commerce CIO.

ICGI's working groups have made progress toward meeting the deadlines for developing the various recommendations prescribed in section 207. In August 2004, the Categorization of Government Information working group published for public comment a recommendation for search interoperability, in preparation for the required December 2004 submission of recommendations to OMB. In June 2004, the Electronic Records Policy working group, tasked with developing recommendations on the application of the Federal Records Act to government information on the Internet and other electronic records, released a report on barriers to effective management of government information on the Internet and other electronic records. The Web Content Management working group is assisting OMB with its responsibilities to issue guidance on standards for agency Web sites and establish a public domain directory of federal government Web sites. In June 2004, this working group submitted a report to OMB on recommended policies and guidelines for federal public Web sites. As for establishment of the public domain directory and subject taxonomies, the working group intends to build on the existing directory and taxonomies of the federal Internet portal prescribed under section 204.

OMB has not yet taken significant action to ensure the development and maintenance of a repository and Web site of information about research and development funded by the federal government, as required by the act.
In its fiscal year 2003 report to Congress, OMB reported that an analysis had been conducted of the National Science Foundation's Research and Development in the United States (RaDiUS) database system and that the system was closely aligned with the act's requirements. However, OMB also said it had not yet determined whether RaDiUS would serve as the repository required by the act. Until OMB decides on a specific course of action, it may not fully meet the objective of section 207 to improve the methods by which government information, including information on the Internet, is organized, preserved, and made accessible to the public. According to the executive sponsor of the Web Content Standards working group, the ICGI has addressed the requirement to make recommendations on policies to improve reporting and dissemination of federal research results in its June 2004 report on recommended policies and guidelines for federal public Web sites.

Section 208 of the E-Government Act is aimed at ensuring sufficient protection for the privacy of personal information as agencies implement electronic government. Section 208 requires the agencies to prepare a privacy impact assessment (PIA), which is an analysis of how information is handled in order to determine risks and examine protections for systems that collect information in a personally identifiable form (that is, information that could identify a particular person). Also, the act requires the Director of OMB to develop and issue guidance for completing the PIA. In addition, the Director of OMB is to develop guidance for privacy notices on agency Web sites accessed by the public. Finally, section 208 states that the Director of OMB is to issue guidance requiring agencies to translate privacy policies into a standardized machine-readable format. Table 12 summarizes the deliverables required by this provision.

OMB has taken responsive action to address the requirements of this section.
In September 2003, OMB issued guidance on implementing the privacy provisions of section 208 that included requirements for PIAs as well as privacy policies for Web sites. OMB requires that agencies report compliance with the PIA and Web site privacy policy requirements in their agency-specific annual e-gov reports. In addition, OMB has built privacy compliance requirements into the budget process by requiring agencies to conduct a PIA for each major information technology system. Other efforts made by OMB to oversee agency PIA development include speaking engagements, agency-specific meetings, and workshops. Rules for agency Web site privacy policies, including notices, were also outlined in OMB's privacy implementation guidance and took effect on December 15, 2003. Finally, the guidance document included requirements for translating Web site privacy policies into a standardized machine-readable format.

Section 209 of the E-Government Act requires the Office of Personnel Management (OPM), in consultation with OMB, GSA, and the CIO Council, to conduct activities aimed at improving the skills of the federal IT workforce. OPM is required to develop governmentwide policies so that executive agencies can promote the development of performance standards for training as well as uniform implementation of workforce development requirements. OPM is also required to submit a report to Congress on the establishment of an IT training program. Additionally, OPM is required to establish procedures for administration of an IT Exchange Program, report to Congress on existing IT Exchange Programs, and submit biennial reports to Congress on the operation of such programs. Table 13 summarizes the actions required by this provision.

OPM, GSA, and the CIO Council all have efforts under way in IT workforce development that address the requirements of this section of the act.
These efforts include baseline activities such as surveying the personnel needs of the federal government related to IT as well as information resources management. In a June 2004 report, we highlighted that the CIO Council's Workforce and Human Capital for IT Committee, in consultation with OPM and OMB, developed the Clinger-Cohen Assessment (CCA) survey. This survey was conducted via the Internet in September 2003 to collect information regarding federal employee IT competencies, skills, certifications, and specialized job activities. The data collected by the CCA survey provided agencies with an "as is" IT workforce baseline for use in developing IT training programs that would close the gap between the current and necessary federal IT skills. OPM officials reported that the survey would be performed every year to give agencies a measure of their progress in closing skills gaps.

As we reported in June 2004, OPM has not yet issued policies that encourage the executive agencies to promote the development of performance standards for workforce training. However, OPM has established milestones for the development and issuance of such policies and estimates that guidance will be communicated via the CIO Council and OPM's Human Capital Officers in November 2004.

In August 2004, OPM issued its report on the establishment of a governmentwide IT training program. The report establishes an IT framework based on the Clinger-Cohen "Core Competencies" developed by the CIO Council. The E-Government Act's enactment on December 17, 2002, had left OPM approximately 2 weeks before the statutory deadline to prepare the required report; consequently, OPM officials had instead provided an interim report to Congress in June 2003 that gave a descriptive view of existing governmentwide IT training programs, noting that a more comprehensive report would be provided at a later date.

In January 2004, OPM published a proposed rule in the Federal Register on the establishment of an IT Exchange Program.
OPM officials reported that they reviewed public comments and drafted a final rule but could not give an estimate as to when the final rule would be published. As required by the act, OPM provided Congress with a report on existing exchange programs in December 2003. In addition, OPM submitted reports to Congress in April 2003 and April 2004, both of which stated that the IT Exchange Program had not yet been established.

Section 211 of the E-Government Act provides for the use of Federal Supply Schedules by state and local governments for the acquisition of IT. The GSA Administrator is charged with establishing procedures to govern the use of Federal Supply Schedules by state and local governments. The E-Government Administrator is required to report to Congress on the implementation and effects of state and local government use of these schedules. Table 14 summarizes the actions required by this provision.

GSA has taken responsive action to address the requirements of this section. On May 18, 2004, GSA issued its final rule authorizing acquisition of IT by state and local governments through Federal Supply Schedules. Although the act required that the procedures be issued by January 17, 2003, GSA did not finalize the new procedures until May 2004. The agency had issued a proposed rule to implement the procedures on January 23, 2003, and an interim rule on May 7, 2003. GSA officials noted that the use of these schedules on the part of vendors as well as state and local governments is voluntary. The deadline for the required implementation report has not yet passed; OMB officials reported that they plan to report to Congress in December 2004.

Section 212 of the E-Government Act requires the Director of OMB to oversee a study and report to Congress on progress toward integrating federal information systems across agencies.
In addition, in order to provide input to the study, the Director of OMB is required to designate up to five pilot projects to encourage integrated collection and management of data and interoperability of federal information systems. Table 15 summarizes the actions required by this provision.

OMB has actions under way to address the requirements of this section. In March 2004, OMB announced the launch of a task force to examine five government lines of business: case management, federal health architecture, grants management, human resource management, and financial management. OMB officials stated that the lines of business initiatives also serve as the pilot projects required under section 212. Similar to the management of the 25 e-government initiatives, the lines of business initiatives are to be led by agencies designated as managing partners. The managing partners for all five initiatives are to submit business cases in September 2004 for the fiscal year 2006 budget cycle. OMB officials also reported that the study they are required to conduct under section 212 is ongoing; the deadline for this report has not yet passed. OMB officials stated that their study will address the lines of business initiatives, as well as the Federal Enterprise Architecture. OMB officials said they plan to report on the results of the study via the annual E-Government Act implementation report to Congress.

Section 213 of the E-Government Act requires the Administrator of the Office of E-Government to ensure that a study is conducted to evaluate the best practices of community technology centers, which provide Internet access to the public, and submit a report to Congress on the findings of this study by April 2005. In addition, this section requires the E-Government Administrator, in consultation with other agencies, to develop an online tutorial that explains how to access government information and services on the Internet.
Table 16 summarizes the actions required by this provision.

OMB and other agencies have actions under way to address the requirements of this section of the act. According to a GSA official, OMB assigned the responsibility for section 213 to a newly created E-Gov Act Access working group established under the Interagency Committee on Government Information. The E-Gov Act Access working group consists of a cross section of agencies with an interest in access issues and includes representation from agencies such as the Department of Education, the Government Printing Office, and the Department of Housing and Urban Development. According to the working group's co-chair, the group plans to meet the April 2005 statutory deadline for the required study evaluating the best practices of community technology centers. Additionally, the group plans to consider options for developing an online tutorial in December 2004.

Section 214 of the E-Government Act addresses the coordination and availability of information across multiple access channels and improving the use of IT in disaster preparedness, response, and recovery. A study is required to evaluate the use of IT for the enhancement of crisis preparedness, response, and consequence management of natural and manmade disasters. Also required is a report to Congress on the findings of the study as well as recommendations. Finally, the Administrator of the Office of E-Government is to initiate pilot projects in cooperation with the Federal Emergency Management Agency (FEMA) or report other activities to Congress that involve maximizing the use of IT in disaster management. Table 17 summarizes the actions required by this provision.

OMB and the Department of Homeland Security (DHS) have not yet taken actions that are fully responsive to the requirements of this section of the act. A study provided by DHS officials to address the enhancement of crisis response did not contain the required contents stipulated in section 214.
The study was conducted by the MITRE Corporation for Project SAFECOM in December 2002 and completed in March 2003. DHS officials stated that the study addresses the section 214 requirement to conduct a study on enhancement of crisis response. However, our analysis indicates that the study in general did not address the use of IT to enhance crisis preparedness, response, and consequence management of natural and man-made disasters, as required by section 214. Specifically, the study did not include a research and implementation strategy for effective use of IT in crisis response and consequence management. The act states that this strategy should include the more effective use of technologies; management of IT research initiatives; and incorporation of research advances into the information communication systems of FEMA and other federal, state, and local agencies responsible for crisis preparedness, response, and consequence management. Furthermore, the study did not discuss opportunities for research and development on enhanced technologies for potential improvement as determined during the course of the study. OMB officials agreed that the study conducted by Project SAFECOM did not address the requirements of the act. OMB officials stated that a new study would be conducted to meet these requirements. Until the required elements of the study are addressed, OMB may not be able to make a fully informed response to the act’s requirement that it report on findings and recommendations for improving the use of IT in coordinating and facilitating information on disaster preparedness, response, and recovery. According to OMB officials, pilot projects expected to enhance the goal of maximizing the use of IT are not planned. 
Instead, the focus of OMB's efforts has been on other activities, such as the Disaster Management and SAFECOM programs, which work with industry communities to improve the requirements and develop standards for information sharing and coordination of responsiveness. OMB officials stated that they would determine at a future time whether initiation of pilot projects is necessary.

Section 215 of the E-Government Act requires the GSA Administrator to contract with the National Academy of Sciences (NAS) to conduct a study on disparities in Internet access for online government services. GSA is to submit a report to Congress on the findings, conclusions, and recommendations of the study by December 2004. The report is required to address (1) how disparities in Internet access influence the effectiveness of online government services, (2) how the increase in online government services is influencing the disparities in Internet access and how technology development or diffusion trends may offset such adverse influences, and (3) related societal effects arising from the interplay of disparities in Internet access and the increase in online government services. Table 18 summarizes the actions required by this provision.

GSA has not fully addressed the requirements of this section, because it has not yet commissioned the required NAS study on disparities in Internet access for online government services. Although the act authorizes $950,000 to be spent on the study and report, a GSA official stated that no money had yet been appropriated. GSA officials reported that they were unable to request funds as part of the fiscal year 2003 or 2004 budget cycles because the act passed in December 2002, after fiscal year 2003 had begun and the deadline for fiscal year 2004 agency budget submissions (August 2002) had passed.
Although GSA officials did not provide any information regarding their actions for fiscal year 2005, they reported that the agency had requested the funds authorized in the act for the fiscal year 2006 budget cycle. Pending appropriation of the requested funds, GSA plans to enter into a contract with NAS for the study, but notes that the report on the study will not be completed within the statutory deadline of December 2004. In keeping with the purpose of this section, GSA officials and the Interagency Committee on Government Information's E-Gov Act Access working group are working on compiling an interim study based on existing research on disparities in access to the Internet. The existing research includes, for example, Hart-Teeter poll results and Pew Internet and American Life Project studies. This compilation report is expected to be completed by December 2004 and submitted to Congress in OMB's annual report to Congress on the implementation status of the act.

The purpose of section 216 of the E-Government Act is to reduce redundant data collection and information and promote collaboration and use of standards for government geographic information (see table 19). An interagency group is to establish common protocols that maximize the degree to which unclassified geographic information from various sources can be made electronically compatible and accessible, as well as promote the development of interoperable geographic information systems technologies.

A variety of actions are under way to address the requirements of this section of the act. According to OMB, the interagency group referred to in the act is the Federal Geographic Data Committee (FGDC), which was organized in 1990 under OMB Circular A-16. The FGDC is intended to promote the coordinated use, sharing, and dissemination of geospatial data on a national basis.
The FGDC is chaired by the Secretary of the Department of Interior, with the Deputy Director for Management at OMB serving as Vice-Chair, and is made up of representatives from 19 cabinet- level and independent federal agencies. OMB also established the Geospatial One-Stop initiative in 2002 to facilitate the development of common protocols for geographic information systems by bringing together various stakeholders to coordinate effective and efficient ways to align geographic information. In addition, the purpose of the Geospatial One-Stop is to make it faster, easier, and less expensive for all levels of government to obtain necessary geospatial data in order to make programmatic decisions. Actions taken by FGDC to promote collaboration include creating a standards working group made up of federal and state agencies, academia, and the private sector. The working group has developed, and FGDC has endorsed, a number of different geospatial standards, including metadata standards, and it is currently developing additional standards. The committee’s working group also coordinates with national and international standards bodies to ensure that potential users support its work.
The E-Government Act (E-Gov Act) of 2002 was enacted with the general purpose of promoting better use of the Internet and other information technologies to improve government services for citizens, internal government operations, and opportunities for citizen participation in government. Among other things, the act specifically requires the establishment of the Office of Electronic Government within the Office of Management and Budget (OMB) to oversee implementation of the act's provisions and mandates a number of specific actions, such as the establishment of interagency committees, completion of several studies, submission of reports with recommendations, issuance of a variety of guidance documents, establishment of new policies, and initiation of pilot projects. Further, the act requires federal agencies to take a number of actions, such as conducting privacy impact assessments, providing public access to agency information, and allowing for electronic access to rulemaking proceedings. OMB has linked several of the act's provisions to ongoing e-government initiatives that it has sponsored. While some deadlines specified in the act have passed, many required actions do not have statutory deadlines or have deadlines that have not yet passed.

This report responds to a congressional request that we review the implementation status of major provisions from Titles I and II of the E-Gov Act.

In most cases, OMB and federal agencies have taken positive steps toward implementing the provisions of Titles I and II of the E-Gov Act. For example, OMB established the Office of E-Government, designated its Assistant Director for Information Technology (IT) and E-Government as the office's Administrator in April 2003, and published guidance to federal agencies on implementing the act in August 2003. In most cases, OMB and federal agencies have taken action to address the act's requirements within stipulated time frames.
For example, OMB established the Interagency Committee on Government Information in June 2003, within the deadline prescribed by the act. The committee is to develop recommendations on the categorization of government information and public access to electronic information. Even when deadlines have not yet passed, in all but one case OMB and agencies have taken action to implement the act. For example, federal courts have established informational Web sites in advance of the April 2005 deadline specified by the act, and court officials are taking steps to ensure that the Web sites fully meet the criteria stipulated by the act. Similarly, in most cases where deadlines are not specified, OMB and federal agencies have either fully implemented the provisions or demonstrated positive action toward implementation. For example, in May 2003, the E-Government Administrator issued a memorandum detailing procedures for requesting funds from the E-Government Fund, although the act did not specify a deadline for this action.

Although the government has made progress in implementing the act, the act's requirements have not always been fully addressed. Specifically, OMB has not ensured that a study on using IT to enhance crisis preparedness and response has been conducted that addresses the content specified by the act, established a required program to encourage contractor innovation and excellence in facilitating the development and enhancement of electronic government services and processes, or ensured the development and maintenance of a required repository and Web site of information about research and development funded by the federal government. Further, GSA has not contracted with the National Academy of Sciences (NAS) to conduct a required study on disparities in Internet access for online government services.
In the first three cases, OMB has either taken actions that are related to the act's provisions but do not fully address them (in the first and second cases) or has not yet made key decisions that would allow actions to take place (in the third case). In the last case, GSA is seeking funding for the required study in fiscal year 2006. Until these issues are addressed, the government may be at risk of not fully achieving the objective of the E-Government Act to promote better use of the Internet and other information technologies to improve government services and enhance opportunities for citizen participation in government.
Consistent with the national trend toward managed care, the number of Medicare beneficiaries enrolled in HMOs has grown significantly—from about 1 million in 1987 to about 4 million in 1996. This growth represents an increase from about 3 percent of all Medicare beneficiaries to about 10 percent. About 90 percent of Medicare beneficiaries enrolled in managed care are in risk-contract HMOs. The largest growth in Medicare managed care enrollment has occurred in the risk program. (See fig. 1.) The number of HMOs in the risk program fluctuated somewhat in the program's first 5 years, but since 1992 the number of risk HMOs has grown steadily. (See fig. 2.) As of November 1996, HCFA had entered into 238 risk contracts. Most beneficiaries have at least one risk HMO available in their area, and, in some markets, beneficiaries can choose from as many as 14 different HMOs. Risk HMOs are required to offer at least one 30-day enrollment period each year, but, in practice, most accept enrollment continuously. Although HCFA provides beneficiaries some general information about HMOs when beneficiaries first become eligible for Medicare, they typically learn about their options from the HMOs. Unlike leading private and public health care purchasing organizations, Medicare does not provide its beneficiaries with comparative information about available HMOs. HMOs provide beneficiaries with enrollment forms, collect the forms, and notify HCFA of enrollments. Beneficiaries may disenroll from a plan as often as once each month.

As discussed in the BBA conference report, the BBA included provisions that would have amended Medicare's enrollment policy in the following ways: Each October, Medicare would have an annual, coordinated election period, or "open season," during which beneficiaries could change their Medicare election. Elections of coverage would become effective the following January 1.
However, newly eligible Medicare beneficiaries could elect coverage and have their choice become effective when they first became eligible for benefits. The Secretary of the Department of Health and Human Services (HHS) would conduct a nationally coordinated educational and publicity campaign during October. At least 15 days before the election period, the Secretary would mail all Medicare beneficiaries and prospective beneficiaries general election information and information comparing benefits, premiums, and measures of quality at available health plans. Disenrollment could occur only within 90 days of the time elected coverage began. Beneficiaries who disenrolled could elect a different HMO for the remainder of the year. This disenrollment option would only apply the first time a beneficiary enrolled in a particular managed care plan and would not apply more than twice for any beneficiary in a calendar year. Exceptions would include disenrollment for beneficiaries who moved out of a service area.

Establishing a limited enrollment period could slow managed care growth for two reasons. First, marketing practices possible under a limited enrollment policy might be less effective in attracting beneficiaries to managed care. These changes could have a positive by-product, however, as the incidence of in-home sales and associated abusive sales practices would likely diminish. Second, restrictions on health plan switches outside the established enrollment period—even if no restrictions existed on changing to traditional fee-for-service Medicare—could deter some beneficiaries from enrolling in HMOs.
In particular, a limited enrollment period policy would have three main disadvantages for beneficiaries: (1) dissatisfied beneficiaries and those encountering problems gaining access to desired treatments could be exposed to higher health care expenses, (2) beneficiaries who spend part of each year in a different location ("snowbirds") could find they had no choice other than fee for service, and (3) all beneficiaries enrolled in HMOs could face delays in obtaining physician appointments at the start of each benefit year because of a large volume of new beneficiaries seeking services.

One-on-one sales presentations, often conducted in the privacy of beneficiaries' homes, leave beneficiaries vulnerable to abusive sales tactics and serious marketing problems. Reported abuses include HMO representatives' lying to prospective enrollees about the benefits of HMO enrollment, pressuring beneficiaries to join HMOs, enrolling beneficiaries who could not make informed enrollment decisions, and obtaining enrollment signatures under false pretenses. Although HCFA cannot determine the frequency of these problems, agency officials are concerned about the potential for in-home sales marketing abuses. According to our HMO survey results, about half of the beneficiaries who enrolled in a Medicare HMO as individuals (not as members of an employer group) in 1995 participated in a one-on-one sales presentation. However, the likelihood of a beneficiary's participating in a one-on-one sales presentation varied greatly by HMO.

A limited enrollment period lasting just 1 or 2 months each year could make it impractical for HMOs to conduct as many in-home sales presentations. Each one-on-one meeting can last from 1/2 hour to 2 hours and is conducted by an HMO sales agent who sells only to Medicare beneficiaries. HMO representatives told us that sales agents who sell Medicare plans sell them exclusively.
Agents are trained not only in the details of their HMO's offering, but also in traditional Medicare and the rules governing Medicare managed care. Some HMO representatives implied that maintaining a large, dedicated Medicare sales force year-round would be impractical if most sales would take place during a 1- or 2-month limited enrollment period. Furthermore, HMO representatives said it would be unrealistic to expect non-Medicare agents to be able to sell Medicare products. Because beneficiaries are particularly susceptible to abusive sales practices in their homes, reducing or eliminating in-home sales presentations would better protect beneficiaries from the possibility of sales abuses. This protection, however, would be a by-product of the enrollment policy change and could be achieved by more direct methods.

Under the BBA, before the start of a limited enrollment period, the Secretary of HHS would have been responsible for producing and distributing (1) a list of plans available in a given area and (2) comparative information about those plans, including benefits, premiums, and measures of quality. The Secretary would also have been responsible for maintaining a toll-free number that beneficiaries could call to receive specific information. Beneficiaries' ability to make informed health care choices would be enhanced by the availability of objective, comparative information and access to a hot line. We recently reported that beneficiaries who wish to compare plans face difficult, if not daunting, steps. First, they must call a toll-free telephone number to obtain a list of HMOs available in their area. Next, they must contact those HMOs and request marketing brochures. Finally, they must compare plans' benefit packages and cost information described in the brochures. The last step can be difficult because HMOs are not required to use standard formats or terminology in describing their products.
A limited enrollment period would facilitate an annual HMO marketing campaign and create a natural opportunity for HCFA to distribute comparative plan information to beneficiaries. Some experts believe that HMOs’ concentrated advertising during the open season would help inform beneficiaries of alternative Medicare options. Another potential advantage is that any comparative information produced by HCFA would be up to date at the time most beneficiaries were making health care choices. HMO representatives told us that if Medicare established a limited enrollment period, plans would turn to a marketing approach more conducive to a limited enrollment time frame. HMOs would focus more of their marketing dollars on mass media campaigns—including print, radio, and television advertising—concentrated around Medicare’s enrollment season. Some experts believe that a concentrated mass marketing campaign could increase beneficiary awareness of Medicare options, including managed care. These experts suggest that the Medicare advertising blitz could be similar to the advertising campaigns that occur in the Washington, D.C., area during the Federal Employees’ Health Benefits Program (FEHBP) open season each fall. Whether Medicare HMOs’ advertising campaigns would be as intense as FEHBP plans’ is uncertain. FEHBP subscribers represent about 9 percent of the Washington, D.C., metropolitan area’s total population. Nationwide, Medicare beneficiaries represent about 14 percent of the total population. However, only about 1 in 10 Medicare beneficiaries currently enrolls in managed care. If advertising intensity is driven by the proportion of potential customers, the intensity of a campaign for Medicare beneficiaries would depend upon whether HMOs believe the potential market is all Medicare beneficiaries or only 1 in 10. 
Representatives of HMOs, however, believe that an advertising campaign without the benefit of one-on-one sales would be less effective at convincing Medicare beneficiaries to try managed care. Representatives of most HMOs we contacted stated that limiting Medicare’s enrollment period would slow the growth of managed care because plans would not (1) have time to educate beneficiaries about Medicare’s managed care option and (2) be able to hire enough trained sales staff on a seasonal basis to answer beneficiary questions during the limited enrollment period. Although abuses have been reported in conjunction with one-on-one sales, HMOs believe this sales approach is both necessary and effective, in part because many beneficiaries have had no experience with managed care. The effectiveness of an FEHBP-like mass marketing campaign for Medicare may depend on whether HCFA develops ancillary mechanisms to inform beneficiaries. Participants in FEHBP do not rely exclusively on mass marketing to obtain information. All active and retired FEHBP enrollees are given comparative information on available plans and can obtain detailed, plan-specific information brochures that follow a standard format. Active federal workers can also discuss their health care options with colleagues or their agency’s benefits administrator. Furthermore, most workers can easily attend health fairs sponsored by their agency, where health plan representatives distribute literature and answer questions. The 20 percent of FEHBP members who are retired also have some advantages over individuals in Medicare. As former federal workers, FEHBP participants are familiar with the program’s enrollment and disenrollment rules. In addition, federal retirees receive guidance from the National Association of Retired Federal Employees. This organization, with over 1,700 chapters nationwide, works closely with FEHBP in answering questions and resolving problems. 
Finally, some members of the Congress sponsor annual FEHBP health fairs attended by retirees. Requiring third-party contractors, or brokers, to conduct all enrollment activities would better protect beneficiaries from abusive sales practices, minimize the opportunity for HMOs to favorably select only the healthiest beneficiaries, and provide beneficiaries a convenient source of objective information. Beneficiaries might welcome such a change in enrollment practices partly because they would have the convenience of “one-stop shopping” and also appreciate a source of objective, comparative information. A recent focus group conducted for HCFA found that most beneficiaries did not view insurance plan representatives as trustworthy sources of impartial information. Nonetheless, HMO representatives maintain that personal contact with an HMO sales agent can be reassuring to beneficiaries and that industry sales abuses are few. HCFA plans to test the effect of third-party enrollment contractors in a future Medicare demonstration project. Scheduled to begin sometime in 1997, this project will use a third-party contractor to conduct marketing, education, counseling, and enrollment activities. HCFA’s design—as of August 1996—will permit HMOs to provide information to beneficiaries directly and even help beneficiaries fill out enrollment forms. The third-party contractor will provide comparative information about the plans, counsel beneficiaries who want to consult with a neutral party, and perform all enrollment transactions. The potential effect of this approach on enrollment is not clear, and the demonstration’s effects may not be fully evaluated for years. If the Medicare program relies solely on enrollment brokers and prohibits HMOs from marketing to individual beneficiaries, however, growth of Medicare managed care might slow. 
HMO representatives with whom we discussed this issue were concerned that brokers would be less knowledgeable about the advantages of specific plans and thus not as effective as sales agents in selling managed care to Medicare beneficiaries. Recent experience in the Medicaid program suggests that prohibiting direct marketing by HMOs could slow enrollment growth. Because of abuses, Florida and New York prohibited HMOs from marketing to beneficiaries directly. Both states experienced significant declines in Medicaid HMO enrollment. Florida reported that, in a recent 3-month period since banning direct marketing, enrollment levels fell by an average of 10,000 enrollees per month. New York temporarily suspended its ban on direct marketing to help increase HMO enrollment but implemented other steps to prevent HMO marketing abuses. In fact, in many Medicaid programs in which beneficiary participation in managed care is voluntary, states rely on HMOs to inform beneficiaries about managed care and encourage them to enroll. Although a limited enrollment period could add some consumer protections for beneficiaries, it could expose dissatisfied beneficiaries to additional out-of-pocket costs. Under the limited enrollment period policy discussed here, beneficiaries dissatisfied with their HMOs would have three choices: (1) remain in the HMO, (2) switch to traditional Medicare fee for service and pay the deductible and coinsurance for submitted claims, or (3) switch to traditional Medicare fee for service and purchase a Medigap policy if one was available to them. Beneficiaries dissatisfied with access to desired treatments could remain in their HMO and purchase those services privately. However, going outside the HMO for treatment or changing to fee for service would cost most beneficiaries more money than they would have spent had they been able to enroll in another HMO. 
Changing to traditional fee for service could be an expensive option for many dissatisfied Medicare HMO members. HMOs are cheaper than fee for service for many Medicare beneficiaries because 65 percent of HMOs do not charge a monthly premium (so-called “zero premium HMOs”). In addition, HMOs frequently offer benefits, such as outpatient prescription drugs, that are not provided by traditional Medicare. Beneficiaries in HMOs are responsible for copayments for certain services but often for fewer services than in a fee-for-service arrangement. Beneficiaries in fee for service who need services covered under Medicare part B must meet a deductible and pay a portion of additional expenses. Dissatisfied HMO members who change to fee for service may want to purchase supplemental health insurance, known as Medigap, to help cover out-of-pocket costs. However, Medigap policies can cost over $1,000 per year—more than most beneficiaries would pay to an HMO. Furthermore, beneficiaries have no guarantee that a Medigap policy will be available upon disenrolling from an HMO. During the 6 months after a person turns age 65 and enrolls in Medicare part B, federal law guarantees beneficiaries the opportunity to purchase a Medigap policy. After that, Medigap insurers are permitted to refuse to sell policies because of an applicant’s health history or status. We recently reported that, although some insurers do exercise their option to refuse coverage, all beneficiaries currently have at least one Medigap policy available to them after the 6-month guarantee period, regardless of their health history or status. Nevertheless, no federal requirement exists to ensure that beneficiaries will always have such alternatives. Beneficiaries who temporarily relocate for the winter, commonly known as “snowbirds,” might find joining a Medicare HMO impractical and would probably choose the fee-for-service option instead.
HMOs are required to provide emergency, but not routine, care to members outside the HMO service area. Furthermore, HMOs are required to disenroll any member who leaves his or her HMO’s service area for more than 90 days. Currently, snowbirds can disenroll from an HMO and switch to fee for service or another HMO each time they relocate. If a limited enrollment period policy prohibited such plan switching, snowbirds would be left with only one realistic option—enrolling in Medicare’s fee-for-service program. Although data are not available on the number of Medicare snowbirds, their existence is widely recognized. HMOs might respond to a limited enrollment period policy by offering flexible service arrangements not commonly available today, such as reciprocal agreements and point-of-service options, partly to attract snowbirds. Reciprocal agreements among health plans—which permit HMO members traveling outside their plan service area to receive routine care and nonemergency services from another HMO—would make temporary relocations less problematic for beneficiaries who wished to enroll in managed care. Several HMOs now offer reciprocity but only within their own companies or affiliates. For example, a member of the Kaiser Foundation Health Plan in Los Angeles may receive services from Kaiser HMOs in other parts of the country. A representative of the American Association of Retired Persons said her organization is interested in encouraging the development of reciprocal agreements among plans, although no such agreements currently exist. Similarly, if many HMOs offer the point-of-service option—a hybrid of HMOs and fee-for-service plans—a Medicare policy limiting plan switching would be less of a deterrent to snowbirds who wished to enroll in HMOs. Most of the HMOs we contacted believe that a limited enrollment period would cause beneficiaries to face delays in receiving health care services at the beginning of each health benefit year. 
HMO representatives said a heavy demand for services would be caused by new Medicare members’ “trying out” their new physicians soon after enrolling. One HMO told us that a large percentage of that HMO’s new members see their primary care physician within 60 days of enrolling to receive health care or renew a prescription. In fact, some plans strongly suggest that new members undergo initial health assessments within 30 days of joining. Although demand for provider services also increases after the start of a commercial contract, the effect of an influx of new Medicare members is greater because Medicare beneficiaries tend to use physician services more frequently than younger HMO members. Beneficiaries who would likely face delays in scheduling physician office visits might be those who join HMOs that employ providers directly (“staff model” HMOs) or have exclusive contracts with providers (“captive group model” HMOs) or those who join HMOs with relatively small provider networks. Beneficiaries who join HMOs with exclusive provider arrangements will, by definition, change providers when changing plans. New members in HMOs with small provider networks are more likely to need to select a new provider than beneficiaries joining plans with large networks. However, for some beneficiaries, joining an HMO or switching among plans will not require switching physicians and an introductory visit because physicians often contract with multiple HMOs. Obtaining appointments at the start of each health benefit year might be difficult for beneficiaries in some HMOs because a limited enrollment period policy would probably result in dramatic, once-a-year membership spikes. From December 1994 to December 1995, 24 plans enrolled more than 10,000 new members, including 1 that enrolled close to 55,000 members. (Table 1 shows the distribution of new members among plans.) 
These membership increases, however, were absorbed by the plans over 12 months, not during a single month, as might occur under a limited enrollment period policy. The annual enrollment change resulting from a limited enrollment period could be difficult for HMOs to predict accurately; any unanticipated HMO enrollment growth could contribute to provider access problems. Representatives of one large HMO described what happened when they grossly underestimated the response to their Medicare product in a new market area. Although the plan had contracted with a large number of physicians, it underestimated the need for primary care physicians and certain specialists. Demands on plan physicians’ time and the level of beneficiary complaints were so high that some physicians quit. The plan contracted with new physicians (a process that took about 6 months) and cut back its marketing efforts to hold down additional enrollment, but 1-1/2 years passed before the plan’s provider network could comfortably meet members’ demand for services. A January start date for the Medicare benefits year, as specified in the BBA, could cause longer delays in receiving health services than if another time of year was selected. January is already a particularly busy month for providers because so many members of employer-based health plans begin their benefits years on January 1. Furthermore, according to HMO representatives, demand for physician office visits is already high in January because of winter respiratory illnesses. However, choosing a month other than January could increase the number of employers that are inconvenienced, as discussed in the next section. Limiting Medicare’s enrollment period would create varying degrees of administrative problems for employers and could, as a result, discourage some employers from offering managed care to their retirees. 
Our survey results indicated that in January 1996 about 21 percent of all beneficiaries in Medicare risk HMOs enrolled through employer groups. Moreover, between January 1995 and January 1996, the number of Medicare beneficiaries in HMOs sponsored by employer groups grew by 17.5 percent. The number of Medicare beneficiaries individually enrolled in HMOs grew even more—by 36.2 percent. (See fig. 3.) Almost all employer groups offering coverage through the risk HMOs we surveyed limit the period during which members can enroll, but not all these groups choose the same times of year to enroll members and to begin benefits. Under a limited enrollment policy, unless exempted from complying with Medicare’s specific enrollment period and effective date, some proportion of employers would need to shift their health benefits calendar. The BBA proposed an October enrollment period with Medicare beneficiaries’ choices effective January 1. This timing would have coincided with the dates used by 62 percent of the employers offering managed care to retirees in 1995. (See fig. 4.) If legislation mandates a specific health benefits open season for all Medicare beneficiaries, it is unlikely that employers with different benefit seasons would all respond in the same manner. Rather, these employers could take one of three courses: (1) shift all employees’ and retirees’ benefits seasons and run a single season that would coincide with Medicare’s season, (2) shift seasons for Medicare retirees only and run one season for retirees and another for active employees, or (3) choose not to offer the Medicare risk program to retirees. Some employers could face problems shifting their benefits season to coincide with Medicare’s. Employers and benefit consulting firms we contacted discussed two major reasons why nearly 4 of 10 employers have their group coverage begin in a month other than January. 
First, employers often select a benefits year that coincides with the start of their fiscal year, which may not be January. Second, employers with seasonal businesses often choose slow business months to conduct an enrollment process. For example, representatives of a major benefits consulting firm and several national retailers told us that because the winter holiday season is the busiest and most demanding time of year for retailers, these employers try to avoid other activity at that time. One of the health benefits consultants we contacted said that his firm had tried unsuccessfully to get some of its clients to begin their coverage in a month other than January to ease the firm’s administrative burden. To comply with a mandated health benefits season for Medicare, some employers might choose to run two seasons—one for retirees and one for active workers. One business group told us that some employers already run two separate seasons because retirees tend to take more time and ask more questions of health benefits personnel than do active employees. However, executives of one national health benefits consulting firm also said that running two separate seasons costs employers more money than running a single season. Executives of one large national retailer anticipated that running two health seasons would create serious administrative problems. The retailer would have to (1) untangle its contracts with HMOs so that coverage for Medicare-eligible retirees could be separated from coverage for active employees, retirees, and retirees’ dependents under age 65; (2) renegotiate contracts with plans; and (3) revise internal policies and communications. Executives said untangling contracts could take 2 to 3 years to complete. They further noted that if they ran two seasons, members of the same family could find themselves with different health benefit years. 
Because of all these problems, the executives said they probably would not offer Medicare risk plans if they had to change benefit years. They further predicted that other employers whose benefits seasons would not coincide with Medicare’s would do the same. Employers who were willing to switch their health benefits season would probably need 9 months to 1 year of planning time to make the transition, according to representatives of employers and benefit consulting firms. For example, one retailer we contacted had been operating a single season for employees and retirees with a benefits year beginning at the start of its fiscal year on February 1. This company recently shifted the start of its benefits year for its active employees because the February health benefits year required a November or December enrollment period, which interfered with holiday business. The retailer started actively planning 1 year before the change. It encountered some administrative difficulties but found that making the change was relatively inexpensive. The California Public Employees’ Retirement System (CalPERS) also recently shifted its health benefits season for both employees and retirees. Before this change, benefits became effective on August 1; now benefits are effective January 1. CalPERS changed its season to coordinate with preferred provider organizations and other state benefit programs that operate on a calendar year. CalPERS found the process of shifting its health benefits cycle manageable and not very costly but did need about 15 months to prepare for the change. If a new enrollment policy also limited HMO members’ opportunities to disenroll and change to fee for service, the Medicare program might save some money; however, the policy could also result in reduced beneficiary protections, increased beneficiary dissatisfaction, and slower HMO growth. Limiting opportunities for beneficiaries to disenroll from HMOs mid-year might generate some cost savings for Medicare. 
These savings would occur because payments to HMOs are based on the assumption that HMO enrollees’ health and medical requirements are the same as those of the average beneficiary in fee for service. However, beneficiaries who leave managed care plans and switch to a fee-for-service arrangement are not average—they tend to use more services and incur higher costs than the average fee-for-service beneficiary. Nonetheless, our analysis indicates that Medicare’s maximum potential savings from limiting disenrollment might be small, relative to overall program expenditures, because few managed care enrollees change to fee for service. To quantify potential savings, we studied the behavior of all 738,000 California Medicare beneficiaries who were enrolled in a risk HMO at the start of 1994. Of the beneficiaries who did not change residences, only 15,772 switched from managed care to fee for service during 1994. Medicare paid fee-for-service claims for 11,382 of these beneficiaries, amounting to almost $73 million. If these beneficiaries had not been allowed to disenroll from their plans, the Medicare program would have paid $42 million in capitated payments to HMOs to cover these same beneficiaries. Thus, the potential savings of limiting disenrollment would have been, at most, $31 million in California during 1994—compared with total Medicare risk HMO expenditures in California of $4.2 billion. Potential savings, as a percentage of payments to HMOs, may be slightly higher in states other than California. Beneficiaries in California have many HMOs from which to choose and can readily join a competing HMO if dissatisfied with their own. In other states, however, beneficiaries have fewer choices, and the rate of changing to fee for service among dissatisfied beneficiaries may be higher than in California.
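The upper-bound arithmetic of the California estimate can be sketched in a few lines. This is a minimal illustration using the rounded 1994 figures reported in our analysis; the variable names are ours, not part of the underlying data.

```python
# Rounded figures from the 1994 California analysis described above.
ffs_claims_paid = 73_000_000            # fee-for-service claims Medicare paid for disenrollees
capitated_alternative = 42_000_000      # capitation HMOs would have received for the same group
total_ca_risk_spending = 4_200_000_000  # total 1994 Medicare risk HMO spending in California

# Maximum potential savings: claims actually paid minus the capitation
# payments that would have been made had disenrollment been barred.
max_savings = ffs_claims_paid - capitated_alternative
share_of_spending = max_savings / total_ca_risk_spending

print(f"${max_savings:,}")         # $31,000,000
print(f"{share_of_spending:.1%}")  # well under 1 percent of California risk HMO spending
```

As the final ratio shows, even this upper-bound estimate is less than 1 percent of California’s risk HMO expenditures, which is why we characterize the potential national savings as small.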
Because the savings from limiting mid-year switches to fee for service grow with the rate of such switching, the potential savings, as a percentage of payments to HMOs, may be higher in states with few HMOs. However, national Medicare savings would still likely be small because California represents about 44 percent of all Medicare risk-contract HMO expenditures. The less restrictive the disenrollment policy is—in other words, the more opportunities beneficiaries have to change to fee for service—the smaller the potential savings. For example, if beneficiaries were permitted to disenroll and switch to fee for service during the first 90 days of membership, Medicare would realize some savings, but less than under a more restrictive disenrollment policy. Our analysis of 1994 California data indicates that Medicare would have saved, at most, $22 million if beneficiaries had been permitted to disenroll to fee for service only within the first 90 days. These estimated savings probably represent the upper limit of what Medicare could have saved in California in 1994. Our estimates assume that beneficiary behavior and enrollment patterns would not change as the result of a limited disenrollment policy. However, as the following section discusses, beneficiary behavior will likely be affected by such a policy. (See app. III for further information regarding our analyses of potential Medicare savings.) According to HCFA officials and beneficiary advocates, limiting beneficiaries’ ability to disenroll from plans would remove a valuable beneficiary protection. Medicare’s current policy allows any beneficiary who is dissatisfied to disenroll and join a new plan or change to fee for service at the end of each month. Changing the disenrollment policy could also weaken plans’ incentive to maintain the quality of the services and care they provide. Finally, without the ability to disenroll, HCFA and HMOs believe that beneficiaries are likely to file more grievances and appeals.
Although most beneficiaries do not change plans frequently, some HMOs have high member disenrollment rates, which can signal member dissatisfaction. We recently reported that one HMO in Miami and one in Los Angeles had 1995 disenrollment rates of 37 percent and 42 percent, respectively. One Miami HMO with high disenrollment rates had a 7-year history of Medicare deficiencies, including those involving beneficiary appeal rights and quality assurance. Thus, although most members appear to be satisfied with their HMO, problems do exist, and the freedom to disenroll provides a course of action for dissatisfied plan members. Some beneficiary advocates believe that to ensure continuity of care, beneficiaries should be able to disenroll from an HMO if their physician leaves the plan. In fact, this may be a common reason for switching HMOs or changing to fee for service. A 1992 study reported that 26 percent of beneficiaries who disenrolled from an HMO cited their doctor’s leaving the HMO as a reason for disenrolling. One large HMO told us that after terminating a contract with one of its physician groups, nearly all 1,668 members assigned to those physicians disenrolled from the HMO. Representatives of the plan believe that these members followed their physicians to a competing HMO that also contracted with the physician group. The Physician Payment Review Commission recently recommended that if a limited disenrollment policy is established, beneficiaries have the right to disenroll before year-end or to purchase services on a special point-of-service basis for the rest of the year if a plan makes a major change in its network of providers during the year. However, the Commission acknowledged that defining the precise circumstances for permitting disenrollment could be difficult. HMO representatives believe that beneficiaries’ current ability to disenroll at the end of any month is good for competition and, thus, good for consumers.
The need to retain members who can disenroll motivates plans to maintain quality, work for member satisfaction, and improve benefits continuously throughout the year. For example, officials at one large HMO told us that it increased benefits three times in 1995 to remain competitive. The HMO increased its pharmaceutical benefit, reduced beneficiary copayments for office visits, and improved its dental coverage. Representatives of several HMOs told us that an enrollment policy that includes a 90-day disenrollment option would be better for beneficiaries than no disenrollment option at all but that the current practice of permitting monthly disenrollment is far better for industry competition and for beneficiaries. Many beneficiaries who disenroll from their risk HMO do so within the first 90 days. For example, of about 326,000 beneficiaries who joined a risk HMO during the first 3 months of 1995, 14.4 percent disenrolled within 1 year or less, but a disproportionate share—5.6 percent—disenrolled in less than 90 days. On the other hand, representatives of one HMO speculated that if beneficiaries were permitted just 90 days to disenroll, short-term disenrollment rates would soar. Beneficiaries who are less than completely satisfied with their HMO might quickly disenroll, rather than give their plan a chance to address their complaints. HCFA officials predict that without the option of disenrolling, dissatisfaction among HMO members would manifest itself in other ways, such as an increase in grievances to HMOs and appeals to HCFA—a prediction that was echoed throughout our visits to HMOs. This prediction is supported by data we obtained from one HMO. In 1995, over 90 percent of this plan’s Medicare group membership was “locked into” the HMO for the year. Because of conditions set by the beneficiaries’ former employers, these members could change plans only during annual enrollment periods.
Group members filed grievances at a rate 100 times greater than that of individual members who could disenroll monthly. Group members filed 60 times more appeals than individual members. HMO representatives speculated that individual members who were dissatisfied simply disenrolled, rather than file grievances or appeals. HCFA officials and nearly all the HMOs we contacted shared a strong belief that limiting disenrollment opportunities would deter some beneficiaries from joining managed care, although none of the representatives could quantify the extent to which this would occur. Managed care is a relatively new concept to some Medicare beneficiaries, and a 1-year lock-in requirement could discourage beneficiaries from trying managed care. Some beneficiaries might not join HMOs because, even if dissatisfied with the care they received or denied a procedure they believed was critical, they would have little recourse available. Medicare has an appeals process in place, but, of course, beneficiaries have no guarantee that the appeal will be resolved in their favor. Some beneficiaries might not enroll in a plan if they knew they would not be able to follow their physicians, should the physicians leave the plan mid-year. Implementation of a limited enrollment period could strain HCFA’s resources by creating a peak load and by increasing HCFA’s responsibilities. HCFA’s enrollment and disenrollment activities would be concentrated in a short period of time, rather than spread out during the year. Also, HCFA would need to provide beneficiaries access to a consumer hot line and comparative plan information, both of which would likely be required under a limited enrollment period policy. HCFA could face problems in completing tasks such as processing enrollments. Currently, HCFA processes about 100,000 transactions a month, or 1.2 million transactions a year, which include enrollments, disenrollments, and status changes. 
Plans electronically submit these data, which are processed by computers at HCFA—generally within a few days. However, problems could arise, such as incomplete data or discrepancies in data, which could require follow-up work by HCFA. Some HCFA officials told us that the agency could manage the peak workload associated with a limited enrollment period. However, representatives of HMOs, other organizations, and even some HCFA officials said the agency sometimes had difficulty managing its current workload and meeting deadlines; they were skeptical of HCFA’s ability to handle a peak workload with current resources. CalPERS and FEHBP both operate a single annual enrollment period and face a peak load each year. CalPERS hires temporary workers and allows the permanent staff to work overtime hours. FEHBP contracts with a private firm to handle enrollment changes for federal retirees. (Each federal agency handles enrollment changes for its current employees.) HMOs that experience a peak load from their commercial business often hire temporary workers or shift employees from other departments within the HMO. HCFA might need to change other activities to accommodate the timing of a limited enrollment period. For example, every year HCFA announces risk HMO capitation payment rates in September. This allows HMOs time to decide whether they will renew their contract and to adjust premiums and benefits before the new contract cycle begins in January. Depending on the timing of the enrollment period, the announcement of the payment rates might need to occur earlier in the year so that HMOs could set premiums and benefits before Medicare’s open season. Sufficient time would also be needed for HCFA to produce and publish comparison charts as well as to review HMOs’ marketing materials. (See fig. 5.) Under a limited enrollment period policy, HCFA would likely be responsible for additional tasks. 
Some tasks would be new for HCFA; for example, the BBA envisioned that the agency would prepare and distribute comparative information. Other tasks would represent expansions of HCFA’s current role—for example, operating an information hot line for beneficiaries and resolving an increased volume of beneficiary complaints. The amount and extent of these tasks would, of course, depend on the specifics of the limited enrollment period policy enacted.

HCFA has efforts under way to produce comparative health plan information but would need to take additional steps to distribute that information to beneficiaries. Two of HCFA’s regional offices have developed charts that compare local HMOs’ premiums and benefits, but these charts—although available upon request—are not widely distributed. The agency is working to make some HMO comparative information available on the Internet but has no plans to distribute printed information directly to beneficiaries. Currently, HCFA intends to leave information distribution to beneficiary advocates and federally supported insurance counselors.

Although HCFA has an information hot line for Medicare beneficiaries with questions about Medicare, the system would likely be inadequate to handle the volume of calls generated under a limited enrollment period policy. Representatives of HMOs, beneficiary advocacy groups, and benefit consulting firms cautioned us that older people need time to understand their options. Older people also seek considerable information before deciding to join an HMO. Some large national brokers operate hot lines for their client companies. These hot lines, staffed by trained counselors who are familiar with Medicare and the company’s specific plan, answer questions posed by the company’s retirees. Officials told us that these hot lines need to be able to handle a large volume of calls.
For example, the hot line for one company (with 57,000 retirees) received about 1,000 calls a day from the retirees during the 1995 enrollment season—even though retirees not changing plans did not have to re-enroll. Some retirees called repeatedly with questions about each step of the application and enrollment process.

HCFA plans to test the distribution of special handbooks and detailed comparison charts as part of its Medicare Competitive Pricing Demonstration Project. These documents would contain information on managed care plans and fee for service with Medigap that would help beneficiaries make enrollment choices. HCFA also intends to make a telephone counseling center and educational seminars available to beneficiaries with questions. However, the demonstration project has already been postponed once. According to HCFA officials, it is now scheduled to begin during 1997.

In addition to preparing comparative information and operating a hot line, HCFA would need both guidelines and procedures under which it would allow beneficiaries to change plans outside the open season. With a limited enrollment period, beneficiaries would be expected to change plans only during the designated open season. However, as in other programs with limited enrollment periods, exceptions would likely be allowed. The BBA specified several conditions under which beneficiaries could change plans outside the enrollment period. Some conditions—for example, a beneficiary’s moving out of a plan’s service area—would be easy for HCFA to evaluate in determining whether a plan switch would be allowed. However, other conditions specified in the BBA would require HCFA to investigate the specific case before making a determination. For example, the BBA would have allowed beneficiaries to disenroll if they could demonstrate that the health plan had materially misrepresented the plan’s provisions in its marketing.
Encouraging enrollment in a managed care plan can help the government’s efforts to reduce high service utilization in the Medicare program without unduly diminishing beneficiary access to services. To the extent that enrollment and disenrollment policy revisions force health plans to retain and serve Medicare’s more costly beneficiaries, the government can counter the effects of the high utilization tendency inherent in unmanaged fee-for-service reimbursement. However, these same policy revisions could produce disincentives and obstacles to greater managed care enrollment—for beneficiaries, health plans, employers, and HCFA—thereby undermining the government’s very effort to lower utilization.

In fact, an annual limited enrollment period, along with restricted disenrollment options, could have little impact on overall Medicare spending. Although such a policy would reinforce the concept of managed care and reduce the opportunities for less healthy HMO enrollees to change to Medicare fee for service, our analysis suggests that the savings might be relatively small. For example, if enrollment and disenrollment had been limited for California beneficiaries in 1994, Medicare savings would have been—at most—$20 million to $30 million. In contrast, Medicare spent $4.2 billion on payments to California HMOs during that year. Moreover, an enrollment policy change would likely have several unintended consequences, including the loss of important beneficiary protections and complications for many employers who offer managed care to their retirees. The result could well be substantially slower growth in Medicare managed care and increased beneficiary dissatisfaction. The magnitude of these impacts would depend, however, on the details of the adopted policy, beneficiary and employer reaction to those details, and the effects of any other policy changes made at the same time.

We provided copies of this report to officials of HCFA’s Office of Managed Care.
HCFA agreed that the monthly disenrollment option is an important consumer protection. Our report indicates that changing Medicare’s current policy of allowing beneficiaries to switch among HMOs or between an HMO and fee for service could have far-reaching consequences. We reported that this view is shared by beneficiary advocates and HMO officials, who also believe that eliminating this option would deter some beneficiaries from joining a managed care plan. HCFA also stated that any analysis of beneficiary choice issues should examine Medigap policy. Our report notes that under current law, beneficiaries have no guarantee that a Medigap policy will always be available to them when they disenroll from an HMO. As a result, they may be reluctant to join an HMO. HCFA commented that it supports changes to the Medigap statute so that beneficiaries dissatisfied with their managed care plan would be able to return to fee for service and to the Medigap policy of their choice. In a 1996 report, we made a similar recommendation. HCFA’s comments appear in appendix IV.

As arranged with your office, unless you announce its contents earlier, we plan no further distribution of this report until 5 days after the date of this letter. At that time, we will send copies to the Secretary of Health and Human Services. We will make copies available to others on request. Please contact me at (202) 512-7114 if you or your staff have any questions. Major contributors to this report are listed in appendix V.

To help us analyze the impact of a limited enrollment period, with limited disenrollment, we looked at the Federal Employees’ Health Benefits Program (FEHBP). We selected FEHBP because it is a large employer-sponsored health insurance program that conducts an annual enrollment period (called an “open season”) and, like Medicare, offers members a choice of health plans. FEHBP is the largest employer-sponsored health insurance program in the world.
The Office of Personnel Management (OPM) administers the program, which went into effect on July 1, 1960. FEHBP currently provides voluntary health insurance coverage for about 9 million people, including 2.3 million active employees, 1.8 million retirees, and 5 million dependents. In fiscal year 1995, FEHBP spent about $17.7 billion to cover its members. FEHBP outperforms Medicare—and probably private plans—in controlling health care costs. The federal government and FEHBP members share program costs. The government contribution is readjusted annually. For 1997, the federal government’s maximum annual contribution is $1,630 for individuals and $3,510 for families. The beneficiary’s contribution for individual coverage ranges from about $400 to $2,000 or more, and family coverage ranges from $800 to almost $5,000. In 1994, 59 percent of FEHBP retirees aged 65 or older also enrolled in Medicare, although enrollment is not mandatory. When a retiree enrolls in Medicare, FEHBP serves as a supplemental insurance policy. FEHBP plans must waive deductibles, copayments, and coinsurance for services covered by both programs. Retirees pay the same premiums as current employees. FEHBP offers a selection of several types of health plans, including many managed care plans. As in Medicare, the number of plans offered to members varies by location. Most of the fee-for-service plans (12 of 15) offer a preferred provider organization option. Under FEHBP, individual health plans establish their own relationships with providers, process individual claims, develop benefits, and devise marketing strategies. Since 1980, the number of HMOs has increased significantly. By 1995, about 29 percent of FEHBP members were enrolled in HMOs. Like the Medicare population, however, a much lower proportion of older retirees were enrolled in HMOs; by 1996, only about 12 percent of FEHBP members aged 65 and older and 10 percent of Medicare beneficiaries were enrolled in HMOs. 
Studies have shown that FEHBP members who choose an HMO are generally younger and healthier than members who select fee-for-service plans. OPM administers FEHBP, although each federal agency collects information and premiums from employees. OPM also interprets the health insurance laws, writes regulations, and resolves disputed claims. OPM approves qualified plans for participation in the program and negotiates with plans nationwide to determine benefits and premiums for the following year. OPM also publishes enrollment and health plan information, including charts that compare benefits and premiums. OPM requires that the same premium be offered to employees and retirees, regardless of their age, gender, or health status. It also requires that national plans offer the same premium nationwide. Local plans may offer local rates. FEHBP holds one annual open season, during which employees and retirees may voluntarily enroll in a plan, change plans or options within a plan, or change from individual to family coverage. Most changes by retirees occur during the first 2 weeks of the enrollment season. In 1996, open season occurred between November 11 and December 9; changes made during open season became effective on January 5, 1997—the first day of the next insurance year. Each year, about 5 to 10 percent of beneficiaries change plans. Most federal employees remain members of FEHBP when they retire; they are familiar with how the open season works and with how to obtain their health plan information. However, retirees who choose to disenroll from the program cannot return unless they had joined a Medicare risk HMO. They are required to sign a form to show they understand that they cannot subsequently rejoin. Retirees can receive health plan information from health fairs; from FEHBP directly; and from the National Computer System (NCS), an Iowa City, Iowa, contractor that conducts retiree enrollment activities. 
Each year, some members of the Congress sponsor local health fairs for federal employees and retirees. Most of the people who attend such fairs are retirees, in part because current employees attend employer-sponsored health fairs. Retirees can also call FEHBP directly to request information or to discuss their options. The Retirement Information Office receives about 6,000 calls a month, with about 25 percent of the calls focusing on health plans. During open season, the Health Benefits Branch receives about 500 calls a day requesting information. NCS, however, does not deal directly with retirees. Occasionally, it receives calls from retirees but refers them to OPM.

For the past 10 years, OPM has had a contract with NCS to handle printing, distribution, processing, and brochure requests. OPM sought a contractor because it wanted to use technology, such as scanning and other automated equipment, that OPM did not have. Also, NCS can hire temporary workers during busy times of the year; OPM does not have the staff to handle retiree enrollment. OPM believes that the third-party contract with NCS is more efficient and less expensive than if OPM were to do the work in house.

About June of each year, OPM designs a health benefits application form and sends it and a computer tape of the retiree rolls to NCS. NCS waits until approximately the first week in September, when the OPM Policy and Information Office produces the final list of plans and premiums. Then, NCS prints a final list of available plans. In addition, it prints the comparative information with a rate sheet and envelopes with addresses. At the end of October, NCS mails to retirees an E-Z application form, an instruction form with the rates, and a return envelope. Retirees who want to change plans return their forms to NCS, which enters the change on its computer and sends the information to OPM weekly during open season. OPM notifies plans of any changes.
When retirees receive the information from NCS, they can request an enrollment change or request additional information on specific plans. Unless they request information from NCS, they will only receive it from their current plans. Those who do not return their forms automatically remain in the plan to which they belonged the previous year. HMOs supply plan information to FEHBP, which distributes it to retirees through NCS. HMOs can also market to retirees through advertisements in newspapers and on radio and television. However, they generally do not contact retirees directly unless a retiree is already a member of the HMO. In contrast, Medicare risk HMOs are responsible for marketing to prospective members; HCFA does little to provide plan information directly to beneficiaries. In addition to doing the same kind of mass media advertising as FEHBP HMOs, Medicare risk HMOs are permitted to conduct one-on-one and group meetings. Medicare HMOs rely heavily on these techniques to attract new members.

To help us understand the impact of a limited enrollment period, we examined the California Public Employees’ Retirement System (CalPERS). As with FEHBP, CalPERS is a large organization that conducts an open season each year and offers members a choice of health plans. For about 35 years, CalPERS has offered health insurance to employees of public agencies. In 1995, CalPERS had about 1 million members and paid $1.5 billion in health care premiums. The organization has two divisions. The Health Plan Administration Division negotiates contracts and rates with the HMOs. The Health Benefit Services Division handles enrollments or changes in plans and conducts educational activities for members. Each year, the Health Benefit Services Division processes about 120,000 enrollment documents. CalPERS offers members a choice of 22 plans. During the open season, plans must accept enrollees regardless of health status, age, or previous medical condition.
CalPERS encourages its members to join an HMO by allowing members to choose from among 16 HMOs, including 9 Medicare risk HMOs. Currently, about 76 percent of CalPERS members are enrolled in HMOs. For people who are eligible for Medicare, the advantage of enrolling in an HMO through CalPERS is that CalPERS will reimburse them for the Medicare part B premium. Retirees who were not enrolled in a CalPERS health plan at the time they retired are not eligible to enroll during their retirement. Also, CalPERS offers HMO benefits, such as prescription drugs, that are better than the benefits people could obtain individually. To make comparisons easier for members, CalPERS requires HMOs to offer similar coverage. In addition, plans cannot charge more than the standard premium, which is the same for anyone enrolling in the specific plan. The amount an employer contributes to a premium varies among the public agencies participating in CalPERS.

CalPERS has one annual open season. During 1996, the dates were changed from an open season beginning May 1 with an effective date of August 1 to an open season beginning September 1 with an effective date of January 1, 1997. CalPERS changed its season to coordinate its deductibles with its preferred provider organizations and with other state benefits such as the vision and dental care programs. The preferred provider organizations with which CalPERS contracts and the other state programs operate on a calendar year. CalPERS officials told us that they found the process of shifting the health benefits cycle manageable and not very costly but that the organization needed about 15 months to prepare for the change.

Retirees who want to change plans visit the CalPERS office in person or submit a written request. Medicare beneficiaries must notify CalPERS in writing of a change in enrollment. CalPERS instructs Medicare beneficiaries to mail their enrollment information directly to the HMO of their choice during open season.
The plan sends the new enrollment information to HCFA. CalPERS officials characterized the peak load associated with open season as a time when the staff members are “basically busier.” To handle the peak load, the organization hires temporary workers and allows its permanent staff to work overtime hours. Educating members is an important task for CalPERS, especially educating older people who fear signing over their Medicare cards to an HMO. CalPERS sponsors retirement seminars for active employees who are within 5 years of retirement. It also offers 4-hour individual sessions for people who will retire soon. During the open season, CalPERS provides generic educational information to its members. For example, CalPERS publishes a booklet annually that describes the features of each plan. It also publishes a companion booklet that contains comparisons of the quality and performance of plans. In 1995, CalPERS sent the booklets directly to all members. In past years, CalPERS held quarterly informational seminars for retirees; however, the seminars were discontinued because of poor attendance.

CalPERS mails an exit survey to members who leave a plan to determine why they left. Last year, it mailed 15,227 surveys to members with basic coverage and 1,535 to members with supplemental and managed care plans. In 1995, CalPERS also sent members a survey that measured member satisfaction. This survey was sent to a random sample of members of various plans. Findings from the exit survey allow CalPERS staff to evaluate the medical care and services the members receive as well as to discuss areas of dissatisfaction with HMO representatives during contract negotiations. CalPERS officials believe that the two surveys provide a balanced perspective on members’ experience with their health plans. CalPERS, like FEHBP, restricts HMOs’ ability to market directly to members, although general marketing takes place statewide.
Plans are not allowed to use gifts as incentives and are prohibited from directly soliciting people who are not members of their plan. CalPERS officials have no data on the number of members who travel seasonally (“snowbirds”). However, they estimate that between 8 and 10 percent of their Medicare enrollees might be snowbirds. To assist such members in receiving health services, CalPERS has encouraged HMOs to develop reciprocal agreements with other plans.

We assumed that a new Medicare enrollment policy might be similar, but not necessarily identical, to the provisions contained in the conference report that accompanied the Balanced Budget Act of 1995 (BBA), H.R. 2491. Therefore, we developed and analyzed a limited enrollment period policy modeled on the BBA. Although other alternatives are available to Medicare beneficiaries, we focused our attention on enrollment in risk HMOs because they currently serve most beneficiaries not in Medicare fee for service.

The hypothetical policy we used to guide our analysis had three basic characteristics: One enrollment period and one date when benefits became effective would be specified. However, beneficiaries could elect coverage when they first became eligible for Medicare benefits regardless of the time of year this occurred. The Secretary of HHS would be responsible for producing and distributing comparative plan information to beneficiaries as well as making a hot line available to them. Beneficiaries could disenroll from an HMO during the year, but they would automatically be enrolled in fee for service. Beneficiaries could switch to another HMO during the year only under limited circumstances, including moving out of their HMO’s service area.
We also analyzed the effect of limiting beneficiaries’ disenrollment options under two alternative scenarios: no disenrollment would be allowed, except under specified circumstances, such as moving out of the health plan’s service area; and disenrollment would be allowed for any reason during the first 90 days after coverage was effective, but no disenrollment would be allowed after 90 days except under specified circumstances. To gather information on the likely effects of a limited enrollment period and limited disenrollment opportunities, we interviewed representatives of 10 Medicare risk HMOs, the American Association of Health Plans, HCFA, national benefits consulting firms, selected large employers who offer managed care options to retirees, Medicare beneficiary advocacy organizations, FEHBP, and CalPERS. In addition, we surveyed HMOs with Medicare risk contracts regarding their employer group business. We analyzed HMO disenrollment data and fee-for-service claims in California to estimate potential Medicare savings from limiting disenrollment. To estimate the potential Medicare savings that a policy limiting disenrollment opportunities might generate, we compared 1994 Medicare expenditures for California beneficiaries who changed from an HMO to fee for service with the expenditures that Medicare would have incurred had these beneficiaries been required to remain in their HMO throughout the year. We limited our analysis to California beneficiaries to reduce the computational burden. Nonetheless, because Medicare HMO enrollment is concentrated in a relatively small number of states—including California—our analysis covers about 36 percent of all Medicare beneficiaries enrolled in a risk HMO in 1994. We selected our sample population using 1994 data from HCFA’s Enrollment Database. 
We identified 738,000 Medicare beneficiaries who met the following criteria: in January 1994 they belonged to a risk HMO, they were eligible for Medicare parts A and B, and they reported living in the same county 1 year later (in January 1995). We then identified a subset of 15,772 beneficiaries who changed to fee for service for 1 or more months during 1994. We computed the amount that Medicare would have paid for each of the 15,772 beneficiaries if they had remained in their HMO for the entire year. This amount varies by beneficiaries’ county of residence and demographic and other factors. We then calculated the amount Medicare actually spent on these beneficiaries in 1994—that is, the capitation payments for the period they were enrolled in an HMO plus their claims payments for the period they were in fee for service. Finally, we estimated potential savings by subtracting the amount Medicare would have paid if the 15,772 beneficiaries had remained in HMOs from the amount Medicare actually paid during the year. To estimate potential savings of a policy that would allow beneficiaries to return to fee for service during the first 90 days, we followed the same steps, but included only those 11,684 beneficiaries who changed to fee for service on April 1, 1994, or later. (These estimates are reported in table 2.) Our estimates are probably upper bounds on potential savings in California. If a limited disenrollment policy discouraged some beneficiaries from initially enrolling in an HMO, potential savings could be lower. Whether potential national savings can be extrapolated using our estimates for California depends on whether beneficiaries switch to fee for service at the same rate in other states as they do in California. Nonetheless, the behavior of Californians would heavily influence estimates of national savings because that state accounted for 44 percent of all payments to Medicare risk HMOs in 1994. 
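The per-beneficiary comparison described above reduces to simple arithmetic, which the sketch below illustrates. Everything in it is hypothetical: the record layout, field names, and dollar amounts are invented for illustration and are not HCFA payment data.

```python
# Illustrative sketch of the savings estimate described above (hypothetical
# data, not HCFA's). For each beneficiary who switched from an HMO to fee
# for service, compare what Medicare actually paid (capitation for the HMO
# months plus fee-for-service claims) with what it would have paid had the
# beneficiary remained in the HMO for all 12 months.

def potential_savings(beneficiaries):
    total = 0.0
    for b in beneficiaries:
        full_year_hmo_cost = b["monthly_capitation"] * 12
        actual_cost = b["monthly_capitation"] * b["hmo_months"] + b["ffs_claims"]
        total += actual_cost - full_year_hmo_cost
    return total

# Two hypothetical beneficiaries: one whose fee-for-service claims far
# exceeded the forgone capitation payments, and one whose claims did not.
sample = [
    {"monthly_capitation": 400.0, "hmo_months": 6, "ffs_claims": 4000.0},
    {"monthly_capitation": 400.0, "hmo_months": 9, "ffs_claims": 900.0},
]
print(potential_savings(sample))  # a positive total indicates potential savings
```

In the actual analysis, the capitation amount varied by county of residence and by demographic and other factors, so a realistic implementation would look up a rate for each beneficiary rather than apply a single monthly figure.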
To collect information on contracts between Medicare HMOs and employer groups, we mailed a survey to all 118 HMOs that had risk contracts in effect on January 1, 1995. Eighty-three percent of the HMOs responded to our survey and provided us with summary data on retiree group contracts, including whether the contracts had a limited enrollment period and a lock-in requirement.

Medicare: HCFA Should Release Data to Aid Consumers, Prompt Better HMO Performance (GAO/HEHS-97-23, Oct. 22, 1996).
Medicaid: States’ Efforts to Educate and Enroll Beneficiaries in Managed Care (GAO/HEHS-96-184, Sept. 17, 1996).
Medigap Insurance: Alternatives for Medicare Beneficiaries to Avoid Medical Underwriting (GAO/HEHS-96-180, Sept. 10, 1996).
Medicare HMOs: Rapid Enrollment Growth Concentrated in Selected States (GAO/HEHS-96-75, Jan. 18, 1996).
Medicare: Increased HMO Oversight Could Improve Quality and Access to Care (GAO/HEHS-95-155, Aug. 3, 1995).
Pursuant to a congressional request, GAO reviewed how a limited enrollment period would affect the Medicare program, private health plans, beneficiaries, and employers who provide Medicare supplemental benefits to retirees, focusing on: (1) the growth of Medicare's managed care program; (2) employers' attempts to administer their respective benefits seasons; (3) taxpayer savings measured against beneficiary protections; and (4) the resources needed by the Health Care Financing Administration (HCFA), which runs Medicare's day-to-day operations. GAO noted that: (1) changing Medicare's current policy that allows beneficiaries to switch among health maintenance organizations (HMO) or between an HMO and fee for service monthly would have far-reaching consequences for the Medicare program, beneficiaries, HMOs, employers, and HCFA; (2) the specific effects would depend on the limits placed on switching plans; (3) any change that restricts beneficiary opportunities to enroll or disenroll would likely slow the growth of Medicare managed care; (4) a limited enrollment period for Medicare could have two principal advantages: (a) to improve the quality and distribution of managed care information to beneficiaries: a focused enrollment period would create a natural opportunity for HCFA to provide objective, comparative information about health plans; and (b) to make impractical the current practice of in-home sales of HMOs, a source of marketing abuses, which are difficult for HCFA to deter; (5) a limited enrollment period could also have several of the following disadvantages, the combined effect of which could slow Medicare managed care enrollment growth: (a) lessen the effectiveness of marketing Medicare HMOs: HMOs would likely focus more of their marketing dollars on mass media campaigns concentrated around Medicare's enrollment season, but beneficiaries unfamiliar with managed care might not receive enough specifics through mass marketing to appreciate any advantages 
offered by an HMO over traditional fee-for-service Medicare; (b) lessen the attractiveness of HMOs to beneficiaries: the only choice available to dissatisfied HMO enrollees might be to change to fee for service and pay either Medicare's deductibles and coinsurance or, if available, premiums for a supplemental Medigap policy; and (c) pose considerable administrative obstacles for employers: accommodating Medicare's schedule could be so administratively difficult that some employers might simply stop offering a managed care option to their retirees; (6) limiting beneficiaries' option to change to fee-for-service Medicare except during the officially appointed open season could also produce the following mixed effects: (a) Medicare might achieve modest savings on money now spent on services for HMO members who change to fee for service; and (b) beneficiaries would lose an important consumer protection and might be less willing to enroll in managed care; (7) ultimately, changing Medicare's HMO enrollment and disenrollment policies could have unintended effects; and (8) savings could be offset if policy changes also led to slowing or reducing the enrollment of Medicare beneficiaries in HMOs.
program baseline was finalized in March 2012, and since that time costs have remained relatively stable. Table 1 notes the significant cost, quantity, and schedule changes from the original program baseline and the relative stability since the new baseline was established. At the time the new F-35 acquisition program baseline was finalized, it did not identify new initial operational capability (IOC) dates for the three military services. The following year DOD issued a memorandum noting that Marine Corps and Air Force were planning to field initial operational capabilities in 2015 and 2016, respectively, and that the Navy planned to field its initial capability in 2018. The memorandum emphasized that the Marine Corps and Air Force initial operational capabilities would be achieved with aircraft that possess initial combat capabilities, and noted that those aircraft would need additional lethality and survivability enhancements to meet the full spectrum of warfighter requirements in the future. These new parameters represented a delay of 5 to 6 years from the program’s initial 2001 baseline and a reduction in the capabilities expected at IOC. In March 2005 we recommended that DOD implement an evolutionary, incremental approach to developing and fielding the F-35—then known as the Joint Strike Fighter—to ensure that the warfighters would receive an initial combat capability that, at a minimum, would meet their most immediate needs. Again in March 2010, we recommended that DOD identify the absolute minimum combat capabilities that would be acceptable by each of the military services to field their initial operational capabilities and establish reasonable, realistic timeframes for achieving those requirements. 
In both instances, we noted that the military services should consider trading off desired capabilities in order to more rapidly field aircraft with an initial set of usable capabilities and that any capabilities not needed to meet immediate warfighting needs should be deferred to a future development increment. In both instances, DOD agreed with the intent of our recommendation, but believed that its existing program management practices were sufficient.

Delays in the testing of critical mission systems software have put the delivery of expected warfighting capabilities to the Marine Corps at risk, and could affect the delivery of capabilities to the Air Force and Navy as well. F-35 developmental flight testing is separated into two key areas: mission systems and flight sciences. Mission systems testing is done to verify that the software and systems that provide critical warfighting capabilities function properly and meet requirements, while flight science testing is done to verify the aircraft’s basic flying capabilities. In a March 2013 report we found that development and testing of mission systems software was behind schedule, due largely to delayed software deliveries, limited capability in the software when delivered, and the need to fix problems and retest multiple software versions. These same challenges continued through 2013, and as a result progress in mission systems testing has been limited. The Director of Operational Test and Evaluation (DOT&E) predicts that the delivery of expected warfighting capabilities to the Marine Corps could be delayed by as much as 13 months. Delays of this magnitude could also increase the already significant concurrency between testing and aircraft procurement and result in additional cost growth. Although mission systems testing is behind, the F-35 program has been able to accomplish nearly all of its planned flight science testing.
The program also continued to make progress in addressing key technical risks, although some of that progress has been limited. While the F-35 program was able to accomplish all of the mission system test flights it had planned in 2013, it did not accomplish all of the planned test points, falling short by 11 percent. The F-35 program planned to fly 329 mission systems test flights and accomplish 2,817 test points in 2013. The program actually flew 352 test flights, exceeding the goal, but only accomplished 2,518 test points. According to program and contractor officials, slow progress in developing, delivering, and testing mission systems software continues to be the program’s most significant risk area. The F-35 program is developing and fielding mission systems software capabilities in blocks: (1) Block 1, (2) Block 2A, (3) Block 2B, (4) Block 3i, and (5) Block 3F. Each subsequent block builds on the capabilities provided in the preceding blocks. Blocks 1 and 2A provide training capabilities and are essentially complete, with some final development and testing still underway. Blocks 2B and 3i provide initial warfighting capabilities and are needed by the Marine Corps and Air Force, respectively, to achieve initial operational capability. Block 3F is expected to provide the full suite of warfighting capabilities, and is the block the Navy expects to have to achieve its initial operational capability. Developmental testing of Block 2B software is behind schedule and will likely delay the delivery of expected warfighting capabilities. The delivery of this software capability is of high near-term importance because it provides initial warfighting capability for the overall F-35 program, and is needed by the Marine Corps to field its initial operational capability in July 2015. As of January 2014, the program planned to have verified the functionality of 27 percent of the software’s capability on-board the aircraft, but had only been able to verify 13 percent.
This leaves a significant amount of work to be done before October 2014, which is when the program expects to complete developmental flight testing of this software block. According to DOT&E, Block 2B developmental testing will not be completed as scheduled and could be delayed by as much as 13 months, as the program has had to devote time and resources to addressing problems and completing development of prior software blocks. Delays of this magnitude would mean that the Marine Corps will likely not have all of the capabilities it expects by July 2015, its planned initial operational capability date. At this time it is not clear exactly which of the expected capabilities will be available, as testing is still ongoing. The effects of these delays compound as they also put the timely delivery of Air Force and Navy initial operational capabilities at risk. The Air Force expects to field its initial operational capability in August 2016 with Block 3i aircraft that possess the same warfighting capabilities as the Marine Corps aircraft, but have upgraded computer hardware. The Navy plans to achieve its initial operational capability in August 2018 with Block 3F aircraft that possess the full suite of F-35 warfighting capabilities. Program and contractor officials have stated that while they recognize that the program faces software risks, they still expect to deliver all of the planned F-35 software capabilities to the military services as currently scheduled. However, given the uncertainty in mission systems software testing and the significance of the F-35 to future force structure plans, it is important that the military services have a clear understanding of the specific capabilities that they can realistically expect to receive, and those capabilities that are not likely to be delivered by their initial operational capability dates—the first of which is scheduled for July 2015.
In addition, because the F-35 is DOD’s most costly and ambitious acquisition program, it is important that DOD keep Congress informed about the status of the program. Without a clear understanding of the specific capabilities that will initially be delivered, Congress and the military services may not be able to make fully informed resource allocation decisions. Delays in mission systems software testing could also increase costs. As currently planned, DOD expects to complete developmental flight testing in 2017. If the flight test schedule is extended, the program may have to retain testing and engineering personnel longer than currently expected, which would increase development cost. DOD currently expects to have invested $70.2 billion to procure 359 aircraft by 2017, when developmental flight testing is scheduled to end. Our past reports have concluded that purchasing aircraft while concurrently conducting developmental flight testing increases the risk that problems will be discovered late in testing and additional funding will be needed to rework aircraft that have already been purchased. If F-35 procurement plans remain unchanged and developmental testing continues into 2018, the cost risks associated with concurrency will likely increase, as DOD expects to have invested $83.4 billion in 459 aircraft by that point in time. The F-35 contractor recognizes that additional testing efficiencies are important in order to deliver capabilities on schedule and within cost. One way it plans to gain efficiency is to use test results from one F-35 variant to close out test points for the other two variants in instances in which the variants have common functions. According to test officials, most mission systems testing can be accomplished on any variant and only a limited amount of variant-specific testing is required.
Contractor officials pointed out that this type of efficiency would help mitigate some testing risk, but they also recognized that it will still be difficult to make up the lost time in the test program. They noted that delays in specific test events generally impact the entire test schedule because the ability to conduct future testing is often dependent on the completion of the earlier events. The program accomplished nearly all of the flight sciences testing, including weapons testing, it had planned for 2013. As of December 2013, the program had achieved half of the total number of test points required to complete all flight science testing for the program. Figure 1 below shows the number of flight science test points planned and accomplished for each F-35 variant as of December 2013, and also identifies the total number of test points required for each variant to complete flight science testing. The program made progress despite the fact that flight testing was halted twice at the beginning of the year to investigate and fix cracks in an engine fan blade and leaky fuel hoses. In addition, program and contractor officials emphasized that employee furloughs that occurred in 2013, due to mandatory sequestration, limited the amount of flight testing that could be done during that time as well. Some of the key flight science and weapons testing accomplishments included: Conventional takeoff and landing variant – The program successfully demonstrated the variant’s ability to launch AIM-120 missiles from its internal weapons bay and to refuel while in flight. The program also continued testing the aircraft’s ability to function at high vertical flight angles, although program officials noted that the testing took longer than expected. As of December 2013, the program had accomplished 59 percent of its total expected flight science test points for this variant.
Short takeoff and vertical landing variant – The program successfully demonstrated the STOVL’s ability to take off vertically, launch weapons from its internal weapons bay, and dump fuel when needed. In addition, the program conducted some testing of the variant at sea on an amphibious assault ship—specifically the USS WASP. As of December 2013, the program had accomplished 49 percent of its total expected flight science test points for this variant. Carrier-suitable variant – The program began testing the capability of the aircraft to function at high vertical flight angles. In addition, the program successfully demonstrated the aircraft’s ability to dump fuel when needed. Program and contractor officials noted that the program also began testing to verify that the aircraft’s new arresting hook system could successfully catch a cable on a set of carrier arresting gear installed onshore at the Lakehurst facility. As of December 2013, the program had accomplished 43 percent of its total expected flight science test points for this variant. Developmental testing is not the only testing that the program still has to complete. The F-35 program is also scheduled to begin operational testing in June 2015 to determine that the aircraft variants can effectively perform their intended missions in a realistic threat environment. While the F-35 program made progress addressing some key technical risks in 2013, it continued to encounter slower than expected progress in developing the Autonomic Logistics Information System (ALIS). Over time, we have reported on 4 areas of technical and structural risk that the program identified during flight, ground, and lab testing that, if not addressed, could result in substantially degraded capabilities and mission effectiveness.
In 2013, we found that the program made the following progress in each of those areas: Helmet mounted display - provides flight data, targeting, and other sensor data to the pilot, and is integral to reducing pilot workload and achieving the F-35’s concept of operations. The original helmet mounted display encountered significant technical deficiencies, including display jitter (the undesired shaking of the visor display) and latency (the perceivable lag that occurs in transmitting sensor data), and did not meet warfighter requirements. The program made adjustments to the helmet design, including adding sensors to lessen the display jitter, and redesigning elements to minimize latency. The program tested these design changes in 2013 and found that most of the technical deficiencies had been adequately addressed, and that the helmet’s performance was sufficiently suitable to support Marine Corps initial operational capability in 2015. DOT&E and program test pilots noted that the current night vision camera continues to have problems. The program has identified a new camera that it believes will address those problems, but that camera has not been fully tested to verify its capabilities. Arresting hook system - allows the F-35 carrier-suitable variant to engage landing wires on aircraft carriers; it was redesigned after the original hook system was found to be deficient. The program determined that the original hook assembly was not strong enough to reliably catch the wire and stop the airplane. As a result, the program modified the hook system’s hydraulic components, and made structural modifications to the plane. In March 2013, the program completed a critical design review of the hook system to verify that the new design is sound. Land testing of the redesigned system has been successful, and the program anticipates that it will be ready for carrier testing in October 2014.
Durability - structural and durability testing of the aircraft continued in 2013, and the program completed the first round of this testing on all three variants. The conventional takeoff and landing and short takeoff and vertical landing variants have also started their second round of testing. During this second round of testing, the short takeoff and vertical landing test aircraft developed bulkhead cracks at the equivalent of 17 years of service life. Contractor officials noted that they were working to develop a solution to those cracks, but the total cost and schedule impacts of these bulkhead cracks are unknown at this time. Autonomic Logistics Information System - an important tool to predict and diagnose maintenance and supply issues, automate logistics support processes, and provide decision aids aimed at reducing life-cycle sustainment costs and improving force readiness. ALIS is being developed and fielded in increments. In 2013, the program had to release an update to the first increment because problems were discovered after the increment was released to the testing locations. The additional time to develop and field this update will likely delay the delivery of future increments. The program completed site activation of ALIS systems at some training and testing locations, and is in the process of adding capabilities and maturing ALIS in a second increment to support the Marine Corps’ initial operational capability. DOT&E notes that, although the second increment is scheduled to be delivered in time to support the Marine Corps’ initial operational capability, there is no margin for error in the development schedule. Testing of this ALIS increment is about two months behind schedule, largely due to a lack of test facilities. Program officials note that they are in the process of adding facilities. The third and final increment of ALIS, which provides full capability, is not expected to be released until 2016.
The F-35 program’s high projected annual acquisition funding levels continue to put the program’s long-term affordability at risk. Currently, the acquisition program requires $12.6 billion per year through 2037, which does not appear to be achievable given the current fiscal environment. The program is reducing unit costs to meet targets, but a significant amount of additional cost reduction is needed if it expects to meet those targets before the beginning of full rate production—currently scheduled for 2019. Additionally, the most recent life-cycle sustainment cost estimate for the F-35 fleet is more than $1 trillion, which DOD officials have deemed unaffordable. The program’s long-term sustainment estimates reflect assumptions about key cost drivers that the program does not control, including fuel costs, labor costs, and inflation rates. The program is also focusing on product reliability, which is something that the program can control, and something we have found in our prior best practices work to be a key to driving down sustainment costs. According to program reliability data, each F-35 variant was tracking closely to its reliability plan as of December 2013, although the program has a long way to go to achieve its reliability goals. The overall affordability of the F-35 acquisition program remains a significant concern. As of March 2013, the program office estimated that the total acquisition cost will be $390.4 billion. DOD’s estimated annual funding levels to finish development and procurement of the F-35 are shown in figure 2. From fiscal years 2014 to 2018, DOD plans to increase development and procurement funding for the F-35 from around $8 billion to around $13 billion, an investment of more than $50 billion over that 5-year period. This build-up will occur during years of potential reductions in DOD’s budget as a result of sequestration.
From fiscal year 2014 through fiscal year 2037, the program projects that it will require, on average, development and procurement funding of $12.6 billion per year, with several peak years at around $15 billion. Such a high average annual cost requirement poses affordability risks. At $12.6 billion a year, the F-35 acquisition program alone would consume around one-quarter of all of DOD’s annual major defense acquisition funding. Therefore, any change in F-35 funding is likely to affect DOD’s ability to fully fund its other major acquisition programs. In addition, maintaining this level of sustained funding will be difficult in a period of declining or flat defense budgets and competition with other large acquisition programs such as the KC-46 tanker and a new bomber. These costs do not include the costs to operate and maintain the F-35s as they are produced and fielded. Recognizing the affordability challenges posed by the F-35 program, the Under Secretary of Defense for Acquisition, Technology, and Logistics established affordability unit cost targets for each F-35 variant to be met by the start of full rate production in 2019. The program is likely to be challenged to meet those targets, as the three variants still require anywhere from $41 million to $49 million in unit cost reductions (see table 2). In addition, the program’s current funding and quantity projections indicate that unit costs in 2019 could actually be higher than the targets. The Under Secretary issued a memorandum in April 2013 explaining that affordability constraints are intended to force prioritization of requirements, drive performance and cost trades, and ensure that unaffordable programs do not enter the acquisition process. 
The memorandum goes on to state that “if affordability caps are breached, costs must be reduced or else program cancelation can be expected.” The F-35 program made progress this year in decreasing the unit costs of the conventional take-off and landing and carrier-suitable variants, but the unit cost of the short takeoff and vertical landing variant increased by nearly $10 million. According to program officials, the unit cost of the short takeoff and vertical landing variant increased because the program had to delay the procurement of a number of aircraft into the future, which reduced near-term quantities and made each individual unit more costly, and engine costs were higher than originally estimated. There is still uncertainty surrounding these estimates depending upon how DOD chooses to implement sequestration in future budgets. In addition to the concerns about the affordability of the F-35 acquisition program, there are also significant concerns about the cost of operating and supporting the F-35 fleet over the coming decades. Currently, the Cost Assessment and Program Evaluation (CAPE) office, within the Office of the Secretary of Defense, estimates that the cost to operate and support the fleet over 30 years is likely to exceed $1 trillion, which is 3 times higher than what was projected when the development program began in 2001. CAPE’s estimates also indicate that F-35 operations and support costs could surpass the average cost of legacy aircraft by 40 percent or more, when original estimates indicated that the F-35 would cost less than the legacy aircraft. Program officials recently stated that their estimates indicate that operation and support costs are likely to be closer to $860 billion, and not the $1 trillion estimated by CAPE. According to CAPE, program, and contractor officials, F-35 sustainment cost estimates differ as the assumed future values for key cost drivers, like inflation rates and fuel costs, vary among cost estimators. 
CAPE officials emphasize that the difference between cost estimates is almost entirely attributable to the use of different inflation indices. Table 3 below lists the top cost drivers in the F-35 operation and sustainment estimates. While it is important for the program to consider potential reductions or increases in the variables listed below as it estimates the F-35’s long-term operation and sustainment costs, some of those variables can be directly controlled by the program office while others, like inflation rates and fuel costs, cannot. The F-35 program office and prime contractor are working to make the long-term program more affordable. Starting in September 2013, they established a sustainment cost initiative team to meet regularly and discuss options for driving down sustainment costs. According to contracting officials, they also developed a management team dedicated to improving the aircraft’s prognostics and health management system. Additionally, the program is awaiting the results of a business case analysis of the costs and benefits of various sustainment options. The first phase of that analysis, completed in 2012, found that relying on government personnel for sustainment processes would be less costly than having contractors do the work. The second phase, which began in 2013, is examining multiple aspects of the F-35 sustainment strategy, including the program’s approach to maintenance management and supply chain support. The findings of this second phase are expected to identify affordability opportunities and areas for additional future analysis. Those findings are expected to be provided to the program by March 2014. As the program faces key decisions about its F-35 operation and support strategy, reliability is still a significant concern. Our past work has found that weapon system operating and support costs are directly correlated to weapon system reliability, which is something the program can affect.
We found that lower reliability causes an imbalance in the relationship between readiness and operating costs, and leads to high costs to maintain readiness, as seen in figure 3 below. We also previously found that reliability problems identified in DOD weapon systems resulted in cost overruns and schedule delays. DOD and the contractor use various measures to track and improve F-35 reliability, including average flying hours between failures, which is defined as the number of flying hours achieved divided by the number of failures incurred. As indicated in figure 4, the conventional takeoff and landing variant and the short takeoff and vertical landing variant were not meeting expected reliability as of September 2013, while the carrier-suitable variant was performing better than expected. DOT&E’s recent report noted concerns about the program’s ability to achieve its reliability goals by the time each of the F-35 variants reaches maturity—defined as 75,000 flight hours for the CTOL and STOVL variants and 50,000 flight hours for the CV. DOT&E also noted that the F-35 design is becoming more stable, and although the program still has a large number of flight hours to go until system maturity, additional reliability growth is not likely to occur without a focused, aggressive, and well-resourced effort. F-35 manufacturing has improved and the contractor’s management of its suppliers is evolving. As the number of aircraft in production has increased, learning has taken place and manufacturing efficiency has improved. For example, the prime contractor has seen reductions in overall labor hours needed to manufacture the aircraft. The number of F-35 aircraft produced and delivered annually by the prime contractor has steadily increased since the first low rate production aircraft were delivered in 2011. In 2013, the contractor delivered 35 aircraft to the government, 5 more than it delivered in 2012 and 26 more than it delivered in 2011.
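The reliability metric described above reduces to a simple calculation. The sketch below is illustrative only; the hour and failure counts are hypothetical, not program data.

```python
# Minimal sketch of the "average flying hours between failures" metric:
# flying hours achieved divided by failures incurred.
# The numbers used below are hypothetical, not actual F-35 program data.

def mean_flight_hours_between_failures(flying_hours: float, failures: int) -> float:
    """Average flying hours achieved per failure incurred."""
    if failures == 0:
        return float("inf")  # no failures observed yet
    return flying_hours / failures

# Example: a hypothetical fleet with 5,000 flying hours and 1,250 failures
print(mean_flight_hours_between_failures(5000, 1250))  # 4.0
```

A rising value over successive reporting periods would indicate the kind of reliability growth toward maturity goals that the report describes.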
The prime contractor has put in place a supplier management system to oversee key supplier performance, allowing it to identify poor performers and take appropriate action to address issues such as part shortages and poor quality. According to contractor officials, actions taken as a result of this system contributed to improvements in supplier performance over the past year. The prime contractor continues to gain efficiencies in the manufacturing process as it learns more about manufacturing the aircraft. Reductions in the amount of time spent on work completed outside of an aircraft’s designated work station (“out-of-station” work) have contributed to overall labor hour reductions. Aircraft delivered in 2012 averaged about 93 hours of out-of-station work per aircraft, while in 2013 about 8 hours of out-of-station work were expended per aircraft on average. While these gains in efficiency have moved the program closer to meeting its established labor hour goals, there is still a long way to go. In 2013, the prime contractor was unable to reach labor hour goals for both the CTOL and STOVL variants. By the end of 2014, the prime contractor expects to significantly reduce the average labor hours to produce the CTOL and STOVL variants. However, in order to meet its goal, the program will have to reduce the average number of hours per aircraft for CTOL production by nearly 20,000 and the average number of hours per aircraft for STOVL production by more than 14,000 over the next year. While challenging, this goal appears to be achievable given that the program has been able to reduce the average labor hours for CTOL and STOVL production by more than 20,000 hours annually since 2011. Figure 5 identifies the prime contractor’s trend in reduction of labor hours since the beginning of low-rate initial production as well as the contractor’s plans for 2014.
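The achievability claim above rests on comparing the required one-year reductions against the demonstrated annual reduction rate. The sketch below uses the report's rounded figures; the variable names are illustrative assumptions, not program terminology.

```python
# Rough feasibility check of the 2014 labor-hour goals described above.
# Figures are the report's rounded numbers; names are illustrative.

required_reduction = {"CTOL": 20_000, "STOVL": 14_000}  # hours per aircraft, over the next year
demonstrated_annual_reduction = 20_000                  # hours per aircraft per year since 2011

for variant, needed in required_reduction.items():
    feasible = needed <= demonstrated_annual_reduction
    print(f"{variant}: needs {needed:,} hours; "
          f"{'within' if feasible else 'beyond'} the demonstrated annual rate")
```

Both required reductions fall at or under the roughly 20,000-hour annual rate the program has demonstrated, which is the basis for the report's judgment that the goal appears achievable.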
As manufacturing efficiency has improved, the prime contractor has also been able to increase throughput, delivering more aircraft year over year—9 in 2011, 30 in 2012, and 35 in 2013. Over the past year, the prime contractor continued to deliver aircraft closer to contracted delivery dates. Last year we found that deliveries averaged 11 months late but had improved considerably, with deliveries in 2013 averaging about 5 months late. While deliveries in 2013 were later than planned, the trend continued to move in the right direction. Figure 6 tracks the actual delivery dates against the dates specified in the contract. The prime contractor is responsible for managing a complex supply chain made up of a large number of national and international suppliers. Currently, the prime contractor oversees about 1,500 domestic suppliers and 80 international suppliers spread across 11 countries. Figure 7 identifies those countries participating in F-35 production. The number of suppliers has grown significantly over the past three years. Since 2011, 153 new suppliers have been added to the supply chain; 67 of those were added between 2012 and 2013. The prime contractor expects more suppliers to be added as production increases and the program progresses. The prime contractor’s insight into the performance of 52 of its key suppliers through its Supplier Integration Management system has led to actions that have improved supplier performance. The system tracks supplier performance data in 23 areas including, but not limited to, cost growth, parts shortage occurrences, and the number of corrective action reports filed. That data is reviewed and scored on a monthly basis, with each supplier receiving an overall score based on its performance. According to the prime contractor, a score of 80 or above, out of 100, is considered good performance. All 52 of the key suppliers tracked using this system were considered good performers as of December 2013.
In addition, 15 showed improvement in performance over the last year. According to contractor officials, the system identifies poor performers, who are then counseled, and corrective actions are identified and implemented. For example, according to officials from the prime contractor, one supplier was identified through this process as having a large number of parts that did not conform to specifications. The prime contractor held meetings with and provided direction to that supplier. As a result of the prime contractor’s actions, the supplier’s performance has improved over the last year. In addition, officials from the prime contractor have identified part shortages—parts that are late to production need—as a major concern. The number of part shortages has slightly increased over the last year. Although the root cause of the shortages is still being assessed, officials from the prime contractor stated they are currently working on ways to improve part availability. Since the F-35 program restructuring was completed in March 2012, acquisition cost and schedule estimates have remained relatively stable, and the program has made progress in key areas. However, persistent software problems have slowed progress in mission systems flight testing, which is critical to delivering the warfighting capabilities expected by the military services. These persistent delays put the program’s development cost and schedule at risk. As a result, DOT&E now projects that the warfighting capabilities expected by the Marine Corps in July 2015 will likely not be delivered on time, and could be delayed by as much as 13 months. This means that the Marine Corps may initially receive less capable aircraft than it expects, and if progress in mission systems software testing continues to be slower than planned, Air Force and Navy initial operational capabilities may also be affected.
The program may also have to extend its overall developmental flight test schedule, which would increase concurrency between testing and production and could result in additional development cost growth. In addition to software concerns, the current funding plans may be unaffordable, given current budget constraints. This situation could worsen if unit cost targets are not met. Finally, the estimated cost of operating and supporting the fleet over its life-cycle continues to be high and could increase further if aircraft reliability goals are not met. DOD has already made a number of difficult decisions to put the F-35 on a more sound footing. More such decisions may lie ahead. For example, if software testing continues to be delayed, if funding falls short of expectations, or if unit cost targets cannot be met, DOD may have to make decisions about whether to proceed with production as planned with less capable aircraft or to alter the production rate. Also, if reliability falls short of goals, DOD may have to make decisions about other ways to reduce sustainment costs, such as reduced flying hours. Eventually, DOD will have to make contingency plans for these and other issues. At this point, we believe the most pressing issue is the effect software testing delays are likely to have on the capabilities of the initial operational aircraft that each military service will receive. In order to make informed decisions about weapon system investments and future force structure, it is important that Congress and the services have a clear understanding of the capabilities that the initial operational F-35 aircraft will possess. Due to the uncertainty surrounding the delivery of F-35 software capabilities, we recommend that the Secretary of Defense conduct an assessment of the specific capabilities that realistically can be delivered and those that will not likely be delivered to each of the services by their established initial operational capability dates. 
The results of this assessment should be shared with Congress and the military services as soon as possible, but no later than July 2015. DOD provided comments on a draft of this report, which are reprinted in appendix III. DOD concurred with our recommendation. We are sending copies of this report to appropriate congressional committees; the Secretary of Defense; the Secretaries of the Air Force, Army, and Navy; the Commandant of the Marine Corps; and the Director of the Office of Management and Budget. The report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or sullivanm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff members making key contributions to this report are listed in appendix IV.

Key program event: Start of system development and demonstration approved.
Primary GAO message: Critical technologies needed for key aircraft performance elements are not mature. The program should delay the start of system development until critical technologies are mature to acceptable levels.
DOD response and actions: DOD did not delay the start of system development and demonstration, stating that the technologies were at acceptable maturity levels and that it would manage risks in development.

Key program event: The program undergoes a re-plan to address higher than expected design weight, which added $7 billion and 18 months to the development schedule.
Primary GAO message: We recommend that the program reduce risks and establish an executable, knowledge-based business case with an evolutionary acquisition strategy.
DOD response and actions: DOD partially concurred but did not adjust its strategy, believing that its approach balanced cost, schedule, and technical risk.

Key program event: The program sets in motion a plan to enter production in 2007, shortly after first flight of the non-production-representative aircraft, with less than 1 percent of testing complete.
Primary GAO message: We recommend that the program delay investing in production until flight testing shows that the JSF performs as expected.
DOD response and actions: DOD partially concurred but did not delay the start of production because it believed the risk level was appropriate. Congress reduced funding for the first two low-rate production buys, thereby slowing the ramp up of production.

Primary GAO message: Progress is being made, but concerns remain about undue overlap in testing and production. We recommend limiting annual production quantities to 24 a year until flying qualities are demonstrated.
DOD response and actions: DOD did not concur, believing that the program had an acceptable level of concurrency and an appropriate acquisition strategy.

Key program event: DOD implemented a Mid-Course Risk Reduction Plan to replenish management reserves from about $400 million to about $1 billion by reducing test resources.
Primary GAO message: We believe the new plan increased risks, and DOD should revise it to address concerns about testing, management reserves, and manufacturing. We determined that the cost estimate was not reliable and that a new cost estimate and schedule risk assessment were needed.
DOD response and actions: DOD did not revise the risk plan or restore testing resources, stating that it would monitor the new plan and adjust it if necessary. Consistent with a report recommendation, a new cost estimate was prepared, but DOD did not conduct a risk and uncertainty analysis.

Key program event: The program increases the cost estimate and adds a year to development but accelerates the production ramp up. An independent DOD cost estimate (JET I) projects even higher costs and further delays.
Primary GAO message: Moving forward with an accelerated procurement plan and the use of cost-reimbursement contracts is very risky. We recommend that the program report on the risks and mitigation strategy for this approach.
DOD response and actions: DOD agreed to report its contracting strategy and plans to Congress and to conduct a schedule risk analysis. The program reported completing the first schedule risk assessment, with plans to update it semiannually.

Key program event: The Department announces a major program restructuring, reducing procurement and moving to fixed-price contracts. The program is restructured to reflect the findings of a recent independent cost team (JET II) and an independent manufacturing review team. As a result, development funds increase, test aircraft are added, the schedule is extended, and the early production rate decreases.
Primary GAO message: Costs and schedule delays inhibit the program's ability to meet needs on time. We recommend that the program complete a full comprehensive cost estimate and assess warfighter and initial operational capability (IOC) requirements. We suggest that Congress require DOD to tie annual procurement requests to demonstrated progress.
DOD response and actions: DOD continued restructuring, increasing test resources and lowering the production rate. Independent review teams evaluated aircraft and engine manufacturing processes. Cost increases later resulted in a Nunn-McCurdy breach. The military services are reviewing capability requirements, as we recommended.

Key program event: Restructuring continues with additional development cost increases, schedule growth, further reductions in near-term procurement quantities, and a decreased rate for future production. The Secretary of Defense places the STOVL variant on a two-year probation, decouples STOVL from the other variants, and reduces STOVL production plans for fiscal years 2011 to 2013.
Primary GAO message: The restructuring actions are positive and, if implemented properly, should lead to more achievable and predictable outcomes. Concurrency of development, test, and production is substantial and poses risk to the program. We recommend that the program maintain funding levels as budgeted, establish criteria for STOVL probation, and conduct an independent review of software development, integration, and test processes.
DOD response and actions: DOD concurred with all three recommendations. DOD lifted the STOVL probation, citing improved performance. Subsequently, DOD further reduced procurement quantities, decreasing funding requirements through 2016. The initial independent software assessment began, and ongoing reviews are planned to continue through 2012.

Key program event: The program establishes a new acquisition program baseline and approves the continuation of system development, increasing costs for development and procurement and extending the period of planned procurements by 2 years.
Primary GAO message: Extensive restructuring places the program on a more achievable course. Most of the program's instability continues to stem from the concurrency of development, test, and production. We recommend that the Cost Assessment and Program Evaluation office conduct an analysis of the impact of lower annual funding levels and that the JSF program office conduct an assessment of the supply chain and transportation network.
DOD response and actions: DOD partially concurred with conducting an analysis of the impact of lower annual funding levels and concurred with assessing the supply chain and transportation network.

Primary GAO message: The program is moving in the right direction but must fully validate design and operational performance and, at the same time, make the system affordable. We did not make recommendations to DOD.
DOD response and actions: DOD agreed with GAO's observations.

To assess the program's ongoing development and testing, we reviewed the status of software development and integration and contractor management improvement initiatives. We also interviewed officials from the program office, the prime contractor, and the Defense Contract Management Agency (DCMA) to discuss current development status and software releases. In addition, we compared management objectives to progress made on those objectives during the year. We obtained and analyzed data on flights and test points, both planned and accomplished, during 2013. We compared test progress against the total planned for the program.
We also reviewed the Director, Operational Test and Evaluation's annual F-35 assessment. In addition, we interviewed officials from the F-35 program office and aircraft prime contractor to discuss development test plans and achievements. We also collected information from the program office, prime contractor, and Department of Defense (DOD) test pilots regarding the program's technical risks, including the helmet-mounted display, autonomic logistics information system, carrier arresting hook, and structural durability. To assess the program's funding and long-term affordability, we reviewed financial management reports, annual Selected Acquisition Reports, and monthly status reports available as of December 2013. In addition, we reviewed total program funding requirements from the Defense Acquisition Executive Summary. We used these data to project annual funding requirements through the expected end of the F-35 acquisition in 2037. We also analyzed the fiscal year 2014 President's Budget data to identify the current status of unit costs for each variant and the differences in these costs since 2012. We reviewed the Office of the Secretary of Defense's F-35 Joint Strike Fighter Concurrency Quick Look Review, and discussed and analyzed reported concurrency costs with the prime contractor and program office. We obtained and discussed the life-cycle operating and support costs through the program's Selected Acquisition Report and projections made by the Cost Assessment and Program Evaluation (CAPE) office. We identified changes in cost and interviewed officials from the program office, prime contractor, Naval Air Systems Command, and the CAPE office regarding reasons for these changes. We also discussed future plans of DOD and the prime contractor to try to reduce life-cycle sustainment costs with officials from the prime contractor, program office, and CAPE. We analyzed reliability data and discussed these issues with program and prime contractor officials.
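The funding projection described above can be illustrated with a simple profile calculation. This is a sketch only: the yearly figures below are hypothetical placeholders (chosen so the average matches the $12.6 billion annual requirement the report cites), not actual F-35 budget data.

```python
# Sketch of a funding-profile summary: given projected annual funding
# requirements, report the average annual requirement and the peak year.
# The dollar figures are hypothetical, not actual F-35 budget data.

def funding_profile(annual_requirements):
    """Return (average, peak) for a list of annual funding requirements."""
    average = sum(annual_requirements) / len(annual_requirements)
    return average, max(annual_requirements)

# Hypothetical requirements in billions of dollars:
profile = [10.0, 12.0, 15.0, 14.5, 11.5]
avg, peak = funding_profile(profile)
print(f"average ${avg:.1f} billion per year, peaking at ${peak:.1f} billion")
```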
To assess manufacturing progress, we obtained and analyzed data related to aircraft delivery rates and work performance data through the end of calendar year 2013. These data were compared to program objectives identified in these areas and used to identify trends. We reviewed data and briefings provided by the program office, prime contractor, and DCMA in order to identify issues in manufacturing processes. We discussed reasons for delivery delays and plans for improvement with the prime contractor. We also toured the prime contractor's manufacturing facility in Fort Worth, Texas, and collected and analyzed data related to aircraft quality through December 2013. We reviewed and discussed information on the prime contractor's global supply chain, including its management processes for oversight. We assessed the reliability of DOD and contractor data by reviewing existing information about the data and interviewing agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purposes of this report. We conducted this performance audit from June 2013 to March 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, the following staff members made key contributions to this report: Travis Masters, Assistant Director; Marvin Bonner; Peter Anderson; Megan Porter; Roxanna Sun; and Abby Volk.

F-35 Joint Strike Fighter: Restructuring Has Improved the Program, but Affordability Challenges and Other Risks Remain. GAO-13-690T. Washington, D.C.: June 19, 2013.
F-35 Joint Strike Fighter: Program Has Improved in Some Areas, but Affordability Challenges and Other Risks Remain. GAO-13-500T. Washington, D.C.: April 17, 2013.
F-35 Joint Strike Fighter: Current Outlook Is Improved, but Long-Term Affordability Is a Major Concern. GAO-13-309. Washington, D.C.: March 11, 2013.
Joint Strike Fighter: DOD Actions Needed to Further Enhance Restructuring and Address Affordability Risks. GAO-12-437. Washington, D.C.: June 14, 2012.
Joint Strike Fighter: Restructuring Added Resources and Reduced Risk, but Concurrency Is Still a Major Concern. GAO-12-525T. Washington, D.C.: March 20, 2012.
Joint Strike Fighter: Restructuring Places Program on Firmer Footing, but Progress Is Still Lagging. GAO-11-677T. Washington, D.C.: May 19, 2011.
Joint Strike Fighter: Restructuring Places Program on Firmer Footing, but Progress Still Lags. GAO-11-325. Washington, D.C.: April 7, 2011.
Joint Strike Fighter: Restructuring Should Improve Outcomes, but Progress Is Still Lagging Overall. GAO-11-450T. Washington, D.C.: March 15, 2011.
Tactical Aircraft: Air Force Fighter Force Structure Reports Generally Addressed Congressional Mandates, but Reflected Dated Plans and Guidance, and Limited Analyses. GAO-11-323R. Washington, D.C.: February 24, 2011.
Joint Strike Fighter: Assessment of DOD's Funding Projection for the F136 Alternate Engine. GAO-10-1020R. Washington, D.C.: September 15, 2010.
Joint Strike Fighter: Accelerating Procurement before Completing Development Increases the Government's Financial Risk. GAO-09-303. Washington, D.C.: March 12, 2009.
Joint Strike Fighter: Recent Decisions by DOD Add to Program Risks. GAO-08-388. Washington, D.C.: March 11, 2008.
Joint Strike Fighter: Progress Made and Challenges Remain. GAO-07-360. Washington, D.C.: March 15, 2007.
Joint Strike Fighter: DOD Plans to Enter Production before Testing Demonstrates Acceptable Performance. GAO-06-356. Washington, D.C.: March 15, 2006.
Tactical Aircraft: Opportunity to Reduce Risks in the Joint Strike Fighter Program with Different Acquisition Strategy. GAO-05-271. Washington, D.C.: March 15, 2005.
The F-35 Lightning II, also known as the Joint Strike Fighter, is DOD's most costly and ambitious acquisition program. The program seeks to develop and field three aircraft variants for the Air Force, Navy, and Marine Corps and eight international partners. The F-35 is integral to U.S. and international plans to replace existing fighter aircraft and support future combat operations. Total U.S. planned investment in the F-35 program is approaching $400 billion to develop and acquire 2,457 aircraft through 2037, plus hundreds of billions of dollars in long-term spending to operate and maintain the aircraft. The National Defense Authorization Act for Fiscal Year 2010 mandated that GAO review the F-35 acquisition program annually for 6 years. In this, GAO's fifth annual report on the F-35, GAO assesses the program's (1) ongoing development and testing, (2) long-term affordability, and (3) manufacturing progress. GAO reviewed and analyzed manufacturing data through December 2013, program test plans, and internal DOD analyses, and spoke with DOD, program, and contractor officials. Delays in developmental flight testing of the F-35's critical software may hinder delivery of the warfighting capabilities the military services expect. F-35 developmental flight testing comprises two key areas: mission systems and flight sciences. Mission systems testing verifies that the software-intensive systems that provide critical warfighting capabilities function properly and meet requirements, while flight sciences testing verifies the aircraft's basic flying capabilities. Challenges in development and testing of mission systems software continued through 2013, due largely to delays in software delivery, limited capability in the software when delivered, and the need to fix problems and retest multiple software versions. The Director of Operational Test and Evaluation (DOT&E) predicts delivery of warfighting capabilities could be delayed by as much as 13 months. 
Delays of this magnitude will likely limit the warfighting capabilities that are delivered to support the military services' initial operational capabilities—the first of which is scheduled for July 2015—and at this time it is not clear what those specific capabilities will be because testing is still ongoing. In addition, delays could increase the already significant concurrency between testing and aircraft procurement and result in additional cost growth. Without a clear understanding of the specific capabilities that will initially be delivered, Congress and the military services may not be able to make fully informed resource allocation decisions. Flight sciences testing has seen better progress, as the F-35 program has been able to accomplish nearly all of its planned test flights and test points. Testing of the aircraft's operational capabilities in a realistic threat environment is scheduled to begin in 2015. The program has continued to make progress in addressing some key technical risks. To execute the program as planned, the Department of Defense (DOD) will have to increase funds steeply over the next 5 years and sustain an average of $12.6 billion per year through 2037; for several years, funding requirements will peak at around $15 billion. Annual funding of this magnitude clearly poses long-term affordability risks given the current fiscal environment. The program has been directed to reduce unit costs to meet established affordability targets before full-rate production begins in 2019, but meeting those targets will be challenging as significant cost reductions are needed. Additionally, the most recent cost estimate for operating and supporting the F-35 fleet is more than $1 trillion, which DOD officials have deemed unaffordable. This estimate reflects assumptions about key cost drivers the program can control, like aircraft reliability, and those it cannot control, including fuel costs, labor costs, and inflation rates. 
Reliability is lower than expected for two variants, and DOT&E reports that the F-35 program has limited additional opportunities to improve reliability. Aircraft manufacturing continued to improve in 2013, and management of the supply chain is evolving. As the number of aircraft in production has increased, critical learning has taken place and manufacturing efficiency has improved. For example, the prime contractor has seen reductions in overall labor hours needed to manufacture the aircraft, as expected. In 2013, the contractor delivered 35 aircraft to the government, 5 more than it delivered in 2012 and 26 more than it delivered in 2011. The prime contractor has put in place a supplier management system to oversee key supplier performance. GAO recommends that DOD assess and identify the specific capabilities that realistically can be delivered to the military services to support their respective initial operational capabilities, and share its findings with the Congress and military services prior to July 2015. DOD concurred with this recommendation.
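The labor-hour reductions noted above follow the pattern that aircraft cost estimators describe with a learning curve. The sketch below uses the classic Wright learning-curve model; the 80-percent slope and the first-unit hours are illustrative assumptions, not F-35 program values.

```python
import math

# Wright learning-curve sketch: each doubling of cumulative output cuts
# unit labor hours by a fixed percentage. The 80-percent slope and the
# first-unit hours below are illustrative assumptions, not F-35 values.

def unit_hours(first_unit_hours, unit_number, slope=0.80):
    """Labor hours to build the nth unit under a Wright learning curve."""
    b = math.log(slope, 2)              # learning exponent (negative)
    return first_unit_hours * unit_number ** b

h1 = unit_hours(100_000, 1)   # 100,000 hours for the first unit
h2 = unit_hours(100_000, 2)   # about 80,000 hours: 80 percent of unit 1
h4 = unit_hours(100_000, 4)   # about 64,000 hours: another doubling
```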
Before 1978, the U.S. airline industry was tightly regulated. The federal government controlled what fares airlines could charge and what cities they could serve. Concerned that government regulation had made the industry inefficient, inhibited its growth, and caused airfares to be too high in many heavily traveled markets involving the nation’s largest communities, the Congress passed the Airline Deregulation Act of 1978. The act phased out the government’s control of fares and service but did not change the government’s role in regulating and overseeing air safety. Opponents of economic deregulation warned that relying on competitive market forces to determine the price, quantity, and quality of domestic air service could adversely affect safety and harm the economies of smaller communities. In 1990, both GAO and the Department of Transportation (DOT) reported that fares had fallen since deregulation at airports serving small and medium-sized communities as well as at airports serving large communities. Studies by DOT and others have differed in their conclusions about deregulation’s impact on airline service and safety. Between 1938 and 1978, the Civil Aeronautics Board (CAB) regulated the airline industry, controlling the fares airlines could charge and the markets they could enter. Legislatively mandated to promote and develop the air transportation system, CAB believed that passengers traveling shorter distances—more typical of travel from small and medium-sized communities—would not choose air travel if they had to pay the full cost of service. Thus, in keeping with its mandate, CAB set fares relatively lower in short-haul markets and higher in long-haul markets than would be warranted by costs. In effect, long-distance travel subsidized short-distance markets. In addition, CAB did not allow new airlines to form and compete against the established carriers. 
Concerned that these practices had, among other things, caused fares to be too high in many markets, the Congress passed the Airline Deregulation Act, which the President signed into law on October 24, 1978. The act phased out CAB’s control of domestic air service and placed reliance on competitive market forces to decide fares and service levels. As a result, fares were expected to fall at airports serving large communities, from which many trips are long-distance over heavily traveled routes. However, without the cross-subsidy present under regulation, fares were expected to increase somewhat at airports serving small and medium-sized communities. In addition, it was expected that airlines, free to make their own decisions concerning service, would stop flying to some smaller communities where they could not make a profit and replace jets with smaller turboprop (propeller) aircraft in others because those communities could not economically support jet service. In 1989, the then-Chairman, Senate Committee on Commerce, Science, and Transportation, concerned that people traveling to and from small and medium-sized communities could be paying higher fares as a result of airline deregulation, asked us to compare the trends in airfares at airports serving small and medium-sized communities with the trend for airports serving large communities. Contrary to the Chairman’s expectation, however, we found that the real (adjusted for inflation) fare per passenger mile was 9 percent lower in 1988 than in 1979 at airports serving small communities, 10 percent lower at airports serving medium-sized communities, and about 5 percent lower at airports serving large communities. Fares had declined at 76 of the 112 airports we reviewed (68 percent), including 38 of the 49 airports serving small communities (78 percent). Nevertheless, airports in several small, medium-sized, and large communities experienced increases in fares of over 20 percent. 
We noted that the greatest fare increases tended to be in the Southeast, while the largest fare decreases were in the Southwest. In addition to this study, we have reported on several other issues concerning airfares since deregulation, including the effects of market concentration and the industry’s operating and marketing practices on fares. These reports are listed at the end of this report. “Smaller cities have benefited from the shift to hub and spoke service. Most small cities receive more frequent service than previously, and many now receive service to connecting hubs from more than one major airline or their affiliates, thereby providing the traveler with a choice of airlines and routings to most destinations.” Many other studies have been conducted of deregulation’s impact on airfares and service. While generally concluding that fares overall have declined, the studies have reached different conclusions about the impact on the quantity and quality of service. For example, Morrison and Winston estimated that the lower fares since deregulation save passengers $12.4 billion annually. They also estimated that because of the (1) increased number of flights, (2) efficiencies of the hub-and-spoke networks in connecting smaller communities to the overall aviation system, and (3) resulting savings in travel time, passengers save an additional $10.3 billion a year as a result of deregulation. While other studies generally agree that fares have decreased since deregulation, they point out that the lower fares may have been achieved at the cost of reduced service quantity and quality for many smaller and medium-sized communities and that therefore the overall net benefits of deregulation are less clear. 
Brenner, for example, concluded that service quality has declined for small and medium-sized communities, largely because his research showed that a number of very small communities have lost air service completely and that many small and medium-sized communities are served mostly or entirely by turboprops, as opposed to the jet service they had under regulation. Extensive research has also been conducted on the impact of deregulation on air safety. This body of work commonly acknowledges that since deregulation, the rate of accidents has continued its historic decline. Figure 1.1 shows the sharp decline in the number of airline accidents per million aircraft miles flown since 1960. Although the rate of improvement has slowed in recent years as the number of accidents each year has grown very small, the accident rate for airlines in 1994 (0.004 accidents per million aircraft miles flown) was half the rate in 1978 (0.008 accidents per million aircraft miles flown). Preliminary data for 1995 indicate that the rate increased somewhat, although it remained below the rate in 1978. A study committee sponsored by the National Research Council concluded that the decline in the accident rate has largely been a result of the (1) introduction in the 1960s of more advanced, “second generation” jet aircraft into the U.S. fleet (such as the 727, 737-200, and DC-9) in place of the first generation of jets introduced in the late 1950s (such as the 707 and DC-8) and (2) subsequent advancements in aircraft technology, air traffic control procedures, and pilot training. The committee found little evidence to support concerns that deregulation had negatively affected air safety in general or safety for travelers from small and medium-sized communities in particular. 
Nevertheless, others have come to different conclusions, holding that deregulation has prevented further gains in safety because the increased competitive pressures brought by deregulation have forced airlines to limit spending on maintenance. Rose, for example, demonstrated some correlation between lower profitability and higher accident rates, particularly for smaller airlines. Many of these researchers also believe that for smaller communities, air safety has decreased since deregulation because substituting commuter carriers and turboprops, which have higher accident rates, for larger airlines and jet aircraft at these airports has increased those communities’ accident risk. Although the accident rate for commuter carriers fell by 93 percent between 1978 and 1995 (from 0.270 to 0.019 accidents per million aircraft miles flown), these researchers note that the accident rate for these carriers in 1995 was still more than three times higher than the rate for the larger airlines. Nevertheless, research has been inconclusive to date on whether the increased presence of commuter airlines and turboprops has resulted in more accidents at airports serving small communities. Noting that several years had passed since our comparison of airfares at airports serving small, medium-sized, and large communities, the Chairman, Senate Committee on Commerce, Science, and Transportation, asked us to update our work and to determine whether the regional differences in airfare trends that we previously observed still existed. In addition, expressing concern that deregulation may have adversely affected small and medium-sized communities to the extent that airlines eliminated service or replaced jets with turboprops and noting that opinions differed on this subject, the Chairman requested that we compare the changes in the quantity, quality, and safety of air service since deregulation for airports serving small, medium-sized, and large communities. 
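The rate changes cited above are straightforward to verify. The sketch below recomputes the declines from the rates given in the text (accidents per million aircraft miles flown).

```python
# Recompute the accident-rate declines cited in the text. Rates are
# accidents per million aircraft miles flown.

def percent_decline(old_rate, new_rate):
    """Percentage decline from old_rate to new_rate."""
    return (old_rate - new_rate) / old_rate * 100

commuter = percent_decline(0.270, 0.019)   # commuter carriers, 1978 to 1995
airline = percent_decline(0.008, 0.004)    # larger airlines, 1978 to 1994
print(f"commuter decline: {commuter:.0f} percent")  # 93 percent
print(f"airline decline: {airline:.0f} percent")    # 50 percent: the rate halved
```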
In updating our prior comparison of airfares, we analyzed data on fares for the same 112 airports that we had reported on previously. Specifically, we examined the trends in the average yields—fares per passenger mile—between 1979, 1984, 1988, 1991, 1994, and the first half of 1995 for travel out of 49 airports serving small communities, 38 airports serving medium-sized communities, and 25 airports serving large communities. In 1994, these airports accounted for 4.7 million (66 percent) of the 7.1 million domestic airline departures and 320.6 million (67 percent) of the 481.7 million domestic airline enplanements in the United States. In our prior report, we examined the trends using fare data for 1979, 1984, and 1988 for these communities. We updated these trends using data for 1991 and 1994 because (1) 1991 represented the mid-point between 1988 and 1994 and (2) the 1994 fare data were the most current full-year data available at the time of our review. The data for the first 6 months of 1995 provided us with the most current data available. To provide consistent, comparable information, we identified and used the same routes (origin and destination airport combinations) that we reviewed in our prior work. We also adjusted the fare data for inflation, using the consumer price index, so that the fares in each of the years reflect 1994 dollar values. As in our previous study, we used DOT’s “Passenger Origin-Destination Survey” (O&D Survey). The O&D Survey contains data reported quarterly to DOT by airlines from a 10-percent sample of all tickets sold. Because the estimate of the fare per passenger mile is developed from a statistical sample, it has a sampling error. The sampling error is the maximum amount by which the estimate obtained from the sample can be expected to differ from the actual fare per passenger mile if the entire universe of tickets were examined. Each sampling error was calculated at the 95-percent confidence level. 
This means the chances are 19 out of 20 that if we reviewed all tickets purchased, the results would differ from the estimate obtained from our sample by less than the sampling error. (App. II provides estimates of fares, and app. III provides the sampling error for each of these estimates.) To determine why regional differences in airfares may exist, we analyzed DOT’s data on airline market shares at each of the 112 airports and discussed with DOT analysts and airline representatives how the presence of different carriers may affect fares. To determine the extent to which economic changes could explain any observed regional differences, we analyzed data provided by the Bureau of Economic Analysis on economic growth between 1979 and 1993, which was the latest year for which data were available, for each of the 112 communities served by the airports we reviewed. Appendix VII provides additional details on the scope and methodology of our analyses of airfares. To compare changes in the quantity of air service since deregulation at airports serving small, medium-sized, and large communities, we analyzed data for our 112 airports for May 1978 and May 1995 from the Official Airline Guide (OAG), a privately published list of all scheduled commercial flights. Specifically, we documented changes in the total number of departures as well as the total number of available seats for each airport. We examined data from 1978 because they provided information on air service before deregulation and data from 1995 because they were the latest available at the time of our review. We chose May to avoid the typical seasonal airline schedule changes that occur in the winter and summer months. We used the OAG as our primary data source because DOT’s database on total annual departures by airport contains only the data reported by the airlines that operate aircraft with more than 60 seats. 
As a result, DOT’s data on airport operations do not provide information on departures by commuter carriers or air taxis. However, we analyzed DOT’s data on annual departures by the larger airlines and the Federal Aviation Administration’s (FAA) estimates of annual commuter and air taxi departures for each airport to confirm the results of our analyses of the OAG data. To compare changes in the quality of air service since deregulation at airports serving small, medium-sized, and large communities, we analyzed the OAG data described above for the 112 airports in our sample. Specifically, for each airport we calculated the changes in a number of indicators of service quality, including the number of destinations served by nonstop and one-stop flights and the percentage of jet departures. We then summarized these calculations for the three airport groups and compared the trends in the various quality indicators to gain an overall perspective on how service quality has changed. We did not, however, develop a formula that would weight these indicators and provide an overall “quality score” for each airport because developing such weights requires subjective judgments of the relative importance of each indicator. To compare the trends in the safety of air service since deregulation at small, medium-sized, and large community airports, we analyzed National Transportation Safety Board (NTSB) data on airline, commuter, and air taxi accidents (1) that occurred at or near each of the airports in our sample and (2) for which the airport in our sample was the origin or destination of the flight. Using these data, we calculated accident rates per 100,000 departures for each airport from 1978 through 1994. We then calculated the overall rate for each of the three airport groups. We discussed a draft of this report with senior DOT officials, including the Director, Office of Aviation and International Economics. 
They agreed with our findings concerning the trends in airfares, service, and safety since deregulation and suggested no revisions to the report. Additional details on their comments and our response are provided at the end of chapter 3. We conducted our review from August 1995 through March 1996 in accordance with generally accepted government auditing standards. Overall, airfares, adjusted for inflation, have declined since deregulation at airports serving small, medium-sized, and large communities. The largest reductions have occurred at airports located in the West and Southwest, regardless of the community’s size. Increased competition, stimulated largely by the entry of low-cost, low-fare airlines at these airports, has been a key factor in the decline in fares. By contrast, some airports in our sample, particularly those serving small and medium-sized communities in the Southeast and Appalachia, have experienced large increases in fares since deregulation. At these airports, one or two larger, higher-cost carriers account for the vast majority of passenger enplanements. Until very recently, these airlines have faced relatively little competition, particularly from low-cost new entrant airlines. The geographic disparity in airfare trends also stems from several adverse factors, such as airport congestion and poor weather conditions, that contribute to higher costs and are more prevalent in the eastern United States. Over 5 years ago, we reported that real airfares (adjusted for inflation) had fallen between 1979 and 1988 not only at airports serving large communities, as was expected, but also at airports serving small and medium-sized communities. As figure 2.1 shows, real fares through the first 6 months of 1995 for all three airport groups remained lower than they were in 1979. 
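The "real fares" compared throughout this chapter are nominal fares deflated by a price index so that fares from different years are stated in the same base-year dollars. A minimal sketch of that adjustment, using illustrative index values rather than actual deflators:

```python
# Sketch of the inflation adjustment behind "real fares": a nominal fare
# is deflated by a price index so fares from different years are stated
# in the same base-year dollars. Index values and fares are illustrative,
# not actual CPI figures.
def real_fare(nominal_fare, index_in_year, index_in_base_year):
    """Restate a nominal fare in base-year dollars."""
    return nominal_fare * index_in_base_year / index_in_year

def pct_change(old, new):
    """Percentage change from old to new."""
    return (new - old) / old * 100.0

# Illustrative: a $250 fare when the index stands at 200, compared with a
# $150 fare in the base year when the index was 100
then_fare = 150.0
now_in_base_dollars = real_fare(250.0, 200.0, 100.0)
print(round(pct_change(then_fare, now_in_base_dollars), 1))  # -16.7
```

In this hypothetical case the nominal fare rose from $150 to $250, yet the real fare fell by about 16.7 percent because prices overall rose faster.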
When full-year data for 1979 and 1994 are compared, fares were 8.5 percent lower at airports serving small communities, 10.9 percent lower at airports serving medium-sized communities, and 8.3 percent lower at airports serving large communities. However, as figure 2.1 also shows, since 1988 fares have risen slightly at airports serving small and medium-sized communities and fallen slightly at airports serving large communities. As figure 2.1 also shows, despite the overall trend toward lower fares since deregulation, fares at small- and medium-sized-community airports have been consistently higher than fares at large-community airports. It is generally accepted that fares tend to be lower at large-community airports because of the economies associated with traffic volume and trip distance. As the volume of traffic and average length of the trip increase, the average cost per passenger mile decreases, allowing for lower fares. Airports serving small and medium-sized communities tend to have fewer heavily traveled routes and shorter average trip distances, resulting in higher average costs and higher fares per passenger mile than those of large-community airports. Nevertheless, fares fell following deregulation for most of the airports that we reviewed. (App. I provides a summary of the overall changes in both fares and service at the airports in our review, and app. II shows the specific fare trends at each airport.) Of the 112 airports in our sample, 73 airports experienced a decline in fares. Specifically, fares declined at 36 of the 49 airports serving small communities, 19 of the 38 airports serving medium-sized communities, and 18 of the 25 airports serving large communities. The general trend toward lower fares has largely resulted from increased competition. 
Between the onset of deregulation and 1994, the average number of large airlines competing at the small-community airports that we reviewed increased from 1.8 to 2.8, and the average number of commuter carriers increased from 2.5 to 4.5. Similarly, the average number of large airlines competing at airports serving medium-sized communities increased from 2.8 to 4.3, and the average number of commuter carriers increased from 3.3 to 4.6. Finally, the average number of large airlines competing at the large-community airports that we reviewed increased from 9.0 to 11.2, although the number of commuter carriers decreased from 11.3 to 6.4. In addition, the transition to hub-and-spoke systems since deregulation has increased competition at many airports serving small and medium-sized communities. By bringing passengers from multiple origins (the spokes) to a common point (the hub) and placing them on new flights to their ultimate destinations, these systems provide for more frequent flights and more travel options than did the direct “point-to-point” systems that predominated before deregulation. Thus, instead of having a choice of a few direct flights between their community and a final destination, travelers departing from a small community might now choose from among many flights from several airlines through different hubs to that destination. While real fares fell at the majority of airports, fares rose—in some cases substantially—for 33 of the 112 airports. Table 2.1 shows the five airports of those we reviewed that had the largest fare decreases and the five airports with the largest fare increases. As table 2.1 indicates, those airports experiencing the largest increases in fares serve small and medium-sized communities and have had a decrease or little change in the number of large airlines and commuter carriers. 
Conversely, the airports experiencing the largest decrease in fares since deregulation have had a substantial increase in the number of large airlines and, to a lesser extent, an increase in the number of commuter carriers. Since deregulation, the largest decreases in fares have occurred at airports in the West and Southwest, and the largest increases in fares have occurred at airports in the Southeast and Appalachian region. In the West and Southwest, fares have declined largely because of increased competition caused by the entry of new airlines, particularly low-cost airlines such as Southwest and Reno Air. Over the last decade, high economic growth, relatively little airport congestion, and more favorable weather conditions have attracted these airlines to serve western airports. By contrast, competition at airports serving the Southeast and Appalachia has been more limited because (1) low-cost carriers have generally avoided the East because of its slower growth, airport congestion, and harsher weather and (2) one or two relatively high-cost carriers have dominated the routes to and from these airports. Although during 1994 one low-cost airline initiated operations in the East and subsequently failed, other low-cost airlines, such as Valujet, have emerged to compete with the higher-cost carriers in some eastern markets. However, data are not yet available to determine the extent to which these low-cost carriers have affected fare trends in the East. As figure 2.2 shows, the airports in our sample that experienced the largest fare decreases following deregulation are predominantly located in the West and Southwest. These substantial declines in real fares were experienced by airports serving large communities as well as by those serving small and medium-sized communities. 
Of the 15 airports in our sample for which fares declined by more than 20 percent between 1979 and 1994, 5 serve small communities, 5 serve medium-sized communities, and 5 serve large communities. By contrast, the largest fare increases occurred at airports that serve small and medium-sized communities in the Southeast and Appalachia (see fig. 2.2). Of the eight airports for which fares have increased by more than 20 percent since 1979, three serve small communities, four serve medium-sized communities, and one serves a large community. Over the last 17 years, a number of new airlines with very low operating costs—including America West, American Trans Air, Markair, Morris Air, Reno Air, and Southwest—have begun interstate air service, primarily concentrating their operations in the West. These low-cost airlines have focused on the West because of that region’s higher economic growth rates, lesser airport congestion, and more favorable weather. Because of their style of service—high frequency between a limited number of city-pairs and few amenities—these airlines have operating costs that are about 30 percent lower than those of larger airlines such as American and United. As a result, these low-cost airlines are able to charge lower fares. Further downward pressure on fares is caused by the competitive responses of the larger carriers. To date, these responses have ranged from substantial fare cuts in the case of Northwest to the creation by United in late 1994 of a low-cost “airline within an airline”—called Shuttle by United—to compete with Southwest in key markets on the West Coast. We found the presence of low-cost carriers and the resulting increase in competition to be a common factor at the airports in our sample that have experienced the largest fare decreases since deregulation. 
In 1994, low-cost airlines accounted for at least 10 percent, and often much more, of the total enplanements at 14 of the 15 airports that experienced the largest decreases in fares (see table 2.2). In part, these low-cost competitors have been attracted by the relatively strong economic growth at the communities these airports serve. Between 1979 and 1993, the average annual growth in population, personal income, and employment at these 15 communities substantially exceeded that for the other 97 communities in our sample (see table 2.3). In particular, low-cost airlines have been attracted to the area of strongest economic growth: the Southwest. For example, in Phoenix, Arizona—where fares have fallen by 32 percent since deregulation—the average annual growth in population between 1979 and 1993 was 3.0 percent; in personal income, 3.7 percent; and in employment, 3.7 percent. Moreover, for rapidly growing Las Vegas, Nevada—where fares also fell by 32 percent—the average annual rate of growth exceeded 5.0 percent for all three measures. By contrast, the largest fare increases occurred in the Southeast and Appalachia, where competition has been lacking and economic growth has been comparatively slower. At all eight airports where fares increased by more than 20 percent, Delta and USAir—airlines that have historically had among the highest operating costs in the industry—accounted for the overwhelming majority of enplanements in 1994 (see table 2.4). In part, there has been little new entry at these eight airports because of the slower growth rates for the communities these airports serve. The average annual rates of growth during this period were only 0.1 percent for population, 1.3 percent for personal income, and 0.9 percent for employment. Overall, the average airfare rose slightly during the first 6 months of 1995 compared with 1994 at all three categories of airports. 
At small-community airports, real fares rose by 2.6 percent; at medium-sized-community airports, by 2.1 percent; and at large-community airports, by 2.5 percent. Despite these increases, 59 of the 112 airports in our sample continued to have lower real airfares than they had in 1979. Specifically, when the data on the first half of 1995 were factored in, real fares since deregulation were lower at 28 of the 49 small-community airports, 17 of the 38 medium-sized-community airports, and 14 of the 25 large-community airports. The largest fare increases during the first 6 months of 1995 occurred in the East, primarily at small- and medium-sized communities in North Carolina and South Carolina. These fare increases occurred largely because of a loss of competition. In early 1994, Continental Airlines created a separate, low-cost service in the East similar to the operations of the low-cost carriers in the West and Southwest. Continental’s service—commonly referred to as “Calite”—failed and was terminated in early 1995. As table 2.5 shows, all 10 airports that experienced the largest fare increases between 1994 and the first 6 months of 1995 were either served by Calite during 1994 or located near an airport served by Calite. According to DOT analysts and Continental representatives, the termination of Calite service at three airports—Greensboro/High Point, North Carolina; Charleston, South Carolina; and Greenville, South Carolina—greatly lessened overall price competition in the geographical area within about 100 miles of those airports. As a result of the higher fares caused by the loss of Calite service or nearby competition from Calite, the trend toward lower fares since deregulation was reversed at all but 1 of the 10 airports (see table 2.5). According to Continental’s representatives, Calite failed largely because the airline could not successfully compete against the dominant positions of Delta and USAir. 
Other airline representatives claimed that Calite overextended itself by growing too fast and by attempting to challenge Delta and USAir in too many markets. Since the demise of Calite, however, several other low-cost carriers, such as Valujet and Kiwi, have initiated service in the East. Some industry observers believe that these airlines might succeed because they have focused on a smaller number of markets than Calite did. The most successful of these low-cost carriers to date has been Valujet. After starting service in late 1993 with two airplanes serving three routes, Valujet has grown to 41 aircraft, as of December 1995, serving 25 cities from Atlanta and 11 cities from Washington, D.C. In 1995, it had an operating profit of $107.8 million and an operating profit margin of 29 percent, compared with 9 percent for Delta and 6 percent for both American and United. However, Valujet has begun to experience some of the problems of operating in the East. For example, in late 1995 Valujet was unable to obtain take-off and landing slots at New York’s congested LaGuardia Airport. As a result, it could not begin its planned low-cost, low-fare service between New York and Atlanta. Valujet’s growth has sparked competitive responses from the dominant airlines in the East. Delta, for example, plans to create a separate, low-cost operation of its own in the East starting in mid- to late 1996. However, largely because (1) most of Valujet’s growth occurred in the second half of 1995 and (2) the competitive responses of other airlines are only beginning to unfold, data are not yet available to determine the extent to which Valujet has affected fares in the East. Overall, the quantity of air service has increased since deregulation at small-, medium-sized, and large-community airports. The largest growth has occurred at large-community airports. Not all the airports that we reviewed, however, shared in this general trend toward more air service. 
Some airports—particularly those serving small and medium-sized communities in the Upper Midwest—have less air service today than they did under regulation. Measuring the overall quality of air service is more problematic because there are many dimensions of “quality” and not everyone agrees on the relative importance of each. In general, the factors usually considered primary in judging service quality suggest that the results for small and medium-sized communities are mixed. For large communities, on the other hand, the trends are less ambiguous and quality has improved in almost every dimension. Finally, the safety of air service has generally improved since deregulation at all three categories of airports. However, because so few accidents occur each year, an increase of just one or two accidents in a given year can cause significant fluctuation in the accident rate for any one airport group, making it difficult to reach conclusions about relative safety between the groups. The total number of scheduled commercial departures, which is an important measure of the amount of air service at an airport, has increased for all three airport groups in our sample (see fig. 3.1). Specifically, in May 1995 small-community airports as a group had 50 percent more scheduled commercial departures than they did in May 1978; medium-sized-community airports had 57 percent more departures; and large-community airports had 68 percent more departures. Within each of the three airport groups, a substantial majority of airports had more scheduled commercial departures in May 1995 than in May 1978. Seventy-eight percent of the small- and medium-sized-community airports had an increase in the number of departures, and every large-community airport in our sample had more departures. A second measure of the quantity of air service—the number of available seats—has also increased since deregulation for all three airport groups. (App.
IV provides data on departures and available seats for each airport.) However, because of the increased use of smaller, turboprop aircraft, the percentage change in available seats has been less than the percentage change in the number of departures, especially at small- and medium-sized-community airports. (See fig. 3.2.) In addition, because of the substitution of turboprops for jets, many small- and medium-sized-community airports have experienced a decrease in the number of available seats even though the number of departures increased. For example, because the average aircraft size per departure at Fargo, North Dakota’s airport decreased from 106 seats in 1978 to 67 seats in 1995, Fargo had 21 percent fewer available seats in May 1995 than in May 1978 even though the number of departures increased by 25 percent. Nevertheless, as table 3.1 shows, when both measures are considered, a plurality of the small- and medium-sized-community airports and every large-community airport have experienced an increase in the quantity of air service they receive. The airports that have experienced an increase in the quantity of air service are located throughout the country. Large communities in particular have experienced an increase in service quantity, in part because of their relatively strong economic growth during this period. For example, between 1979 and 1993, the average annual income growth for the large communities was 2.2 percent, compared with 1.8 percent for both the small and medium-sized communities in our sample. On the other hand, the 17 airports that have experienced a decrease in both departures and seats are primarily small- and medium-sized-community airports located in the Upper Midwest, where economic growth has been slower. Figure 3.3 demonstrates the widespread increase in service quantity since deregulation and identifies where the sharpest decline in air service—a decline of at least 20 percent—has occurred.
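The seat arithmetic in the Fargo example can be made explicit: seats offered equal departures times average aircraft size, so seats can fall even as departures rise. In the sketch below, the 1978 departure count is a hypothetical placeholder, since the text gives only the percentage change; the average seat counts (106 and 67) come from the text.

```python
# Worked sketch of the Fargo example: seats offered = departures x average
# seats per departure. The 1978 departure count is hypothetical; the
# average aircraft sizes and percentage changes come from the text.
def pct_change(old, new):
    return (new - old) / old * 100.0

dep_1978 = 400                 # hypothetical May 1978 departures
dep_1995 = dep_1978 * 1.25     # departures up 25 percent, per the text

seats_1978 = dep_1978 * 106    # average 106 seats per departure in 1978
seats_1995 = dep_1995 * 67     # average 67 seats per departure in 1995

print(round(pct_change(seats_1978, seats_1995)))  # -21, i.e., 21 percent fewer seats
```

Because the result is a ratio, the hypothetical departure count cancels out: any starting value yields the same 21 percent decline in seats.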
The three communities whose airports have experienced the sharpest declines—Sioux Falls, South Dakota; Lincoln, Nebraska; and Rochester, Minnesota—had relatively slow economic growth during this period. For these three communities, the average annual growth rate was only 0.4 percent in population, 1.3 percent in personal income, and 1.4 percent in employment. The quality of air service a community receives is generally measured by four variables: the number of (1) departures and available seats, (2) destinations served by nonstop flights, (3) destinations served by one-stop flights and the efficiency of the connecting service, and (4) jet departures compared with the number of turboprop departures. Largely because of their central role in hub-and-spoke networks, large-community airports have experienced a substantial increase in the number of departures and cities served via nonstop flights since deregulation, a corresponding decrease in the number of cities served by one-stop flights, and only a slight decline in the share of departures involving jets. For small- and medium-sized-community airports, hub-and-spoke networks have resulted in more departures and more and better one-stop service. However, because much of this service is to hubs via turboprops, small and medium-sized communities have fewer destinations served by nonstop flights and relatively less jet service. In light of this mixed record, it is difficult to judge the overall change in the quality of air service at airports serving small and medium-sized communities because such an assessment requires, among other things, a subjective weighting of the relative importance of the four variables. As discussed earlier, the number of departures has increased since deregulation at airports serving small and medium-sized communities. However, airlines have generally directed these departures to hub airports, often eliminating nonstop service to other small and medium-sized communities. 
Overall, we found that the average number of cities served by nonstop flights has declined by 7 percent from small-community airports and by 2 percent from medium-sized-community airports (see fig. 3.4). However, because more flights from these airports are destined for hubs, the number of destinations served on a one-stop basis has increased by 9 percent at small-community airports and by 26 percent at medium-sized-community airports. As figure 3.4 also shows, large-community airports, many of which serve as hubs, have experienced a sizable increase since deregulation in the number of nonstop destinations. As a result, large communities’ need for one-stop service has decreased. The number of nonstop destinations has decreased at many airports serving small and medium-sized communities: 55 percent of the small-community airports and 42 percent of the medium-sized-community airports have experienced decreases. As figure 3.5 shows, the small- and medium-sized-community airports experiencing the sharpest decline in nonstop destinations were primarily located in the slower-growing Upper Midwest and Southeast. In some cases, the communities served by these airports have contracted. For example, Moline, Illinois’ average annual change in population between 1979 and 1993 was –0.5 percent and Bristol, Tennessee’s was –0.1 percent. By contrast, those airports experiencing the largest increases in the number of nonstop destinations are located primarily in fast-growing cities in the Southwest and Florida as well as in Upper New England, such as Burlington, Vermont. For many small and medium-sized communities, the decline in nonstop service options has been substantial. For example, as shown in figure 3.6, the number of cities served nonstop from Fayetteville, North Carolina, decreased by 78 percent, from nine in May 1978 to two in May 1995.
Nevertheless, most communities that experienced a decline in the number of nonstop destinations experienced an increase in the number of one-stop destinations. This increase largely occurred because the remaining cities served on a nonstop basis are often hubs for the major airlines, thereby yielding a significant increase both in the number of connections possible and the efficiency of that service. For example, the two destinations served nonstop from Fayetteville in 1995—Atlanta and Charlotte—are hub airports for Delta and USAir, respectively. As a result, the number of destinations served on a one-stop basis from Fayetteville, as listed in the OAG, increased by 60 percent between May 1978 and May 1995. Moreover, we found that passengers flying from places like Fayetteville were better connected to the entire domestic aviation system in 1995 than they were in 1978. For example, travelers from Fayetteville had an average of nine daily flights to Atlanta and six daily flights to Charlotte in May 1995, compared with three daily flights to Atlanta and one daily flight to Charlotte in May 1978. This increased frequency of service expands passengers’ choices and reduces layover times between connections. As figure 3.7 illustrates, a traveler from Fayetteville wanting to fly to San Francisco in 1978 had no other choice but to fly through Atlanta. The passenger could take a morning, noon, or mid-afternoon flight from Fayetteville to Atlanta and then take one of two flights from Atlanta to San Francisco. However, because the first flight from Fayetteville to Atlanta did not arrive until 9:27 a.m. and both flights from Atlanta to San Francisco were in the morning (the first flight leaving at 8:46 a.m. and the second at 10:25 a.m.), the passenger had only one real connection option. Otherwise, the person had to spend the night in Atlanta to catch the next morning’s flight to San Francisco at 8:46 a.m. 
In 1995, that same traveler from Fayetteville could fly to San Francisco via either Atlanta or Charlotte. The passenger would have the choice of nine daily flights to Atlanta connecting to six daily flights to San Francisco or six daily flights to Charlotte connecting to three daily flights to San Francisco (see fig. 3.7). For example, the passenger could take a flight from Fayetteville to Atlanta that arrives at 7:25 a.m. and connect to a flight to San Francisco that leaves Atlanta at 8:20 a.m. Because of the increased service frequency, during any given day in May 1995 the passenger would have six real connection options at Atlanta, with an average layover time of 82 minutes. The passenger also had the option of taking one of two night flights from Fayetteville to Atlanta, spending the night in Atlanta, and catching the next morning’s flight to San Francisco at 8:20 a.m. Finally, as figure 3.8 shows, Fayetteville’s access to the domestic system has been expanded in terms of the geographic location of the cities accessible through one-stop service. For example, in 1978 Fayetteville had possible one-stop connecting service to six different cities in West Virginia but no such service to such larger cities as San Diego, California; Salt Lake City, Utah; and Seattle, Washington; or such preferred vacation locations as Honolulu, Hawaii, or St. Thomas, Virgin Islands. As a result of the hub-and-spoke system, Fayetteville in 1995 had one-stop service to those cities as well as one-stop service to four cities in West Virginia. While the number of jet departures has declined slightly at small-community airports and increased slightly at medium-sized- community airports, the proportion of departures involving jets has fallen substantially for both groups since deregulation, as shown in fig. 3.9. 
At small-community airports, the percentage of departures involving jets fell from 66 percent in May 1978 (21,632 of 32,744 total departures) to 39 percent in May 1995 (18,968 of 48,960 total departures). As a result, the growth in turboprop departures accounted for all of the growth in total departures since deregulation at the small-community airports that we reviewed. At airports serving medium-sized communities, the percentage of departures involving jets fell from 77 percent in May 1978 (31,126 of 40,561 total departures) to 56 percent in May 1995 (35,554 of 63,854 total departures). By comparison, at large-community airports, the number of jet departures increased by 47 percent, although with the growing use of turboprops the share of departures involving jets actually fell from 81 percent of all departures in May 1978 to 71 percent in May 1995. We found that the substantial growth in the use of turboprops since deregulation has occurred at airports serving small and medium-sized communities in all regions of the country. Two factors have caused this trend. First, large airlines have used turboprops to link small and medium-sized communities to their major hubs. Airlines would be unable to earn a profit on many of these routes if they deployed jets, which are larger and more costly to operate than turboprops. Second, since 1978 the commuter and air taxi segments of the industry have grown significantly. Commuters, in particular, have emerged as (1) affiliates of the large airlines to “feed” traffic traveling from small and medium-sized communities to the airlines’ hubs and (2) key providers of air service between small and medium-sized communities. DOT’s data on total departures in 1978 and 1994 by large airlines at the airports in our sample and FAA’s estimates of commuter and air taxi departures at those airports demonstrate the growth of the commuter and air taxi segments of the industry.
Our analysis of these data shows that commuter carriers and air taxis accounted for 56 percent of departures at small-community airports in 1994, compared with 29 percent in 1978. At medium-sized-community airports, commuter carriers and air taxis accounted for 47 percent of departures in 1994, compared with 25 percent in 1978. Finally, at large-community airports, commuter carriers and air taxis accounted for 27 percent of departures in 1994, compared with 18 percent in 1978. In evaluating overall changes in the quality of air service to small and medium-sized communities since deregulation, the increased service frequency and one-stop options must be weighed against the decline in jet service and nonstop options. While the substantial gains in quantity and nonstop destinations for large-community airports clearly outweigh the corresponding decline in one-stop service and slight decrease in jet service relative to turboprops, weighting the changes experienced by small and medium-sized communities is more problematic for two reasons. First, the value placed on each factor depends on a subjective determination that will vary by individual. For example, DOT analysts we interviewed stated that in their view the number of departures was the most important factor because the increase in flight frequency saves travelers time and increases their possible connections. These analysts noted that they believed that the type of aircraft was the least important factor, largely because the size and safety of turboprops and the service they provide have improved dramatically over the last 17 years. Thus, they believe that turboprops provide a level of service equivalent in many cases to that of jets. Other industry analysts that we interviewed, however, considered the loss of nonstop service to be the most important change. Second, it is not possible to convert each factor into a common measure, such as total travel time. 
Although most of the factors can be measured in terms of travel time, one cannot: the perceived levels of amenities and comfort that travelers associate with the different types of turboprops and jets. As a result, developing a formula that combines the various factors to produce a single, objective “quality score” is problematic. The only such formula that we identified during our review was developed in the 1960s by the Civil Aeronautics Board (CAB). The CAB’s formula was weighted heavily toward changes in the number of departures and did not account for passengers’ perceptions of the service quality associated with the various types of jets and turboprops. In providing us with this formula, DOT analysts emphasized that it has never been updated and should not be used to gauge changes in service quality since deregulation. We therefore declined to use it and did not attempt to develop a new formula during our review. Nevertheless, when considering those airports in our sample that had either (1) lower fares and positive changes in every quality dimension or (2) higher fares and negative changes in every quality dimension, clear geographical differences emerge. In particular, as figure 3.10 shows, fast-growing communities of all sizes in the West, Southwest, Upper New England, and Florida have lower fares and better service. Nevertheless, as figure 3.10 also shows, some small and medium-sized communities in the Southeast and Upper Midwest are clearly worse off today. These pockets of higher fares and worse service stem largely from both a lack of competition and comparatively slow economic growth over the past two decades. (App. I provides an overall summary of the changes in fares and service at each airport in our sample.) In general, the long-term decline in the rate of accidents has continued since deregulation. 
These safety gains are attributed to advances in aircraft technology and improved pilot training in the early and mid-1980s, especially for turboprops and commuter carriers. As noted in chapter 1, the overall accident rate for commuters has fallen by over 90 percent since deregulation. In our sample, the rate of accidents at the airports in each group was lower in 1994 than in 1978. At small-community airports, the rate fell from 0.47 accidents per 100,000 departures to 0.14 accidents per 100,000 departures in 1994. At medium-sized-community airports, the rate fell from 1.29 accidents per 100,000 departures in 1978 to 0.00 in 1994. At large-community airports, the rate fell from 0.41 accidents per 100,000 departures to 0.14 in 1994. However, because there are so few accidents each year, an increase of just one or two accidents in a given year can cause a significant fluctuation in the accident rates, as figure 3.11 shows. Attempts to discern trends between the airport groups by smoothing the data—employing, for example, such common practices as calculating a 3-year moving average—did not help to identify any trends. Our analysis of accidents on routes to and from the airports in our sample was similarly inconclusive. Thus, while commuter carriers and turboprops generally do not have as good a safety record as the larger jets they replaced in many markets serving small and medium-sized communities, it is difficult to discern the impact of the change on relative safety at the airports in our sample because of the small number of annual accidents and the consequent wide swings in rates from year to year.

We discussed a draft of this report with senior DOT officials, including the Director, Office of Aviation and International Economics. They agreed with our findings concerning the trends in airfares, service, and safety and stated that the report provides useful information.
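The rate arithmetic and smoothing described above can be sketched as follows. This is an illustrative sketch only: the per-100,000 convention and the 3-year moving average come from the text, while the departure counts and the annual-rate series are hypothetical.

```python
# Accident rate per 100,000 departures, plus the 3-year moving average
# used to try to smooth year-to-year swings. Departure counts and the
# annual-rate series below are hypothetical.

def rate_per_100k(accidents, departures):
    """Accidents per 100,000 departures."""
    return accidents / departures * 100_000

def moving_average(series, window=3):
    """Trailing moving average over the given window."""
    return [sum(series[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(series))]

# A single accident among roughly 211,000 annual departures yields a
# rate near 0.47, showing how one accident can swing the annual figure.
print(round(rate_per_100k(1, 211_000), 2))  # 0.47

annual_rates = [0.47, 0.0, 0.9, 0.0, 0.45, 0.14]  # hypothetical series
print([round(r, 2) for r in moving_average(annual_rates)])
```

Even after smoothing, a series this sparse can move by tenths of a point when a single accident is added or removed, which is consistent with the report's inability to discern trends between the airport groups.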
They also noted that the 112 airports in our sample account for a sizable majority of the nation’s air travelers. These officials commented, however, that the small-community airports in our sample represented the larger “small” airports in the United States and therefore were not completely representative of the nation’s smallest airports. They stated that they have recently completed a study, which they expect to issue soon, on the trends in fares and service at the smallest airports and that the conclusions of their study are consistent with our findings. They noted that although the airports included in their study account for only about 3 percent of the total passenger enplanements in the United States, they believe that it provides a valuable and necessary complement to our report because it focuses on the very smallest airports. We agree that DOT’s study could serve as a valuable complement to our report. As we state in appendix VII, we examined data on the same 112 airports that we examined in our 1990 report in order to provide consistent, comparable information in updating that report. In selecting those airports, one of our criteria was that the airport had to be among the largest 175 in the nation. This criterion was necessary because as an airport’s traffic level falls, the number of tickets from that airport listed in DOT’s O&D Survey also declines. A smaller number of tickets increases the potential for sampling error, leaving the true change in fares uncertain. As a result, we excluded the airports serving the nation’s smallest communities.
Pursuant to a congressional request, GAO examined the deregulation of the airline industry, focusing on airfares and the quantity, quality, and safety of air service since deregulation. GAO noted that: (1) the average fare per passenger mile is 9 percent lower at small-community airports, 11 percent lower at medium-sized airports, and 8 percent lower at large-community airports; (2) the largest increase in fares occurred in the Southeast and Appalachian regions, and the largest decrease occurred in the West and Southwestern regions; (3) this geographic disparity stems largely from intense competition from low-cost new carriers in the West and the dominance of high-cost carriers in the Southeast; (4) the overall quantity of air service at airports has increased, but large communities have experienced the largest increase; (5) air service quality is difficult to measure and depends on the number of destinations served by nonstop flights and one-stop connections, and the type of aircraft used; (6) air service quality since deregulation has been mixed, largely due to the airlines’ hub networks and greater use of turboprop aircraft; and (7) the overall accident rate since deregulation has dropped, but there are no statistically significant differences in air safety trends for any of the airport groups.
In the period following the enactment of legislation establishing Medicare’s OPPS and leading up to the MMA in 2003, concerns were expressed about the adequacy of payments for innovative pharmaceutical products. The MMA addressed these concerns by establishing a payment policy for SCODs. As mandated by the MMA, we conducted a hospital survey and provided HHS with information about prices hospitals paid for SCOD products. Details follow on the background of SCODs, our survey, CMS’s new rates for drug SCODs, and the nature of radiopharmaceutical products. CMS uses OPPS to pay hospitals for services that Medicare beneficiaries receive as part of their treatment in hospital outpatient departments. Under OPPS, Medicare pays hospitals predetermined rates for most services. When OPPS was first developed as required by the Balanced Budget Act of 1997, the rates for hospital outpatient services, drugs, and radiopharmaceuticals were based on hospitals’ 1996 median costs. However, these rates prompted concerns that payments to hospitals would not reflect the costs of newly introduced pharmaceutical products used to treat, for example, cancer, rare blood disorders, and other serious conditions. In turn, congressional concerns were raised that beneficiaries might lose access to some of these products if hospitals avoided providing them because of a perceived shortfall in payments. In response to these concerns, the Medicare, Medicaid, and SCHIP Balanced Budget Refinement Act of 1999 authorized pass-through payments, which were a way to temporarily augment the OPPS payments for newly introduced pharmaceutical products first used after 1996. The MMA modified this payment method for some of these pharmaceutical products. As part of the modification, the MMA defined the new SCOD payment category, which includes many of these newly introduced pharmaceutical products. The MMA requires that SCODs be placed in separate payment categories— that is, not packaged with related services. 
As directed by the MMA, we conducted a survey of a large sample of hospitals to determine their acquisition costs for SCOD products. We surveyed 1,400 hospitals and received usable data from 83 percent of the hospitals for drug SCODs and from 61 percent of the 1,322 hospitals that had submitted Medicare claims for radiopharmaceutical SCODs in the first 6 months of 2003. We found that we could not obtain data that would permit calculation of hospitals’ acquisition costs, because, in general, hospitals were unable to report accurately or comprehensively on rebates. Consequently, we reported average purchase prices for drug and radiopharmaceutical SCODs, which are prices net of discounts but not rebates. Of the 251 SCODs that we identified, we reported average purchase prices for the 62 SCODs that accounted for 95 percent of Medicare spending on all SCODs in the first 9 months of 2004. (These prices and related information are included as app. II and app. III.) ASP is a price measure established in the MMA to provide a basis for payment rates for physician-administered drugs and now used by CMS in setting rates for drug SCODs. CMS instructs pharmaceutical manufacturers to report ASP data to CMS within 30 days after the end of each quarter. The MMA defined ASP as the average sales price for all U.S. purchasers of a drug, net of volume, prompt pay, and cash discounts; free goods contingent on a purchase requirement; and charge-backs and rebates. Under CMS’s final rule governing 2006 payment rates for hospital outpatient services, including SCOD products, CMS uses manufacturers’ ASPs in setting drug SCOD rates. For radiopharmaceuticals, CMS has set 2006 rates based on an estimate of hospitals’ costs derived from charges, but the agency has not decided how to pay for radiopharmaceutical SCODs after 2006. Hospitals can purchase radiopharmaceuticals, which consist of a radioisotope and a medicine or pharmaceutical agent, in different forms. 
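The distinction above between the survey's average purchase price (net of discounts but not rebates) and an ASP-style net price (net of discounts and rebates) can be illustrated with a small sketch; all units, list prices, discounts, and rebates below are hypothetical.

```python
# Contrast between a survey-style average purchase price (net of
# discounts but not rebates) and an ASP-style price (net of discounts
# AND rebates). All figures are hypothetical.

purchases = [
    # (units, list_price_per_unit, discount_per_unit, rebate_per_unit)
    (100, 50.00, 5.00, 2.00),
    (200, 50.00, 8.00, 2.00),
]

units = sum(u for u, *_ in purchases)
# Volume-weighted average of prices net of discounts only
survey_price = sum(u * (p - d) for u, p, d, _ in purchases) / units
# Volume-weighted average of prices net of discounts and rebates
asp_style = sum(u * (p - d - r) for u, p, d, r in purchases) / units

print(round(survey_price, 2), round(asp_style, 2))  # 43.0 41.0
```

The gap between the two averages is exactly the volume-weighted rebate per unit, which is why untracked rebates make survey-based purchase prices overstate net acquisition cost.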
They can purchase vials of the product in ready-to-use unit doses or in multidoses, or they can purchase a product’s radioactive and nonradioactive components separately and compound them in-house. In a survey conducted by the Society of Nuclear Medicine and the Society of Nuclear Medicine Technologist Section, 76 percent of hospitals reported that they purchased their radiopharmaceuticals in unit doses. Using our hospital survey of prices hospitals paid for SCOD drugs and radiopharmaceuticals, we examined the extent to which prices varied among the approximately 1,200 hospitals that submitted survey data. To do this, we looked at several hospital characteristics, or factors—including teaching status, location, and size of the outpatient department—while controlling for differences in the costliness of the mix of SCODs that hospitals purchased. We analyzed both (1) the separate effect of each factor, controlling for other factors; and (2) the effect of the three factors combined. We found that teaching status had the largest separate effect on drug SCOD prices, whereas location had the largest effect on radiopharmaceutical SCOD prices. Combining the three factors, we found, for example, that large urban hospitals with major teaching programs paid lower prices, on average, for drug SCODs compared with small urban hospitals with other teaching programs. The importance of the three factors in accounting for variation in SCOD prices among hospitals differed by type of product purchased—that is, drug or radiopharmaceutical. A hospital’s teaching status, for example, affected prices paid for drug SCODs but did not matter for the radiopharmaceutical SCOD prices pertaining to unit dose purchases in our survey. In contrast, a hospital’s location was an important factor linked to price differences for radiopharmaceuticals but did not matter with respect to prices for drugs.
In addition, hospital size was important in affecting price differences for both drugs and radiopharmaceuticals. (See table 1.) In assessing the magnitude of each factor’s separate effect on prices, we found the following results:

Teaching status: Compared with nonteaching hospitals, major teaching hospitals paid prices that were, on average, an estimated 3.2 percent lower for drug SCODs. Teaching status had no independent effect on the prices of radiopharmaceutical SCODs purchased in ready-to-use unit doses.

Location: Compared with hospitals located in urban areas, the prices paid by hospitals located in rural areas for radiopharmaceutical SCODs were, on average, an estimated 4.4 percent higher.

Size: Compared with smaller hospitals, hospitals with large outpatient departments paid prices, on average, that were an estimated 1.4 percent lower for drugs and 3.1 percent lower for radiopharmaceuticals.

Certain circumstances may help explain why each factor had an effect on price. Regarding the effect of teaching status on drug prices, for example, manufacturers may want to influence prescribing patterns of physicians in training and may therefore offer drugs at lower prices to hospitals with teaching programs. As for location’s effect on radiopharmaceutical SCOD prices, industry experts suggested that the short half-life of certain radioactive products could make transporting them to hospitals in rural areas more costly. As for hospital size, hospitals with large outpatient departments may have benefited from volume discounts. To examine the combined effect of the three key factors on prices paid by hospitals, we compared hospitals grouped by one combination—major teaching program, urban location, and large outpatient department—with hospitals grouped by other combinations. Our analysis indicates that large, urban, major teaching hospitals generally paid lower prices, on average, for all SCOD products than did hospitals grouped by other combinations of factors.
For example, compared with small urban hospitals with other teaching programs, large major teaching hospitals in urban areas paid prices, on average, that were an estimated 4 percent lower for drugs and 3 percent lower for radiopharmaceuticals. In contrast, compared with small urban hospitals with other teaching programs, small rural hospitals with no teaching programs paid prices, on average, that were about the same for drugs and 4 percent higher for radiopharmaceuticals. Our MMA-mandated survey of hospitals produced accurate hospital price data. However, if CMS were to use such a survey to collect data routinely for SCOD rate-setting, the burden could outweigh the benefit. Instead, similar surveys of hospitals could be a useful tool to validate price data obtained from manufacturers, if conducted on an occasional basis. Based on our survey experience, we noted that hospitals as a SCOD data source had one important advantage as well as substantial drawbacks. We found that, as a data source for estimating hospitals’ SCOD acquisition costs, hospitals offered a key advantage: our average purchase prices obtained from hospitals, by definition, represent actual prices paid by hospitals. In this respect, our data differ from other data sources available to CMS—such as suggested list prices, ASPs, and hospitals’ Medicare claims. As a result, none of these alternatives provides, as our survey data do, nationwide data on the actual purchase prices paid by hospitals for drug and radiopharmaceutical SCODs. However, based on our experience, we found that there would be drawbacks in using hospitals as an annual data source on SCOD prices, owing primarily to the considerable burden created for hospitals as suppliers of data and the considerable costs we incurred as data collectors, signaling the difficulties that CMS would face in implementing similar surveys in the future.
Hospitals told us that, to submit the required price data, they had to divert staff from their normal duties, thereby incurring additional staff and contractor costs. The burden was more taxing for some hospitals than for others. Most hospitals had the advantage of relying on price data downloaded from their drug wholesalers’ information systems. A number of hospitals, however, collected the data manually, provided us with copies of paper invoices, or had automated information systems that were not designed to retrieve the detailed price data needed, which required additional data processing effort. Hospitals’ data collection difficulties were particularly pronounced regarding information on manufacturers’ rebates, which affect a drug’s net acquisition cost. Typically, hospitals did not systematically track all manufacturers’ rebates on drug purchases, although nearly 60 percent of hospitals reported receiving one or more rebates. As collectors of data on SCOD prices, we also experienced difficulties obtaining the information needed from hospitals. Hospitals’ information systems were diverse and produced data in many different formats, causing substantial resource and timing difficulties in the data collection process. Specifically, we had to reconfigure data submitted in multiple formats to produce data comparable across hospitals and usable for SCOD rate-setting. This reconfiguration required us to deploy substantial resources and to allow additional time for processing before the data could be made available to CMS. The difficulties we encountered would likely be faced by any organization undertaking a survey using a similar approach. As we previously reported, using SCOD price and related data from drug manufacturers—as CMS is doing in 2006—is a practical strategy for setting Medicare payment rates to hospitals for SCODs.
However, our experience obtaining information on actual purchase prices and our observation of the pace of change in the drug marketplace suggest that an occasional survey of hospitals—possibly once or twice in a decade—may be advantageous for validating the accuracy of manufacturers’ price information as a proxy for hospital acquisition cost. Drawing on our experience and using data about sampling variability from our 2004 hospital survey, CMS could design a similar but streamlined hospital survey. Other options available to CMS for validating the accuracy of the price data as a proxy for hospitals’ acquisition costs include audits of manufacturers’ price submissions or an examination of proprietary data the agency considers reliable for validation purposes. Our hospital survey experience not only identified data collection issues associated with hospitals but also underscored accuracy and efficiency concerns in collecting SCOD data from any source. Specifically, the accuracy of the rates Medicare pays for drugs within a SCOD payment category, based on the average price of drugs included in the SCOD, may be compromised if the price of any drug—that is, any national drug code (NDC)—is omitted from the average price of the SCOD category. In the conduct of our 2004 survey, we began with a list, which CMS provided to us, of drug categories that included SCODs as well as other drugs that potentially could be considered SCODs in the future. To ensure the accuracy of our calculation of a hospital’s average purchase price for SCODs, we took additional steps using industry experts and data sources to classify the NDCs and assign them to the appropriate SCOD categories. 
Since the drug market is dynamic—new drugs enter the market and other drugs drop out in the course of a year—CMS’s list of SCOD drugs and their component NDCs could become out of date unless updated frequently to ensure that all SCOD drugs purchased by hospitals are identified and figured into the calculation of a SCOD’s average price. With regard to efficiency in analyzing our survey results, we concentrated our data processing and statistical resources on the roughly one-quarter of SCODs that account for most of Medicare’s total SCOD spending. In particular, the 62 SCODs for which we produced price estimates accounted for 95 percent of Medicare spending on all 251 SCODs in the first 9 months of 2004. We would not have been able to produce price estimates for all SCODs in time for CMS to take account of our data in setting the 2006 rates. Our experience—especially the amount of time and resources necessary for each step in the data collection and analysis process—could be used by CMS to determine in advance the number of SCODs on which to collect data and estimate prices. There might be some benefit in gathering data and producing price estimates for all SCODs; on the other hand, if resources were limited, CMS might choose to focus on fewer SCODs. CMS will face important challenges in its efforts to collect accurate data for setting SCOD payment rates. In our October 2005 report on CMS’s proposed SCOD rates, we expressed reservations about the ASP data CMS used to set 2006 payment rates for drug SCODs. We cautioned that manufacturers’ reporting of ASPs in summary form—without any further detail—does not provide the agency the information needed to ensure that ASPs are a sufficiently accurate measure of hospitals’ acquisition costs. Data collection and rate-setting for radiopharmaceutical SCODs present unique challenges because of these products’ distinctive characteristics. 
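The spending-concentration triage described above (estimating prices only for the minority of SCODs that account for most Medicare spending) can be sketched as a simple greedy selection; the SCOD codes and dollar amounts below are hypothetical.

```python
# Greedy selection of the highest-spending SCODs until a target share
# of total spending is covered, mirroring the focus on the 62 of 251
# SCODs that covered 95 percent of spending. Figures are hypothetical.

def covering_set(spending, threshold=0.95):
    """Return SCOD ids, highest spending first, until the threshold
    share of total spending is covered."""
    total = sum(spending.values())
    chosen, covered = [], 0.0
    for scod, amt in sorted(spending.items(), key=lambda kv: -kv[1]):
        chosen.append(scod)
        covered += amt
        if covered / total >= threshold:
            break
    return chosen

spending = {"J1000": 500.0, "J2000": 300.0, "J3000": 150.0, "J4000": 50.0}
print(covering_set(spending))  # ['J1000', 'J2000', 'J3000']
```

Lowering the threshold shrinks the list and the data collection burden, which is the accuracy-versus-efficiency trade-off the text describes.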
Under CMS’s current policy, manufacturers are required to report only summary ASP data, limiting CMS’s ability to validate the data’s accuracy. Specifically, manufacturers report ASP as a single price, with no breakdown of price and volume by type of purchaser. CMS instructs manufacturers to average together prices for each drug paid by all U.S. purchasers. However, different purchaser types—for example, hospitals, physicians, and wholesalers—may pay prices that, on average, differ from one another’s. Because CMS does not receive price data at this level of detail, it cannot determine whether price differences among purchaser types exist. To the extent that nonhospital providers pay different prices than hospitals and account for a proportion of the SCODs purchased, ASP will differ from the prices paid on average by hospitals. CMS has not presented evidence, in its final rule or in discussions with us, that physicians and hospitals pay the same prices. An additional weakness in CMS’s instructions for computing ASPs compounds the challenge of testing the accuracy of the ASPs that manufacturers report. No instruction is provided to manufacturers on the treatment of rebates that apply to several drug products in calculating ASP. This is of particular concern to the extent that manufacturers differ in their rules for calculating these rebates. When a rebate applies to a group of a manufacturer’s products—which may include several SCODs, other pharmaceuticals, and other products—netting out the rebate attributable to a specific SCOD’s purchase is less than straightforward. In the absence of clear and specific instructions, each manufacturer must identify or develop a method for allocating rebates to each of its drug SCOD products. To the extent that manufacturers’ methods differ, they are likely to yield inconsistent results.
Moreover, CMS’s final rule does not provide for a follow-up process to check that rebate allocations have been made or have been made appropriately. The complex nature of radiopharmaceuticals as compared with drugs poses special challenges for collecting and interpreting cost data. These challenges include (1) obtaining consistent data for radiopharmaceutical SCODs produced in very different forms and (2) the short half-life for certain products. Moreover, since Medicare spends relatively little on radiopharmaceuticals—less than 1.5 percent of Medicare spending on hospital outpatient services—the challenge is to find a source of data for setting rates that is low cost and reasonably accurate. In our hospital survey, we faced the challenge of uniformly pricing products purchased in very different forms. We focused on prices for radiopharmaceuticals purchased in unit doses. Most of the hospitals purchased radiopharmaceuticals in this ready-to-use form, and only a small fraction of hospitals purchased radiopharmaceuticals in separate components (the radioisotope and the nonradioactive substance), which need to be compounded. We were unable to make prices for separately purchased components comparable to those obtained for unit doses, as the labor costs for compounding the products are included in hospitals’ reported prices of ready-to-use products but not in their reported prices of products they purchased as separate components. The short half-life of certain radiopharmaceutical SCODs can also pose challenges for collecting and interpreting price data. Because the radioactive component decays over time, the amount of the product purchased for a given patient may vary with the distance between where the radiopharmaceutical is compounded and where it is administered. 
The result is that for those short-lived radiopharmaceuticals paid on a per-dose basis, the cost per dose is more for the doses prepared far from the point of administration than for those prepared closer by, as more of a radioactive product must be purchased to account for its decay in transit. This applies most commonly to F-18 radiopharmaceuticals, the most common of which, F-18 FDG, has a half-life of 1.8 hours. F-18 radiopharmaceuticals, including F-18 FDG, are used in the diagnosis of various diseases, such as cancer, heart disease, and liver disease. Finally, CMS faces the challenge of balancing accuracy and efficiency in obtaining price data on radiopharmaceutical SCODs. Our approach in estimating prices from our survey data was to use only information on unit dose prices, the form purchased by most hospitals. CMS, as stated in the 2006 final rule governing payment rates for SCODs, has not found what it considers a satisfactory method for obtaining data on acquisition costs of radiopharmaceuticals and is continuing to explore both ASP and other alternatives. Hospitals and manufacturers are the most direct source of price data because both are parties to the transactions in which the hospitals acquire the radiopharmaceuticals. In its notice of proposed rulemaking for radiopharmaceutical SCODs, CMS proposed collecting ASPs from manufacturers for use in setting 2007 payment rates. In light of many comments regarding the difficulty of this undertaking, CMS decided not to collect radiopharmaceutical ASPs for 2007 rates, but left open the possibility of using ASP in the future. CMS has also discussed the possibility of using charges from hospitals’ Medicare claims to approximate acquisition costs for radiopharmaceutical SCODs, rather than obtaining price data from invoices provided by hospitals or from manufacturers. 
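The decay arithmetic behind this purchasing effect can be sketched with the standard half-life formula. The 1.8-hour half-life is taken from the text; the dose sizes and transit times are hypothetical.

```python
# How much more of a short-lived radiopharmaceutical must be purchased
# when the dose is compounded far from the point of administration.
# Half-life of 1.8 hours is for F-18 FDG (from the text); dose sizes
# and transit times are hypothetical.

HALF_LIFE_H = 1.8

def fraction_remaining(hours):
    """Fraction of radioactivity left after the given elapsed time."""
    return 0.5 ** (hours / HALF_LIFE_H)

def units_to_purchase(dose_units, transit_hours):
    """Activity that must be compounded so dose_units survive transit."""
    return dose_units / fraction_remaining(transit_hours)

# A dose prepared one half-life (1.8 hours) away requires twice the
# activity of one prepared on site.
print(round(units_to_purchase(10.0, 1.8), 1))  # 20.0
print(round(units_to_purchase(10.0, 0.5), 1))  # 12.1
```

This is why, for products paid on a per-dose basis, the cost per dose rises with distance from the compounding site.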
Using claims data may be a more efficient but less accurate means of obtaining price estimates than obtaining price data directly from manufacturers or from hospitals’ invoices. In its final rule, CMS stated that it was basing 2006 payments on hospitals’ charges (derived from outpatient claims) for radiopharmaceuticals. CMS plans to adjust these charges to reflect costs and noted that it did not plan to use this methodology permanently. For rate-setting after 2006, CMS also noted the possibility of using invoice data submitted to Medicare by physicians who administer radiopharmaceuticals in their offices. In its final rule, CMS did not present evidence that hospitals and physicians pay similar prices for these radiopharmaceuticals, nor did it address whether, if these prices differ, physician data would be appropriate for setting hospital outpatient rates. Basing Medicare’s payment rates for hospitals’ SCOD purchases on current, accurate price data is important both to ensuring that Medicare pays appropriately—neither too much nor too little—and to ensuring beneficiary access to these innovative pharmaceutical products. As we previously reported, we agree with CMS that ASP is a practical data source for setting and updating rates for drug SCODs on a routine basis. However, we remain concerned about whether CMS can determine that ASP accurately represents purchases made by hospitals and believe that CMS should implement our October 2005 recommendation to collect sufficient information on ASP to make such a determination. We are also concerned about the likelihood that ASPs are not calculated consistently across all manufacturers, owing to CMS’s lack of detailed instructions. As for validating the data CMS collects to set payment rates equal to hospitals’ acquisition costs, an examination of hospitals’ actual purchase prices, by definition, is optimal for assessing accuracy.
Recognizing the operational difficulties of a hospital survey and using the knowledge gained from our survey, CMS could conduct a similar but streamlined hospital survey, possibly once or twice in a decade. Other options available to CMS for validating price data could include audits of manufacturers’ price submissions or an examination of proprietary data the agency considers reliable for validation purposes. In contrast, we found that the diversity of forms in which radiopharmaceutical SCODs can be purchased—ready-to-use unit doses, multidoses, or separately purchased radioactive and nonradioactive components—complicates CMS’s efforts to select a data source that can provide reasonably accurate price data efficiently. Our experience suggests that the best option available to CMS, in terms of accuracy and efficiency, is to collect price data on radiopharmaceuticals purchased in ready-to-use unit doses, the form in which an estimated three-quarters of hospitals purchase these products. To ensure that Medicare payments for SCOD products are based on sufficiently accurate data, we recommend that the Secretary of Health and Human Services take the following two actions: validate, on an occasional basis, manufacturers’ reported drug ASPs as a measure of hospitals’ acquisition costs using a survey of hospitals or other method that CMS determines to be similarly accurate and efficient; and use unit-dose prices paid by hospitals when available as the data source for setting and updating Medicare payment rates for radiopharmaceutical SCODs. We received written comments on a draft of this report from HHS (see app. IV), which noted that it had considered information from our survey of hospitals in developing 2006 hospital outpatient payment policy and expressed appreciation for our effort and analysis. 
Regarding the first recommendation—that HHS validate ASPs as a measure of hospital acquisition costs through occasional hospital surveys or other methods—HHS highlighted our finding that an annual hospital survey could place considerable burdens on both the agency and hospital staff. However, HHS agreed to consider this recommendation, saying that it would continue to analyze the best approach for setting payment rates for drug and radiopharmaceutical SCODs in view of our recommendation. HHS will also continue to analyze, in light of claims data, the adequacy of paying for drugs at ASP plus 6 percent, the rate that such data persuaded HHS was the best available proxy for hospital acquisition and handling costs for 2006. Regarding the second recommendation—that HHS use unit-dose prices to set and update payment rates for radiopharmaceuticals—HHS agreed with us that the multiple forms in which radiopharmaceuticals can be purchased make setting their payment rates difficult. While agreeing to consider our recommendation, HHS expressed several reservations. First, it noted that we had not specified whether the survey to collect acquisition cost data should be a survey of hospitals or manufacturers and asked that we clarify this point. Second, it noted that we had emphasized the burden of annual surveys of hospital drug prices and expressed the concern that an annual survey of hospital radiopharmaceutical prices would be equally burdensome. Finally, HHS noted that we had confined our report to 9 of the approximately 55 radiopharmaceuticals that are paid separately, and questioned whether unit-dose data would be available for all or most radiopharmaceuticals. Our recommendation that HHS validate ASPs through occasional surveys or by using other methods is based in considerable part on our experience of the difficulty of a hospital survey.
The burden that annual surveys would place on both hospitals and the agency is the reason that we rejected annual surveys as a source of acquisition cost data and instead proposed only occasional surveys to validate ASPs. Furthermore, as we noted in the recommendation, HHS could use a method other than a survey if that method were similarly accurate and efficient. In our recommendation on radiopharmaceuticals, we did not comment on whether the survey to collect acquisition cost data should be a survey of hospitals or manufacturers, because we have not analyzed the feasibility of obtaining these data from manufacturers. We recognize the potential burden of hospital surveys; this burden would need to be taken into account in weighing the merits of a hospital survey versus other alternatives. Regarding our recommendation to collect unit-dose prices, we have clarified it, saying that unit-dose prices should be used when available. In our survey, we used unit-dose data when we reported purchase prices for the 9 radiopharmaceuticals that accounted for 90 percent of Medicare’s costs for hospital outpatient drugs. For radiopharmaceuticals that are prepared exclusively in-house, HHS could, if necessary, establish an alternative method for determining payment rates. We are sending copies of this report to the Secretary of Health and Human Services, the Administrator of the Centers for Medicare & Medicaid Services, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7119 or at steinwalda@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V.
This appendix describes the data and methods we used to examine SCOD price variation among hospitals purchasing SCOD products. In particular, we describe (1) the SCOD price data we analyzed, (2) the factors potentially affecting SCOD prices and the measurement of these factors, and (3) the methods underlying the statistical analysis of prices we conducted and the statistical results we obtained. Drawing on data from our survey of 1,157 hospitals, we examined hospitals' purchase prices for 53 drug SCODs and 9 radiopharmaceutical SCODs for the period July 1, 2003, through June 30, 2004. Combined, these 62 SCOD categories represented 95 percent of Medicare spending on SCOD products during the first 9 months of 2004. We analyzed invoice data that hospitals submitted to us; specifically, our analysis included one SCOD price for each SCOD purchase listed on an invoice. As a result, for a hospital that purchased SCODs and other drugs once a month, our analysis included one price for each month's purchase of a particular SCOD, or a total of up to 12 invoice prices for that SCOD during the 12-month period. We were advised in our analysis by an expert panel consisting of Joseph P. Newhouse, John D. MacArthur Professor of Health Policy and Management, Harvard University; Robert A. Berenson, Senior Fellow, Urban Institute; Ernst R. Berndt, Professor of Applied Economics, Sloan School of Management, Massachusetts Institute of Technology; Andrea G. Hershey, Clinical Coordinator and Pharmacy Residency Program Director, Union Memorial Hospital (Baltimore, Md.); and Richard L. Valliant, Senior Research Scientist, University of Michigan. To analyze SCOD price variation among hospitals purchasing SCODs, we identified characteristics of hospitals that could plausibly explain why prices vary: teaching status, location, and size. We also identified a fourth factor: differences in the costliness of the mix of SCODs that hospitals purchased.
Table 2 lists these factors and describes operational measures of these factors and the sources of data used to calculate these measures. In addition to the measures listed in table 2, we considered alternative measures for location and for size. We examined two geographic classification systems as alternatives to an MSA (metropolitan statistical area)/non-MSA classification: (1) urban influence codes, which classify counties based on each county's largest city and its proximity to other areas with large urban populations, and (2) rural-urban continuum codes, which classify metropolitan counties (that is, those in an MSA) by the size of the urban area and classify nonmetropolitan counties by the size of the urban population and proximity to a metropolitan area. Before selecting our preferred measure of hospital size (hospital outpatient charges at the 80th percentile or higher, where hospitals were ranked by their outpatient Medicare charges), we considered other measures of hospital size: the number of hospital beds, the number of unique SCODs purchased by a hospital, and the number of hospital outpatient visits. In assessing our regression results for each of the several measures of location and size that we considered, we took into account statistical criteria, including the statistical significance of each measure and the overall explanatory power of each model. We also considered qualitative factors when selecting our preferred measures of location and size. For example, we selected hospital outpatient charges as our measure of size, instead of number of hospital beds, because both measures had similar statistical properties and our analysis focuses on the hospital outpatient setting. In addition to conducting separate regression analyses of the price data for drug SCODs and for radiopharmaceutical SCODs, we analyzed price variation separately for each of four therapeutic categories of drug SCODs.
We also conducted separate regression analyses of SCOD price variation for drugs without biologicals, for biologicals, and for radiopharmaceuticals. We determined that any gains in statistical properties did not outweigh the greater complexity of these analyses. In analyzing SCOD price variation, our dependent variable was the natural logarithm of SCOD price. SCOD prices are not distributed symmetrically around the average; they are skewed to the right, reflecting some SCODs with particularly high prices. Taking the natural logarithm of price is intended to reduce this skewness and make the resulting distribution more consistent with the statistical assumptions of regression analysis. We weighted prices paid by hospitals for individual drugs and radiopharmaceuticals by the purchase amount of each invoice. That is, we weighted prices more heavily in the statistical analysis for invoices that represented a larger proportion of total annual purchases of a particular SCOD than for invoices that represented a smaller proportion of purchases. In addition, our analysis took into account the fact that multiple prices paid by a particular hospital were not necessarily statistically independent of each other—a phenomenon known as clustering. In estimating our statistical models, we corrected for the potential bias in our estimated standard errors due to clustering by using the robust and cluster options in Stata, a statistical software package. To gauge the effects of our explanatory factors on price variation among hospitals, we estimated one regression model for drug SCODs and a separate model for radiopharmaceutical SCODs. Table 3 shows estimates of the first model, which indicate the effects of three hospital characteristics on the natural logarithm of price of drug SCODs. To examine the separate effect of each factor, holding constant the effects of the remaining factors, we referred to the estimated coefficients for each factor in the model.
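To illustrate the weighting and clustering adjustments described above, the following sketch estimates a weighted least squares regression of log price on a single hospital characteristic and computes hospital-clustered standard errors. The data, hospital counts, variable names, and coefficient values are simulated for illustration only; this is not GAO's survey data or actual model (which was estimated in Stata):

```python
import numpy as np

def weighted_ls_clustered(X, y, w, clusters):
    """Weighted least squares with cluster-robust standard errors.

    X: (n, k) design matrix; y: (n,) log prices;
    w: (n,) invoice purchase-amount weights; clusters: (n,) hospital IDs.
    """
    XtWX_inv = np.linalg.inv(X.T @ (w[:, None] * X))
    beta = XtWX_inv @ (X.T @ (w * y))
    resid = y - X @ beta
    # Sum per-hospital "score" outer products so that correlated residuals
    # within the same hospital are reflected in the variance estimate.
    k = X.shape[1]
    meat = np.zeros((k, k))
    for c in np.unique(clusters):
        g = clusters == c
        score = X[g].T @ (w[g] * resid[g])
        meat += np.outer(score, score)
    cov = XtWX_inv @ meat @ XtWX_inv
    return beta, np.sqrt(np.diag(cov))

# Simulated invoice data: 40 hospitals, 12 monthly invoices each (hypothetical).
rng = np.random.default_rng(0)
n_hosp, n_inv = 40, 12
clusters = np.repeat(np.arange(n_hosp), n_inv)
major = np.repeat(rng.integers(0, 2, n_hosp), n_inv).astype(float)  # teaching flag
hospital_effect = np.repeat(rng.normal(0.0, 0.05, n_hosp), n_inv)
log_price = (5.0 - 0.033 * major + hospital_effect
             + rng.normal(0.0, 0.10, n_hosp * n_inv))
weights = rng.uniform(0.5, 2.0, n_hosp * n_inv)  # invoice purchase amounts

X = np.column_stack([np.ones(n_hosp * n_inv), major])
beta, se = weighted_ls_clustered(X, log_price, weights, clusters)
```

Because invoices from the same hospital share a hospital-level effect, they are correlated; the clustered variance accounts for this, whereas the usual independent-errors formula would tend to understate the standard errors.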
From the estimated coefficient, we calculated the percentage difference in price attributable to each factor. For example, major teaching hospitals paid 3.2 percent less for drugs than nonteaching hospitals, holding constant location, size, and the mix of SCODs purchased. In contrast, we found no statistically significant difference between the prices paid by hospitals with other teaching programs and those paid by nonteaching hospitals, holding the other factors constant. Although the R-squared statistic in table 3 indicates that the model accounts for over 99 percent of the variation in the logarithm of the SCOD price, this feature of the estimated model requires careful interpretation. Most of the variation in the logarithm of the drug SCOD price was due to the particular SCODs that were purchased—for some, hospitals paid on average about $300 per unit, while for others, hospitals paid about $3 per unit. Consequently, after accounting for differences in the mix of SCODs purchased by different hospitals, only a small amount of variation in price remains to be explained by other factors. As a result, the R-squared for this model should not be interpreted as an indicator of the three factors' success in explaining SCOD price variation. Instead, the t-statistics associated with teaching status, location, and size are more useful, since they signal these factors' statistical significance—that is, whether the difference between the estimated effect of each factor and zero is statistically significant. Table 4 presents the results for the second model, which estimates the effects of the three factors on the prices of radiopharmaceutical SCODs. As table 4 shows, two factors—location and size—are statistically significant in the model examining radiopharmaceutical SCOD prices.
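Because the dependent variable is the natural logarithm of price, an estimated coefficient b implies a percentage price difference of 100·(e^b − 1) rather than simply 100·b, although the two are close for coefficients this small. A minimal sketch, using a hypothetical coefficient of roughly the magnitude reported above rather than the model's actual estimate:

```python
import math

def pct_difference(b):
    """Percentage price difference implied by coefficient b in a log-price model."""
    return 100 * (math.exp(b) - 1)

# Hypothetical coefficient for a major-teaching indicator (illustrative value
# only, chosen to match the magnitude discussed in the text).
b_major = -0.0325
effect = pct_difference(b_major)  # roughly -3.2 percent
```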
Other things being equal, rural hospitals paid prices for radiopharmaceutical SCODs that were an estimated 4.4 percent higher than those paid by urban hospitals, while large hospitals paid prices an estimated 3.1 percent lower than those paid by small hospitals. To examine the effect of the three factors combined, while controlling for differences in the costliness of SCODs that hospitals purchased, we used the estimates from the two models—one for drug SCODs and one for radiopharmaceutical SCODs—to simulate the prices that certain groups of hospitals paid. In particular, we focused on comparing the prices paid by hospitals with one combination of characteristics—major teaching, urban, and large—with the prices paid by hospitals with a different combination of characteristics—nonteaching, rural, and small. Table 5 appears as table 1 in our report Medicare: Drug Purchase Prices for CMS Consideration in Hospital Outpatient Rate-Setting, GAO-05-581R (Washington, D.C.: June 30, 2005). The label of the second column—HCPCS code—refers to the Healthcare Common Procedure Coding System, which CMS uses to define SCODs.
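In a log-price model with indicator variables, a simulated price gap between two hospital profiles can be obtained by summing the relevant coefficients before exponentiating. The sketch below uses made-up coefficient values chosen only to be of the same order as the estimates discussed above; it does not reproduce the actual models:

```python
import math

# Hypothetical log-price coefficients (illustrative values only, not the
# estimates from tables 3 and 4).
coef = {"major_teaching": -0.033, "rural": 0.043, "large": -0.014}

def profile_log_effect(profile):
    """Sum of the coefficients switched on for a given hospital profile."""
    return sum(coef[k] for k, on in profile.items() if on)

group_a = {"major_teaching": True, "rural": False, "large": True}   # urban, large
group_b = {"major_teaching": False, "rural": True, "large": False}  # rural, small

# Percentage difference in simulated price, group A relative to group B.
gap = 100 * (math.exp(profile_log_effect(group_a) - profile_log_effect(group_b)) - 1)
```

With these illustrative values, group A's simulated price comes out several percent below group B's, matching the direction of the comparison in the text.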
Table 5 reports, for each drug SCOD, the CMS payment rate for 2005, the average sales price, the average purchase price and its 95 percent confidence interval, the median purchase price and its 95 percent confidence interval, and 2004 Medicare spending (in millions of dollars); the numeric entries are not reproduced here. The drug SCODs listed in table 5 are:

Injection, Epoetin Alpha (for non-ESRD use), per 1,000 units
Injection, Pegfilgrastim, 6 mg
Injection, Immune Globulin, Intravenous, Lyophilized, 1 g
Injection, Immune Globulin, Intravenous, Non-Lyophilized, 1 g
Injection, Infliximab, 10 mg
Injection, Darbepoetin alfa, 1 mcg (non-ESRD use)
Injection, Oxaliplatin, per 5 mg
Injection, Zoledronic Acid, 1 mg
Gemcitabine Hcl, 200 mg
Injection, Nesiritide, 0.25 mg
Leuprolide Acetate (for depot suspension), 7.5 mg
Injection, Alpha 1-Proteinase Inhibitor - Human, 10 mg
Injection, Bevacizumab, 10 mg
Injection, Filgrastim (G-CSF), 480 mcg
Injection, Leuprolide Acetate (for depot suspension), per 3.75 mg
Doxorubicin Hydrochloride, all lipid formulations, 10 mg
Injection, Octreotide, depot form for intramuscular injection, 1 mg
Injection, Cetuximab, 10 mg
Injection, Bortezomib, 0.1 mg
Injection, Filgrastim (G-CSF), 300 mcg
Injection, Imiglucerase, per unit
Injection, Verteporfin, 0.1 mg
Goserelin Acetate Implant, per 3.6 mg
Injection, Granisetron Hydrochloride, 100 mcg
Botulinum Toxin Type A, per unit
Injection, Amifostine, 500 mg
Injection, Pamidronate Disodium, per 30 mg
Vinorelbine Tartrate, per 10 mg
Injection, Reteplase, 18.1 mg
Injection, Mitoxantrone Hydrochloride, per 5 mg
Fludarabine Phosphate, 50 mg
Apligraf®, per 44 square centimeters
Injection, Fulvestrant, 25 mg
Injection, Tenecteplase, 50 mg
Injection, Pemetrexed, 10 mg
Denileukin Diftitox, 300 mcg
Injection, Agalsidase Beta, 1 mg
Granisetron Hydrochloride, 1 mg, oral
Injection, Palonosetron Hcl, 25 mcg
Injection, Immune Globulin, Intravenous, Lyophilized, 10 mg
Injection, Immune Globulin, Intravenous, Non-Lyophilized, 10 mg
Factor VIII (Antihemophilic Factor, Human), per I.U.
Injection, Abciximab, 10 mg
Injection, Cytomegalovirus Immune Globulin Intravenous (Human), per vial
Injection, Eptifibatide, 5 mg
Interferon, Alfa-2B, Recombinant, 1 million units
Dermagraft®, per 37.5 square centimeters

Notes to table 5: The estimate of the total number of hospitals in the population is based on our sample. The payment rate shown is the rate specified for each HCPCS code for 2005; it incorporates CMS's April 2005 update. CMS publishes the ASP plus 6 percent for certain drugs used in physicians' offices; these amounts are based on data provided by manufacturers each quarter. We are reporting ASPs for the quarter beginning in April 2005, and the ASPs reported here do not include the 6 percent added by CMS. For HCPCS codes that contain only one National Drug Code (NDC), we do not include information on the average or median purchase price because of the potential proprietary sensitivity of such information. On April 1, 2005, CMS replaced J1563, Injection, Immune Globulin, Intravenous, 1 g, with two new codes: Q9941 and Q9943; because J1563 was replaced by two codes, we could not estimate the total number of hospitals in the population for these new codes individually. J1563 was ranked fourth in total Medicare spending on SCODs from January 1, 2004, to September 30, 2004, accounting for $127.1 million, or 6.4 percent of total Medicare spending on SCODs for that time period. On January 1, 2005, CMS replaced C9214, C9215, C9207, C9213, C9208, and C9210 with J9035, J9055, J9041, J9305, J0180, and J2469, respectively.
The ranks for the new codes correspond to the ranks in total Medicare spending on SCODs from January 1, 2004, to September 30, 2004, for the former codes. On April 1, 2005, CMS replaced J1564, Injection, Immune Globulin, Intravenous, 10 mg, with two new codes: Q9942 and Q9944; because J1564 was replaced by two codes, we could not estimate the total number of hospitals in the population for these new codes individually. J1564 was ranked 47th in total Medicare spending on SCODs from January 1, 2004, to September 30, 2004, accounting for $4.4 million, or 0.2 percent of total Medicare spending on SCODs for that time period. For this SCOD, our sample data cannot be extrapolated to compute a confidence interval for the median.

Table 6 appears as table 1 in our report Medicare: Radiopharmaceutical Purchase Prices for CMS Consideration in Hospital Outpatient Rate-Setting, GAO-05-733R (Washington, D.C.: July 14, 2005). The label of the second column—HCPCS code—refers to the Healthcare Common Procedure Coding System, which CMS uses to define SCODs. Table 6 reports, for each radiopharmaceutical SCOD (for example, Technetium Tc 99m Tetrofosmin, per dose, and Fluorodeoxyglucose (FDG) F18, per dose (4-40 mCi/ml)), the CMS payment rate for 2005, the average purchase price and its 95 percent confidence interval, and the median purchase price and its 95 percent confidence interval; the numeric entries are not reproduced here. The estimate of the total number of hospitals in the population is based on our sample.

Phyllis Thorburn, Assistant Director; Hannah Fein; Dae Park; Jonathan Ratner; and Thomas Walke made key contributions to this report.
In 2003, the Medicare Modernization Act required the Centers for Medicare & Medicaid Services (CMS) to establish payment rates for a set of new pharmaceutical products—drugs and radiopharmaceuticals—provided to beneficiaries in a hospital outpatient setting. These products were classified for payment purposes as specified covered outpatient drugs (SCOD). The legislation directed CMS to set 2006 Medicare payment rates for SCODs equal to hospitals' average acquisition costs and included requirements for GAO. As directed, GAO surveyed hospitals and issued two reports, providing information to use in setting 2006 SCOD rates. To address other requirements in the law, this report analyzes SCOD price variation across hospitals, advises CMS on future surveys it might undertake, and examines both lessons from the GAO survey and future challenges facing CMS. Analyzing pharmaceutical price data collected from its 2004 survey of hospitals, GAO found that prices hospitals paid for SCOD products varied across hospitals. Certain factors—namely, whether the hospital had a major teaching program or not, was in an urban or rural area, and had a large or small hospital outpatient department—were associated with whether hospitals paid higher or lower prices for SCOD products. Major teaching hospitals paid prices that were an estimated 3.2 percent lower than those paid by nonteaching hospitals for drug SCODs; rural hospitals paid prices an estimated 4.4 percent higher than those paid by urban hospitals for radiopharmaceutical SCODs; and large hospitals paid prices an estimated 1.4 percent lower than those paid by small hospitals for drug SCODs and 3.1 percent lower for radiopharmaceutical SCODs. Combining these factors, GAO found that large, urban, major teaching hospitals—compared with other hospitals—generally paid lower prices, on average, for all SCOD products.
From conducting its hospital survey, GAO learned a key lesson that CMS could use in the future: such a survey would not be practical for collecting the data needed to set and update SCOD rates routinely but would be useful for validating, on occasion, CMS's rate-setting data. GAO's survey produced accurate hospital drug price data, but it also created a considerable burden for hospitals as the data suppliers and considerable costs for GAO as the data collector. Nonetheless, the benefit of collecting actual prices paid by hospitals could make such surveys advantageous for occasionally validating CMS's proxy for SCODs' average acquisition costs—the average sales price (ASP) data that manufacturers report. CMS will face important challenges as it seeks to obtain accurate data on hospitals' acquisition costs for drug and radiopharmaceutical SCODs. Regarding drugs, CMS lacks the detail on manufacturers' ASP data needed to determine if rates developed from these data are appropriate for hospitals. Manufacturers report ASP as a single price paid by all purchasers, making it impossible to distinguish the price paid by hospitals alone. CMS instructs manufacturers to report ASP net of rebates but does not specify how to allocate individual product rebates when several products are purchased. Regarding radiopharmaceuticals, GAO found that the diversity of forms in which they can be purchased—ready-to-use unit doses, multidoses, or separately purchased radioactive and non-radioactive substances—complicates CMS's efforts to select a data source that can provide reasonably accurate price data efficiently. Efficiency as well as accuracy is a factor in selecting a data source because radiopharmaceuticals account for only 1.5 percent of Medicare hospital outpatient spending.
GAO's experience suggests that the best option available to CMS, in terms of accuracy and efficiency, is to collect price data on radiopharmaceuticals purchased in ready-to-use unit doses, the form in which an estimated three-quarters of hospitals purchase these products.
In August 1990, Iraq invaded Kuwait, and the United Nations imposed sanctions against Iraq. Security Council Resolution 661 of 1990 prohibited all nations from buying and selling Iraqi commodities, except for food and medicine. Security Council Resolution 661 also prohibited all nations from exporting weapons or military equipment to Iraq and established a sanctions committee to monitor compliance and progress in implementing the sanctions. The members of the sanctions committee were members of the Security Council. Subsequent Security Council resolutions specifically prohibited nations from exporting to Iraq items that could be used to build chemical, biological, or nuclear weapons. In 1991, the Security Council offered to let Iraq sell oil under a U.N. program to meet its people's basic needs. The Iraqi government rejected the offer, and over the next 5 years, the United Nations reported food shortages and a general deterioration in social services. In December 1996, the United Nations and Iraq agreed on the Oil for Food program, which permitted Iraq to sell up to $1 billion worth of oil every 90 days to pay for food, medicine, and humanitarian goods. Subsequent U.N. resolutions increased the amount of oil that could be sold and expanded the humanitarian goods that could be imported. In 1999, the Security Council removed all restrictions on the amount of oil Iraq could sell to purchase civilian goods. The United Nations and the Security Council monitored and screened contracts that the Iraqi government signed with commodity suppliers and oil purchasers, and Iraq's oil revenue was placed in a U.N.-controlled escrow account. In May 2003, U.N. resolution 1483 requested the U.N. Secretary General to transfer the Oil for Food program to the CPA by November 2003. Despite concerns that sanctions may have worsened the humanitarian situation, the Oil for Food program appears to have helped the Iraqi people.
According to the United Nations, the average food intake increased from around 1,275 calories per person per day in 1996 to about 2,229 calories at the end of 2001. In February 2002, the United Nations reported that the Oil for Food program had considerable success in several sectors, such as agriculture, food, health, and nutrition, by arresting the decline in living conditions and improving the nutritional status of the average Iraqi citizen. The Public Distribution System run by Iraq's Ministry of Trade is the food portion of the Oil for Food program. The system distributes a monthly "food basket" that normally consists of a dozen items to all Iraqis. About 60 percent of Iraqis rely on this basket as their main source of food. We estimate that, from 1997 through 2002, the former Iraqi regime acquired $10.1 billion in illegal revenues related to the Oil for Food program—$5.7 billion through oil smuggling and $4.4 billion through surcharges against oil sales and illicit commissions from commodity suppliers. This estimate is higher than the $6.6 billion in illegal revenues we reported in May 2002. We updated our estimate to include (1) oil revenue and contract amounts for 2002, (2) updated letters of credit from prior years, and (3) newer estimates of illicit commissions from commodity suppliers. Oil was smuggled out through several routes, according to U.S. government officials and oil industry experts. Oil entered Syria by pipeline, crossed the borders of Jordan and Turkey by truck, and was smuggled through the Persian Gulf by ship. In addition to revenues from oil smuggling, the Iraqi government levied surcharges against oil purchasers and commissions against commodity suppliers participating in the Oil for Food program. According to some Security Council members, the surcharge was up to 50 cents per barrel of oil and the commission was 5 to 15 percent of the commodity contract.
In our 2002 report, we estimated that the Iraqi regime received a 5-percent illicit commission on commodity contracts. However, a September 2003 Department of Defense review found that at least 48 percent of 759 Oil for Food contracts that it reviewed were overpriced by an average of 21 percent. Defense officials found 5 contracts that included “after-sales service charges” of between 10 and 20 percent. In addition, interviews by U.S. investigators with high-ranking Iraq regime officials, including the former oil and finance ministers, confirmed that the former regime received a 10-percent commission from commodity suppliers. Both OIP and the sanctions committee were responsible for overseeing the Oil for Food Program. However, the Iraqi government negotiated contracts directly with purchasers of Iraqi oil and suppliers of commodities. While OIP was to examine each contract for price and value, it is unclear how it performed this function. The sanctions committee was responsible for monitoring oil smuggling, screening contracts for items that could have military uses, and approving oil and commodity contracts. The sanctions committee responded to illegal surcharges on oil, but it is unclear what actions it took to respond to commissions on commodity contracts. U.N. Security Council resolutions and procedures recognized the sovereignty of Iraq and gave the Iraqi government authority to negotiate contracts and decide on contractors. Security Council resolution 986 of 1995 authorized states to import petroleum products from Iraq, subject to the Iraqi government’s endorsement of transactions. Resolution 986 also stated that each export of goods would be at the request of the government of Iraq. Security Council procedures for implementing resolution 986 further stated that the Iraqi government or the United Nations Inter-Agency Humanitarian Program would contract directly with suppliers and conclude the appropriate contractual arrangements. 
Iraqi control over contract negotiations may have been one important factor in allowing Iraq to levy illegal surcharges and commissions. Appendix I contains a chronology of major events related to sanctions against Iraq and the administration of the Oil for Food program. OIP administered the Oil for Food program from December 1996 to November 2003. As provided in Security Council resolution 986 of 1995 and a memorandum of understanding between the United Nations and the Iraqi government, OIP was responsible for monitoring the sale of Iraq’s oil, monitoring Iraq’s purchase of commodities and the delivery of goods, and accounting for the program’s finances. The United Nations received 3 percent of Iraq’s oil export proceeds for its administrative and operational costs, which included the cost of U.N. weapons inspections. The sanctions committee’s procedures for implementing resolution 986 stated that U.N. independent inspection agents were responsible for monitoring the quality and quantity of oil being shipped and were authorized to stop shipments if they found irregularities. To do this, OIP employed 14 contract workers to monitor Iraqi oil sales at 3 exit points in Iraq. However, the Iraqi government bypassed the official exit points by smuggling oil through an illegal Syrian pipeline and by trucks through Jordan and Turkey. According to OIP, member states were responsible for ensuring that their nationals and corporations complied with the sanctions. OIP was also responsible for monitoring Iraq’s purchase of commodities and the delivery of goods. Security Council Resolution 986, paragraph 8a(ii) required Iraq to submit a plan, approved by the Secretary General, to ensure equitable distribution of Iraq’s commodity purchases. The initial distribution plans focused on food and medicines while subsequent plans were expansive and covered 24 economic sectors, including electricity, oil, and telecommunications. 
The sanctions committee's procedures for implementing Security Council resolution 986 stated that experts in the Secretariat were to examine each proposed Iraqi commodity contract, in particular the details of price and value, and to determine whether the contract items were on the distribution plan. It is unclear whether OIP performed this function. OIP officials told the Defense Contract Audit Agency they performed very limited, if any, pricing review. They stated that no U.N. resolution tasked them with assessing the price reasonableness of the contracts and no contracts were rejected solely on the basis of price. The sanctions committee's procedures for implementing resolution 986 stated that independent inspection agents would confirm the arrival of supplies in Iraq. OIP deployed about 78 U.N. contract monitors to verify shipments and authenticate the supplies for payment. OIP employees were able to visually inspect 7 to 10 percent of the approved deliveries. Security Council resolution 986 also requested the Secretary General to establish an escrow account for the Oil for Food program and to appoint independent and certified public accountants to audit the account. In this regard, the Secretary General established an escrow account at BNP Paribas into which Iraqi oil revenues were deposited and letters of credit were issued to suppliers having approved contracts. The U.N. Board of Audit, a body of external public auditors, audited the account. According to OIP, there were also numerous internal audits of the program. We are trying to obtain these audits. The sanctions committee was responsible for three key elements of the Oil for Food program: (1) monitoring implementation of the sanctions, (2) screening contracts to prevent the purchase of items that could have military uses, and (3) approving Iraq's oil and commodity contracts. U.N.
Security Council resolution 661 of 1990 directed all states to prevent Iraq from exporting petroleum products into their territories. Paragraph 6 of resolution 661 established a sanctions committee to report to the Security Council on states' compliance with the sanctions and recommend actions regarding effective implementation. As early as June 1996, the Maritime Interception Force, a naval force of coalition partners including the United States and Great Britain, informed the sanctions committee that oil was being smuggled out of Iraq through Iranian territorial waters. In December 1996, Iran acknowledged the smuggling and reported that it had taken action. In October 1997, the sanctions committee was again informed about smuggling through Iranian waters. According to multiple sources, oil smuggling also occurred through Jordan, Turkey, Syria, and the Gulf. Smuggling was a major source of illicit revenue for the former Iraqi regime through 2002. It is unclear what actions the sanctions committee recommended to the Security Council to address the continued smuggling. A primary function of the members of the sanctions committee was to review and approve contracts for items that could be used for military purposes. For example, the United States conducted the most thorough review; about 60 U.S. government technical experts assessed each item in a contract to determine its potential military application. According to U.N. Secretariat data in 2002, the United States was responsible for about 90 percent of the holds placed on goods to be exported to Iraq. As of April 2002, about $5.1 billion worth of goods were being held for shipment to Iraq. Under Security Council resolution 986 of 1995, paragraphs 1 and 8, the sanctions committee was responsible for approving Iraq's oil contracts, particularly to ensure that the contract price was fair, and for approving most of Iraq's commodity contracts.
In March 2001, the United States informed the Security Council about allegations that Iraqi government officials were receiving illegal surcharges on oil contracts and illicit commissions on commodity contracts. According to OIP officials, the Security Council took action on the allegations of surcharges in 2001 by implementing retroactive pricing for oil contracts. However, it is unclear what actions the sanctions committee took to respond to illicit commissions on commodity contracts. At that time, there was increasing concern about the humanitarian situation in Iraq and pressure on the United States to expedite its review process. In November 2003, the United Nations transferred to the CPA responsibility for 3,059 Oil for Food contracts totaling about $6.2 billion and decided not to transfer the remaining 2,199 contracts for a variety of reasons. U.N. agencies had renegotiated with suppliers most of the contracts turned over to the CPA to remove illicit charges and amend delivery and location terms. However, the information the United Nations supplied to the CPA on the renegotiated contracts contained database errors and did not include all contracts, amendments, and letters of credit associated with the 3,059 transferred contracts. These data problems, coupled with inadequate staffing at the CPA, hampered the ability of the CPA's Oil for Food coordination center to ensure that suppliers complied with commodity deliveries. In addition, poor planning and coordination are affecting the execution of food contracts. On November 22, 2003, OIP transferred 3,059 contracts worth about $6.2 billion in pending commodity shipments to the CPA, according to OIP. Prior to the transfer, U.N. agencies had renegotiated the contracts with the suppliers to remove "after-sales service fees"—based on information provided by the CPA and Iraqi ministries—and to change delivery dates and locations. These fees were either calculated separately or were part of the unit price of the goods.
At the time of the transfer, all but 251 contracts had been renegotiated with the suppliers. The Defense Contract Management Agency is renegotiating the remaining contracts for the CPA to remove additional fees averaging 10 percent. The criteria for renegotiating contracts and the amount of the reductions were based on information from the CPA in Baghdad and the ministries that originally negotiated the contracts. An additional 2,199 contracts worth almost $2 billion were not transferred as a result of a review by U.N. agencies, the CPA, and the Iraqi ministries that negotiated the contracts. For example: The review did not recommend continuing 762 contracts, worth almost $1.2 billion, because it determined that the commodities associated with the contracts were no longer needed. Another 728 contracts, worth about $750 million, had been classified as priority contracts, but were not transferred to the CPA for several reasons. About half—351 contracts—were not transferred because suppliers were concerned about the adequacy of security within Iraq or could not reach agreement on price reductions or specification changes. Another 180 contracts were considered fully delivered. Another 136 suppliers had either declared bankruptcy, did not exist, or did not respond to U.N. requests. It is unclear why the remaining 61 contracts were removed from the priority list; the OIP document lists them as “other.” Suppliers did not want to ship the outstanding small balances for an additional 709 contracts totaling about $28 million. The largest portion of the $6.2 billion in Oil for Food contracts pending shipment in November 2003—about 23 percent—was designated for food procurement. An additional 9 percent was for food handling and transport. The oil infrastructure, power, and agriculture sectors also benefited from the remaining contracts. Nearly one half of the renegotiated contracts were with suppliers in Russia, Jordan, Turkey, the United Arab Emirates, and France. 
According to CPA officials and documents, the incomplete and unreliable contract information the CPA received from the United Nations has hindered CPA’s ability to execute and accurately report on the remaining contracts. U.N. resolution 1483 requested the Secretary General, through OIP, to transfer to the CPA all relevant documentation on Oil for Food contracts. When we met with OIP officials on November 24, 2003, they stated that they had transferred all contract information to the CPA. CPA officials and documents report that the CPA has not received complete information, including copies of all contracts. The CPA received several compact disks in November and January that were to contain detailed contract and delivery data, but the information was incomplete. The CPA received few source documents such as the original contracts, amendments, and letters of credit needed to identify the status of commodities, prepare shipment schedules, and contact suppliers. In addition, the CPA received little information on letters of credit that had expired or were cancelled. Funds for the Oil for Food program are obligated by letters of credit to the bank holding the U.N. escrow account. When these commitments are cancelled, the remaining funds are available for transfer to the Development Fund for Iraq. Without this information, the CPA cannot determine the disposition of Oil for Food funds and whether the proper amounts were deposited into the Development Fund for Iraq. In addition, the CPA received an OIP contract database but found it unreliable. For example, CPA staff found mathematical and currency errors in the calculation of contract cost. The inadequate data and documentation have made it difficult for CPA to prepare accurate reports on the status of inbound goods and closeouts of completed contracts. 
According to a Department of Defense contracting official, some contractors have not received payment for goods delivered in Iraq because the CPA had no record of their contracts. In November 2003, the CPA established a coordination center in Baghdad to oversee the receipt and delivery of Oil for Food commodities. The CPA authorized 48 coalition positions, to be assisted by Iraqis from various ministries. However, according to several U.S. and U.N. officials, the CPA had insufficient staff to manage the program and experienced high staff turnover. As of mid-December 2003, the center had 19 coalition staff, including 18 staff whose tours ended in January 2004. U.S. and WFP officials stated that the staff assigned at the time of the transfer lacked experience in managing and monitoring the import and distribution of goods. A former CPA official stated that the Oil for Food program had been thrust upon an already overburdened and understaffed CPA. As a result, 251 contracts had not been renegotiated prior to the time of the transfer, and the CPA asked the Defense Contract Management Agency to continue the renegotiation process. A November 2003 WFP report placed part of the blame for food shortfalls during the fall of 2003 on OIP delays in releasing guidelines for the contract prioritization and renegotiation process. A September 2003 U.N. report also noted that the transfer process in the northern governorates was slowing due to an insufficient number of CPA counterparts to work with U.N. staff on transition issues. The center’s capacity improved in March 2004, when its coalition staff totaled 37. By April 2004, the coordination center had 16 coalition staff. Up to 40 Iraqi ministry staff are currently working on Oil for Food contracts. As of April 1, the coordination center’s seven ministry advisors have begun working with staff at their respective ministries as the first step in moving control of the program to the Iraqi government. According to U.S.
officials and documents, CPA’s failed plans to privatize the food distribution system and delayed negotiations with WFP to administer the system resulted in diminished stocks of food commodities and localized shortages. Before the transfer of the Oil for Food program, the CPA administrator proposed to eliminate Iraq’s food distribution system and to provide former recipients with cash payments. He asserted that the system was expensive and depressed the agricultural sector, and the Ministry of Trade began drawing down existing inventories of food. In December 2003, as the security environment worsened, the CPA administrator reversed his decision to reform the food ration system and left the decision to the provisional Iraqi government. In January 2004, CPA negotiated a memorandum of understanding (MOU) with WFP and the Ministry of Trade that committed WFP to procuring a 3-month emergency food stock by March 31, 2004, and providing technical support to the CPA and Ministry of Trade. Delays in signing the MOU were due to disagreements about the procurement of emergency food stocks, contract delivery terms, and the terms of WFP’s involvement. No additional food was procured during the negotiations, and food stocks diminished and localized shortages occurred in February and March 2004. The CPA and WFP addressed these problems with emergency procurements from nearby countries. An April WFP report projected a continued supply of food items through May 2004, except for a 12-percent shortage in milk. Only 55 percent of required domestic wheat has been procured for July 2004, and no domestic wheat has been procured for August. Under the terms of the MOU, WFP’s commitment to procuring food stock ended March 31, 2004. The Ministry of Trade assumed responsibility for food procurement on April 1, 2004. According to a U.S. official, coordination between WFP and the Ministry of Trade has been deteriorating.
The Ministry has not provided WFP with complete and timely information on monthly food allocation plans, weekly stock reports, or information on cargo arrivals, as the MOU required. WFP staff reported that the Ministry’s data are subject to sudden, large, and unexplained stock adjustments, thereby making it difficult to plan deliveries. The security environment in Iraq also affected planning for the transfer and movement of Oil for Food goods in fall 2003. The transfer occurred during a period of deteriorating security conditions and growing violence in Iraq. A September 2003 U.N. report found that the evacuation of U.N. personnel from Baghdad affected the timetable and procedures for the transfer of the Oil for Food program to the CPA and contributed to delays in the contract prioritization and renegotiation processes. Most WFP staff remained in Amman and other regional offices and continued to manage the Oil for Food program from those locations. The August bombing of the U.N. Baghdad headquarters also resulted in the temporary suspension of the border inspection process and shipments of humanitarian supplies and equipment. A March 2004 CPA report also noted that stability of the food supply would be affected if security conditions worsened. The history of inadequate oversight and corruption in the Oil for Food program raises questions about the Iraqi government’s ability to manage the import and distribution of Oil for Food commodities and the billions in international assistance expected to flow into the country. In addition, the food distribution system created a dependency on food subsidies that disrupted private food markets. The government will have to decide whether to continue, reform, or eliminate the current system. The CPA and Iraqi ministries must address corruption in the Oil for Food program to help ensure that the remaining contracts are managed with transparent and accountable controls.
Building these internal control and accountability measures into the operations of Iraqi ministries will also help safeguard the $18.4 billion in fiscal year 2004 U.S. reconstruction funds and at least $13.8 billion pledged by other countries. To address these concerns and oversee government operations, the CPA administrator announced the appointment of inspectors general for 21 of Iraq’s 25 national ministries on March 30, 2004. At the same time, the CPA announced the establishment of two independent agencies to work with the inspectors general—the Commission on Public Integrity and a Board of Supreme Audit. Finally, the United States will spend about $1.63 billion on governance-related activities in Iraq, which will include building a transparent financial management system in Iraq’s ministries. CPA’s coordination center continues to provide on-the-job training for ministry staff who will assume responsibility for Oil for Food contracts after July 2004. Coalition personnel have provided Iraqi staff with guidance on working with suppliers in a fair and open manner and determining when changes to letters of credit are appropriate. In addition, according to center staff, coalition and Iraqi staff signed a code of conduct, which outlined proper job behavior. Among other provisions, the code of conduct prohibited kickbacks and secret commissions from suppliers. The center also developed a code of conduct for suppliers. The center has also begun identifying the steps needed for the transition of full authority to the Iraqi ministries. These steps include transferring contract-related documents, contacting suppliers, and providing authority to amend contracts. In addition, the January 2004 MOU commits WFP to training ministry staff in the procurement and transport functions currently conducted by WFP. Training is taking place at WFP headquarters in Rome, Italy.
After the CPA transfers responsibility for the food distribution system to the Iraqi provisional government in July 2004, the government will have to decide whether to continue, reform, or eliminate the current system. Documents from the Ministries of Trade and Finance indicate that the annual cost of maintaining the system is as high as $5 billion, or about 25 percent of total government expenditures. In 2005 and 2006, expenditures for food will be almost as much as all expenditures for capital projects. According to a September 2003 joint U.N. and World Bank needs assessment of Iraq, the food subsidy, given out as a monthly ration to the entire population, staved off mass starvation during the time of the sanctions, but at the same time it disrupted the market for food grains produced locally. The agricultural sector had little incentive to produce crops in the absence of a promising market. However, the Iraqi government may find it politically difficult to scale back the food distribution system with 60 percent of the population relying on monthly rations as their primary source of nutrition. WFP is completing a vulnerability assessment that Iraq could use to make future decisions on food security programs and better target food items to those most in need.

Mr. Chairman and Members of the Committee, this concludes my prepared statement. I will be happy to answer any questions you may have. For questions regarding this testimony, please call Joseph Christoff at (202) 512-8979. Other key contributors to this statement were Pamela Briggs, Lyric Clark, Lynn Cothern, Jeanette Espinola, Zina Merritt, Tetsuo Miyabara, José M. Peña, III, Stephanie Robinson, Jonathan Rose, Richard Seldin, Audrey Solis, and Phillip Thomas.

- Iraqi forces invaded Kuwait.
- Resolution 660 condemned the invasion and demanded immediate withdrawal from Kuwait.
- Imposed economic sanctions against the Republic of Iraq.
The resolution called for member states to prevent all commodity imports from Iraq and exports to Iraq, with the exception of supplies intended strictly for medical purposes and, in humanitarian circumstances, foodstuffs.
- President Bush ordered the deployment of thousands of U.S. forces to Saudi Arabia.
- Public Law 101-513 prohibited the import of products from Iraq into the United States and export of U.S. products to Iraq.
- Iraq War Powers Resolution authorized the president to use “all necessary means” to compel Iraq to withdraw military forces from Kuwait.
- Operation Desert Storm was launched: Coalition operation was targeted to force Iraq to withdraw from Kuwait.
- Iraq announced acceptance of all relevant U.N. Security Council resolutions.
- U.N. Security Council Resolution 687 (Cease-Fire Resolution): Mandated that Iraq must respect the sovereignty of Kuwait and declare and destroy all ballistic missiles with a range of more than 150 kilometers as well as all weapons of mass destruction and production facilities. The U.N. Special Commission (UNSCOM) was charged with monitoring Iraqi disarmament as mandated by U.N. resolutions and with assisting the International Atomic Energy Agency in nuclear monitoring efforts.
- Proposed the creation of an Oil for Food program and authorized an escrow account to be established by the Secretary General. Iraq rejected the terms of this resolution.
- Second attempt to create an Oil for Food program. Iraq rejected the terms of this resolution.
- Authorized transferring money produced by any Iraqi oil transaction on or after August 6, 1990, which had been deposited into the escrow account, to the states or accounts concerned as long as the oil exports took place or until sanctions were lifted.
- Allowed Iraq to sell $1 billion worth of oil every 90 days. Proceeds were to be used to procure foodstuffs, medicine, and material and supplies for essential civilian needs. Resolution 986 was supplemented by several U.N.
resolutions over the next 7 years that extended the Oil for Food program for different periods of time and increased the amount of exported oil and imported humanitarian goods.
- Established the export and import monitoring system for Iraq.
- Signed a memorandum of understanding allowing Iraq’s export of oil to pay for food, medicine, and essential civilian supplies.
- Based on information provided by the Multinational Interception Force (MIF), communicated concerns about alleged smuggling of Iraqi petroleum products through Iranian territorial waters in violation of resolution 661 to the Security Council sanctions committee.
- Committee members asked the United States for more factual information about smuggling allegations, including the final destination and the nationality of the vessels involved.
- Provided briefing on the Iraqi oil smuggling allegations to the sanctions committee.
- Acknowledged that some vessels carrying illegal goods and oil to and from Iraq had been using the Iranian flag and territorial waters without authorization and that Iranian authorities had confiscated forged documents and manifests. Representative agreed to provide the results of the investigations to the sanctions committee once they were available.
- Phase I of the Oil for Food program began.
- Extended the term of resolution 986 another 180 days (phase II). Authorized special provision to allow Iraq to sell petroleum in a more favorable time frame.
- Brought the issue of Iraqi smuggling petroleum products through Iranian territorial waters to the attention of the U.N. Security Council sanctions committee.
- Coordinator of the Multinational Interception Force (MIF): Reported to the U.N. Security Council sanctions committee that since February 1997 there had been a dramatic increase in the number of ships smuggling petroleum from Iraq inside Iranian territorial waters.
- Extended the Oil for Food program another 180 days (phase III).
- Raised Iraq’s export ceiling of oil to about $5.3 billion per 6-month phase (phase IV). Permitted Iraq to export additional oil in the 90 days from March 5, 1998, to compensate for delayed resumption of oil production and reduced oil price.
- Authorized Iraq to buy $300 million worth of oil spare parts to reach the export ceiling of about $5.3 billion.
- Public Law 105-235, a joint resolution finding Iraq in unacceptable and material breach of its international obligations.
- Oct. 31, 1998, U.S. legislation (Iraq Liberation Act, Public Law 105-338): §4 authorized the president to provide assistance to Iraqi democratic opposition organizations.
- Iraq announced it would terminate all forms of interaction with UNSCOM and that it would halt all UNSCOM activity inside Iraq.
- Renewed the Oil for Food program for 6 months beyond November 26 at the higher levels established by resolution 1153. The resolution included additional oil spare parts (phase V).
- Following Iraq’s recurrent blocking of U.N. weapons inspectors, President Clinton ordered 4 days of air strikes against military and security targets in Iraq that contribute to Iraq’s ability to produce, store, and maintain weapons of mass destruction and potential delivery systems.
- President Clinton provided the status of efforts to obtain Iraq’s compliance with U.N. Security Council resolutions. He discussed the MIF report of oil smuggling out of Iraq and smuggling of other prohibited items into Iraq.
- Renewed the Oil for Food program another 6 months (phase VI). Permitted Iraq to export an additional amount of $3.04 billion of oil to make up for revenue deficits in phases IV and V.
- Extended phase VI of the Oil for Food program for 2 weeks until December 4, 1999.
- Extended phase VI of the Oil for Food program for 1 week until December 11, 1999.
- Renewed the Oil for Food program another 6 months (phase VII).
- Abolished Iraq’s export ceiling to purchase civilian goods.
Eased restrictions on the flow of civilian goods to Iraq and streamlined the approval process for some oil industry spare parts. Also established the United Nations Monitoring, Verification and Inspection Commission (UNMOVIC).
- Increased oil spare parts allocation from $300 million to $600 million under phases VI and VII.
- Renewed the Oil for Food program another 180 days until December 5, 2000 (phase VIII).
- Extended the Oil for Food program another 180 days (phase IX).
- Ambassador Cunningham acknowledged Iraq’s illegal re-export of humanitarian supplies, oil smuggling, establishment of front companies, and payment of kickbacks to manipulate and gain from Oil for Food contracts. Also acknowledged that the United States had put holds on hundreds of Oil for Food contracts that posed dual-use concerns.
- Ambassador Cunningham addressed questions regarding allegations of surcharges on oil and smuggling. Acknowledged that oil industry representatives and other Security Council members provided the United States anecdotal information about Iraqi surcharges on oil sales. Also acknowledged companies claiming they were asked to pay commissions on contracts.
- Extended the terms of resolution 1330 (phase IX) another 30 days.
- Renewed the Oil for Food program an additional 150 days until November 30, 2001 (phase X). The resolution stipulated that a new Goods Review List would be adopted and that relevant procedures would be subject to refinement.
- Renewed the Oil for Food program another 180 days (phase XI).
- UNMOVIC reviewed export contracts to ensure that they contain no items on a designated list of dual-use items known as the Goods Review List. The resolution also extended the program another 180 days (phase XII).
- MIF reported that there had been a significant reduction in illegal oil exports from Iraq by sea over the past year but noted oil smuggling was continuing.
- Extended phase XII of the Oil for Food program another 9 days.
- Renewed the Oil for Food program another 180 days until June 3, 2003 (phase XIII). Approved changes to the list of goods subject to review and the sanctions committee.
- Chairman reported on a number of alleged sanctions violations noted by letters from several countries and the media from February to November 2002. Alleged incidents involved Syria, India, Liberia, Jordan, Belarus, Switzerland, Lebanon, Ukraine, and the United Arab Emirates.
- Operation Iraqi Freedom was launched: Coalition operation led by the United States initiated hostilities in Iraq.
- Adjusted the Oil for Food program and gave the Secretary General authority for 45 days to facilitate the delivery and receipt of goods contracted by the Government of Iraq for the humanitarian needs of its people.
- Public Law 108-11 §1503 authorized the President to suspend the application of any provision of the Iraq Sanctions Act of 1990.
- Extended provision of resolution 1472 until June 3, 2003.
- End of major combat operations and beginning of post-war rebuilding efforts.
- Lifted civilian sanctions on Iraq and provided for the end of the Oil for Food program within 6 months, transferring responsibility for the administration of any remaining program activities to the Coalition Provisional Authority (CPA).
- Transferred administration of the Oil for Food program to the CPA.
- Responded to allegations of fraud by U.N. officials who were involved in the administration of the Oil for Food program. Proposed that a special investigation be conducted by an independent panel.
The Oil for Food program was established by the United Nations and Iraq in 1996 to address concerns about the humanitarian situation after international sanctions were imposed in 1990. The program allowed the Iraqi government to use the proceeds of its oil sales to pay for food, medicine, and infrastructure maintenance. The program appears to have helped the Iraqi people. From 1996 through 2001, the average daily food intake increased from 1,300 to 2,300 calories. From 1997 through 2002, Iraq sold more than $67 billion of oil through the program and issued $38 billion in letters of credit to purchase commodities. GAO (1) reports on its estimates of the revenue diverted from the program, (2) provides preliminary observations on the program's administration, (3) describes some challenges in its transfer to the CPA, and (4) discusses the challenges Iraq faces as it assumes program responsibility. GAO estimates that from 1997 through 2002, the former Iraqi regime obtained $10.1 billion in illegal revenues from the Oil for Food program, including $5.7 billion from oil smuggled out of Iraq and $4.4 billion through surcharges on oil sales and illicit commissions from suppliers exporting goods to Iraq. This estimate includes oil revenue and contract amounts for 2002, updated letters of credit from prior years, and newer estimates of illicit commissions from commodity suppliers. Both the U.N. Secretary General, through the Office of the Iraq Program (OIP), and the Security Council, through its sanctions committee for Iraq, were responsible for overseeing the Oil for Food program. However, the Iraqi government negotiated contracts directly with purchasers of Iraqi oil and suppliers of commodities, which may have been one important factor that allowed Iraq to levy illegal surcharges and commissions. While OIP was responsible for examining Iraqi contracts for price and value, it is unclear how it performed this function.
The sanctions committee was responsible for monitoring oil smuggling, screening contracts for items that could have military uses, and approving oil and commodity contracts. While the sanctions committee responded to illegal surcharges on oil, it is unclear what actions it took to respond to illicit commissions on commodity contracts. OIP transferred 3,059 Oil for Food contracts--with pending shipments valued at $6.2 billion--to the CPA on November 22, 2003. However, the CPA stated that it has not received all the original contracts, amendments, and letters of credit it needs to manage the program. These problems, along with inadequate CPA staffing during the transfer, hampered the efforts of CPA's Oil for Food coordination center in Baghdad to ensure continued delivery of commodities. Poor planning, coordination, and the security environment in Iraq continue to affect the execution of these contracts. Inadequate oversight and corruption in the Oil for Food program raise concerns about the Iraqi government's ability to import and distribute Oil for Food commodities and manage at least $32 billion in expected donor reconstruction funds. The CPA has taken steps, such as appointing inspectors general, to build internal control and accountability measures at Iraq's ministries. The CPA and the World Food Program (WFP) are also training ministry staff to help them assume responsibility for Oil for Food contracts in July 2004. The new government will have to balance the reform of its costly food subsidy program with the need to maintain food stability and protect the poorest populations.
Midway Atoll National Wildlife Refuge and Battle of Midway National Memorial is within one of the nation’s largest conservation areas—the 139,793-square-mile Papahānaumokuākea Marine National Monument— and is the location of the Battle of Midway, which was one of the most decisive battles of World War II. The atoll is roughly 5 miles in diameter and includes three islands—Sand Island (1,117 acres), Eastern Island (336 acres), and Spit Island (7 acres). Nearly 3 million birds nest on Midway for much of each year, including albatross (Laysan, black-footed, and short-tailed), Bonin petrels, and Laysan ducks. The Laysan albatross is a large bird, with a wing span of up to 6 feet. Beautiful in flight, but ungainly in their movement on land, the albatross were called “gooney birds” by those stationed on Midway Atoll (Midway) during World War II. In 1997, when the Navy ceremonially transferred Midway to the U.S. Fish and Wildlife Service, the Secretary of the Navy cited the changing mission as “trading guns for goonies.” Laysan albatross are monogamous. If one mate should die, the other will likely create a new pair bond. Seventy-one percent of the world’s population nests on Midway. Laysan albatross feed primarily at night on anything that floats on the surface of the water, such as squid and fish, and also on marine debris, with an estimated 5 tons of plastic being accidentally fed to chicks each year. Their typical life span is 12 to 40 years. The world’s oldest known banded bird in the wild is a Laysan albatross named Wisdom (left in above photo with her mate, Akeakamai), who is at least 65 years old and has likely raised at least 37 chicks, according to the U.S. Fish and Wildlife Service. The seasons on Midway are marked by the annual arrival of the first albatross in late October, and their courting, mating, nesting, and fledging activities until their departure in July. Green turtles, spinner dolphins, and endangered Hawaiian monk seals also frequent Midway’s lagoon.
See figure 1 for a map of the Papahānaumokuākea Marine National Monument showing the location of Midway, which lies about 1,300 miles from Honolulu. The first westerner to discover the atoll arrived in July 1859. Then in 1867, the Navy formally claimed the islands for the United States to establish a coaling station, supply depot, and emergency refuge for ships traveling between the West Coast of the United States and eastern Asia. In 1903, an executive order placed Midway under the jurisdiction and control of the Navy. Construction of a naval air station began in 1940, and the station was commissioned on August 1, 1941. The naval air station included barracks; officers’ housing; a theater; a seaplane hangar; a three-runway airfield on Eastern Island; and a defensive system, including gun emplacements, ammunition storage, and pillboxes. On December 7, 1941—the day Pearl Harbor was attacked—Midway was attacked by Japanese ships, causing casualties and extensive damage to buildings, including the seaplane hangar. During the Battle of Midway, fought June 4 through 6, 1942, the Japanese launched an attack against Midway in the hope of engaging and destroying the U.S. aircraft carriers and occupying Midway. U.S. fleet aircraft ambushed the Japanese fleet north of the islands, thereby inflicting heavy losses (four aircraft carriers), an action credited with turning the tide of the war in the Pacific. Following World War II, the Navy retained jurisdiction over Midway and maintained it as a naval air station, which was supported by a town of up to 5,000 civilians and military service families. In the 1970s and 1980s, the Navy began downsizing operations on Midway. In 1988, the Navy invited FWS to manage the atoll’s extensive wildlife resources, and Midway became an overlay national wildlife refuge since the Navy retained primary jurisdiction. 
In 1993, the Defense Base Closure and Realignment Commission recommended that the mission of the naval air station on Midway be eliminated. The Navy began operations to close the air station, thereby beginning a transition in primary mission from national defense to wildlife conservation. Figure 2 shows key events and legislation in Midway’s history, including the timing of Midway’s historic and other designations. (See app. II for a fuller chronology of Midway events.) As part of the base closure process, the Navy conducted environmental studies that indicated widespread contamination resulting from the introduction of a variety of man-made materials into the environment, with harmful effects on native wildlife; for example, lead-based paint in soil can be toxic to birds on Midway (up to 3 percent of Laysan albatross hatchlings die from lead poisoning each year). In addition, the Navy conducted cultural resources surveys in 1993 and 1994 that identified buildings, structures, objects, and sites on both Sand and Eastern Islands. Based on these surveys, 78 properties were determined to be historic properties eligible for inclusion in the National Register of Historic Places, including 9 properties previously designated as the Midway National Historic Landmark. Under section 106 of the National Historic Preservation Act, federal agencies are to consider the effects of their undertakings on historic properties that are listed in or eligible for listing in the National Register of Historic Places prior to the approval of the undertaking. For such undertakings, the regulations implementing section 106 require federal agencies to consult with the relevant state historic preservation officer, individuals, or organizations that request and/or are invited to be consulting parties, and others.
Since Midway is an unincorporated territory and has no state historic preservation officer, the Advisory Council on Historic Preservation instructed FWS officials to consult with the state historic preservation office for Hawaii, which is the nearest state. The section 106 consultation is a process of seeking, discussing, and considering the views of other participants and, where feasible, seeking agreement with them on resolving adverse effects on historic properties and other matters in the section 106 process before a federal agency approves an undertaking. If the consultation results in an agreement on how to resolve the adverse effects, the federal agency usually enters into a memorandum of agreement (MOA) with the relevant state historic preservation officer, which the regulations require the agency to provide to the Advisory Council on Historic Preservation after it is signed and before approving the undertaking. In addition to consultation, the regulations implementing section 106 require agencies to provide an opportunity for the public to express views on resolving the undertaking’s adverse effects. According to the regulations, the views of the public are essential to informed federal decision making in the section 106 process. Moreover, the regulations require agencies to seek and consider the views of the public in a manner that reflects, among other things, the nature and complexity of the undertaking and its effects on historic properties and the likely interest of the public in the effects on historic properties. The regulations also require agencies, in consultation with state historic preservation officers, to plan for involving the public in the section 106 process. In lieu of conducting multiple section 106 processes, federal agencies may negotiate programmatic agreements to govern the resolution of adverse effects from multiple undertakings. 
The regulations implementing section 106 require agencies, when developing programmatic agreements, to consult with, as appropriate, the relevant state historic preservation officer and others and provide for appropriate public participation. In 1996, the Navy and FWS along with the Advisory Council on Historic Preservation signed a programmatic agreement directing how Midway’s 78 historic properties were to be treated during the closure of the Naval Air Facility and transfer of these properties. The agreement defined the following six levels of treatment for the properties:

Reuse: The agreement identified 23 properties to be used and maintained in support of refuge operations, including the officers’ housing, theater, barracks, shops, and industrial facilities.

Secure: The agreement identified 13 properties to be secured by the Navy to minimize hazards to wildlife and people, including the power plant/command center that was shelled on December 7, 1941, and the cable station complex.

Leave as is: The agreement identified 20 properties that would be left in “as is” condition and would not be used under refuge management, other than for interpretive purposes, including the runways on Eastern Island and various bunkers, pillboxes, and gun batteries.

Fill: The agreement indicated that four properties would be filled with sand, including pillboxes on Sand and Eastern Islands and two ammunition storage huts.

Demolish: The agreement called for demolishing 15 historic properties that were of secondary historical importance, were in very poor condition, or were redundant to other resources being maintained; the properties included a motor pool building, a laundry, the naval operating base armory, and airport storage buildings.

Relocate: The agreement listed three properties to be moved to enhance their protection and interpretation: a torpedo and inert bomb, submarine net, and metal pillbox.
The Navy transferred to FWS the 63 historic properties that remained after the 15 demolitions as well as a number of buildings and other infrastructure (i.e., equipment and related buildings that provide electricity, water, and other support services) that had been used to support the base. On June 30, 1997, the last Navy personnel departed. (See app. IV for pictures of all the historic properties in existence in 2015 and selected other properties on Midway.) When Midway was transferred from the Navy in 1996, Executive Order 13022 directed FWS to provide opportunities for scientific research, environmental education, and certain recreational activities in a manner consistent with the executive order on management and general public use of refuges. In 1996, FWS entered into a cooperative agreement with the Midway Phoenix Corporation for support of a public use program, which required the corporation to operate Henderson Airfield; maintain the harbor; operate, maintain, and/or provide utilities (i.e., electrical system, sewage, television, and telephone); implement a grounds maintenance plan; and provide basic and emergency medical care. In addition, the agreement authorized the Midway Phoenix Corporation to provide food service, lodging, and maritime rescue services. After the Midway Phoenix Corporation discontinued operations on Midway in 2002, FWS did not operate an ongoing public visitation program but allowed occasional visitation, primarily through tour providers and cruise ships. Subsequently, in accordance with its regulations and guidance implementing the National Wildlife Refuge System Improvement Act of 1997, FWS determined that a number of recreational uses were both appropriate and compatible with the refuge’s wildlife conservation purposes, such as wildlife observation and photography, environmental education, interpretation, snorkeling, and diving. FWS reestablished a public visitation program in 2007 and then suspended public use in November 2012. 
(See app. III for additional information on public visitation to Midway.) In February 2010, FWS and the other co-trustees of the Papahānaumokuākea Marine National Monument received a notice of intent to sue from a nongovernmental organization for violations of several laws on Midway because, among other things, the lead-based paint used to paint some buildings in the 1950s and 1960s was causing lead poisoning in some birds. FWS agreed in a 2012 settlement with the nongovernmental organization to complete, over the course of 7 years, a “non-time critical removal action” under CERCLA, which governs the cleanup of releases or threatened releases of hazardous substances, to address the lead-based paint on buildings. The regulations implementing CERCLA require removal actions to comply with applicable requirements under federal and state laws, among other things. FWS determined that the National Historic Preservation Act was an applicable requirement for removal actions involving historic properties on Midway and decided that it would work with the Hawaii State Historic Preservation Officer to address the removal action’s adverse effects on historic properties. When completing removal actions that are not time critical, the regulations require federal agencies to publish a document containing the analysis of removal alternatives, provide for at least a 30-day public comment period on the document, and issue a decision document selecting a removal alternative that responds to the public comments. The Midway Atoll National Wildlife Refuge’s funding consists of both operational and project-specific funding. According to FWS budget officials we interviewed, after increasing each year for fiscal years 2009 through 2011, the agency’s operations funding for Midway decreased substantially in fiscal years 2012 and 2013, while project-specific funding varied, with no clear trend, for fiscal years 2009 through 2015. 
The officials attributed the decrease in the operations budget to a decrease in funding from FWS Region 1 and a decrease in fees collected on Midway after the public visitation program ended. FWS officials also told us that Midway’s funding for deferred maintenance has varied based on regional priorities, and funding for other projects has varied based on need or availability of funds dedicated to particular uses, such as CERCLA response actions. Midway’s operations budget contains three different funding streams: funds allocated by FWS Region 1 to Midway; fees that are collected on Midway and that Midway is authorized to use for specified purposes; and funding from FAA for airport operations. According to FWS officials, the overall funding for operations for Midway has substantially decreased since fiscal year 2011 because of a decrease of about $1 million in allocations from Region 1 and about $0.8 million in fees collected since the end of public visitation in 2012. According to an FWS budget official we interviewed, after increasing from fiscal year 2009 to 2011 to a peak of more than $4 million, Region 1’s funding of operations for Midway decreased in fiscal years 2012 and 2013. According to several officials, regional funding over those 2 years decreased by more than $1 million from the fiscal year 2011 funding level, a substantial decrease. The FWS budget official told us that the agency’s allocation for Midway of over $3 million in fiscal year 2015 remained more than $900,000 below the 2011 allocation. FWS officials said that decreases in Midway’s allocation were due to flat and declining budgets overall for FWS Region 1 and to sequestration in 2013.
These officials said that given decreases in Midway’s operations budget, FWS eliminated functions supporting visitor services, including personnel responsible for implementing and overseeing the public visitation program and for enforcing the regulations of the wildlife refuge, to maintain core functions consistent with Midway’s status as a wildlife refuge––that is, protecting wildlife. According to FWS officials, fees collected for Midway’s operations have decreased since fiscal year 2012. FWS is authorized to charge and retain reasonable fees for services provided at the refuge, such as fuel sales, to use for specified purposes. Overall operations funding from such fees declined substantially following the end of public visitation in 2012, from $1.3 million in fiscal year 2012 to about a half million in fiscal year 2015, according to an FWS budget official. FWS also collects fees from aircraft, both commercial and military, for landing on Midway. Landing fees cover costs, beyond normal operations, incurred by FWS from the aircraft landings. For example, a commercial aircraft flying from Honolulu to Guam made an emergency landing on Midway on July 10, 2014, and reimbursed FWS $86,000 to cover costs such as paying for airport staff overtime, providing food for passengers, and providing fuel, according to FWS officials. An agreement between FWS and FAA specifies that FAA is responsible for reimbursing FWS for the costs for operating and maintaining the airport. FAA reimburses FWS about $3 million per year––the amount is negotiated annually––for these costs. For certain overhead costs for the airport––costs that cannot be clearly distinguished between Henderson Airfield and refuge operations––FWS and FAA have a cost-sharing arrangement that generally determines the costs according to a formula based on the proportion of full-time equivalent staff on Midway who provide support for the refuge versus the airport. 
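The FTE-proportion arrangement described above amounts to a simple pro rata split of shared costs. The sketch below illustrates the idea; the staff counts and dollar amount are invented for illustration and are not FWS or FAA figures, and the actual formula may differ in detail.

```python
# Illustrative sketch of an FTE-based overhead cost-sharing formula.
# All numbers below (overhead total, staff counts) are hypothetical
# assumptions for illustration, not figures reported by FWS or FAA.

def share_overhead(total_overhead, refuge_fte, airport_fte):
    """Split a shared overhead cost in proportion to the full-time
    equivalent (FTE) staff supporting the refuge versus the airport."""
    total_fte = refuge_fte + airport_fte
    refuge_share = total_overhead * refuge_fte / total_fte
    airport_share = total_overhead * airport_fte / total_fte
    return refuge_share, airport_share

# Example: $100,000 in shared utility costs, 12 refuge FTEs, 8 airport FTEs.
refuge, airport = share_overhead(100_000, 12, 8)
print(refuge, airport)  # 60000.0 40000.0
```

Under this kind of arrangement, the two shares always sum to the total overhead, so neither party's reimbursement depends on separately metering costs (such as electricity or water) that cannot be cleanly attributed to the airfield or the refuge.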
Overhead costs shared according to this formula include electricity, fuel, water, and waste management. FAA officials said that how best to share overhead costs remains under discussion between FWS and FAA. Some projects on Midway are not funded through its operating budget but through funds dedicated to specific purposes, such as deferred maintenance, CERCLA response actions, and capital improvements to the airport. For fiscal years 2009 through 2015, Midway’s annual allocation from FWS Region 1 for deferred maintenance projects has varied, with a high of about $1.3 million in fiscal year 2014 and a low of approximately $81,000 in fiscal year 2015, according to an FWS official, with no clear trends. Officials told us that deferred maintenance funds go to projects such as restoration of historic buildings and improvements to infrastructure, such as sewer systems. FWS officials also said that the total cost to address Midway’s outstanding deferred maintenance needs is not known because estimating the costs of such projects is itself expensive. For example, according to FWS facilities maintenance officials, it cost $193,000 to estimate the cost of deferred maintenance for just one historic structure— the Seaplane Hangar—which included the costs to transport and house potential bidders on Midway as well as to transport and house three to four staff members from the winning bidder to conduct work necessary for developing the estimate. In 2010, FWS estimated the cost to stabilize (not restore) that structure was about $18 million—several times the agency’s annual budget for such projects in the Pacific region, according to FWS officials. Other projects on Midway that are not included in the operations budget are funded through other sources within the Department of the Interior. 
For example, the Department of the Interior’s Central Hazardous Materials Fund—an appropriation available to pay for the department’s removal actions under CERCLA––provided over $20 million from fiscal year 2010 to 2015 to perform lead-based paint abatement projects on Midway, according to FWS officials. Similarly, about $2 million in American Recovery and Reinvestment Act of 2009 (Recovery Act) funds were used to refurbish the historic officers’ housing and for energy efficiency projects on Midway in 2010; these funds were paid by the FWS regional office directly to contractors and were not included in Midway’s budget. For Henderson Airfield, FAA, under an interagency agreement for capital improvements with FWS, also provides up to $2.5 million annually to pay for capital improvement projects, such as resurfacing runways and taxiways. FWS has maintained most historic properties on Midway but has demolished others without providing adequate public notification and seeking public comment and input, which is not consistent with the regulations implementing section 106 of the National Historic Preservation Act. More specifically, FWS has maintained most of the historic properties on Midway in accordance with a 1996 programmatic agreement with key stakeholders and historic preservation plans it developed. However, FWS has demolished seven historic properties and completed another undertaking with adverse effects on historic properties that was not contemplated by the agreement without directly notifying parties who had previously expressed interest in historic preservation issues on Midway (hereafter referred to as key parties) and seeking their comments and input, and, in some cases, without providing public notification or opportunity for the public to express its views on resolving the adverse effects. 
In addition, FWS did not conduct consultations before approving three of the four undertakings even though, according to Advisory Council on Historic Preservation officials, the intent of consultation is to inform agency decision making. FWS has maintained most of Midway’s historic properties as specified in the 1996 programmatic agreement and the historic preservation plans it developed. The 1996 programmatic agreement signed by the Navy, FWS, and Advisory Council on Historic Preservation covered actions the Navy would take before departing Midway, such as securing certain properties from intrusion by wildlife or demolishing properties that were badly deteriorated, and specified that FWS would reuse and maintain 23 of the 63 historic properties that would remain on Midway after the Navy’s departure. As required by the programmatic agreement, FWS developed its first historic preservation plan in 1999, followed by an updated plan in 2010. The 1999 plan included the following five management categories for the properties, all of which were consistent with the earlier agreement with the Navy: reuse (23 properties), secure (13 properties), leave as is (20 properties), fill with sand (4 properties), and relocate (3 properties). A sixth category, the demolition of 15 properties, was carried out by the Navy prior to transferring the management of Midway to FWS. The 2010 updated historic preservation plan revised the management categories to account for issues that arose since 1999 and to conform with the Secretary of the Interior’s Standards for the Treatment of Historic Properties. Since 1996, FWS has reused 19 of the 23 properties planned for reuse, in some cases rehabilitating them. For example, as of June 2015, FWS rehabilitated and was using as housing for residents structures that once provided housing for Navy officers (see table 1). 
In addition, FWS put into use three properties, including a paint shop, which had been categorized as “leave as is” by the 1996 programmatic agreement, because the agency had a need for them. Since 2012, FWS has also demolished seven historic properties––actions that were not contemplated by the 1996 programmatic agreement or the 2010 historic preservation plan (see table 2 and figs. 3, 4, and 5). For undertakings with adverse effects on historic properties not contemplated by the 1996 programmatic agreement and 2010 historic preservation plan, FWS is to complete the section 106 process before deciding to undertake such actions, but FWS did not do so for four undertakings affecting eight historic properties. The four undertakings included demolishing seven buildings and rehabilitating a cable station building with materials from three other cable station buildings that were dismantled (see table 3). For all four of these undertakings, we found that FWS did not adequately notify the public and seek comments and input on the adverse effects the undertakings would have on the historic properties. More specifically:

For the undertaking in 2009, FWS did not make information about the planned actions available to the public or provide an opportunity for the public to express views on resolving the adverse effects of the undertaking consistent with the regulations implementing section 106 of the National Historic Preservation Act.

For the two undertakings that demolished the cable station buildings and Marine barracks, FWS published a notice about the availability of its analysis of removal alternatives on its website and in newspapers in January 2011, as required under CERCLA. However, FWS did not directly notify key parties that were involved in prior historic preservation matters on Midway, such as the 2010 Historic Preservation Plan.
For example, FWS did not directly inform the International Midway Memorial Foundation and other parties that had served on the defunct Battle of Midway National Memorial Advisory Committee about the CERCLA removal action. According to Advisory Council on Historic Preservation officials, because FWS knew these parties had an interest in historic preservation issues at Midway, the method of notifying these parties about these undertakings with adverse effects on historic properties was not sufficient under section 106 unless these parties received the notice directly.

For the undertaking that demolished the SK1 warehouse, the document containing the analysis of removal alternatives made available to the public in January 2011 did not include demolition as an alternative, so the public and key parties did not have any notice about FWS’s decision to demolish it under either the CERCLA or the section 106 process.

As a result, FWS’s decisions regarding all four undertakings that had adverse effects on historic properties were not informed by public comment and input, straining its relationship with at least one key stakeholder. According to the regulations implementing section 106 of the National Historic Preservation Act, the views of the public are essential to informed federal decision making in the section 106 process. For all four of these undertakings, FWS consulted with the Hawaii State Historic Preservation Office and the Historic Hawaii Foundation but did not receive requests from other interested parties to participate in the consultations. However, since the public and potentially interested parties did not have adequate notice about these four planned undertakings and section 106 consultations, it is not clear that they would have known to request participation in the consultations. FWS notified three parties for the four undertakings, which is fewer than for other agreements, plans, or activities.
For example, FWS and the Navy notified seven parties about the 1996 programmatic agreement, and FWS distributed the 2010 Historic Preservation Plan for Midway directly to 10 parties. According to Advisory Council on Historic Preservation officials, it was unlikely that these key parties could have requested to participate in these consultations since, as described above, they did not have adequate notice of the planned undertakings. In addition, according to Advisory Council on Historic Preservation officials, the intent of the section 106 consultation requirements is for such consultations to inform the agency’s decision making. However, FWS did not conduct consultations before approving three of the four undertakings. While FWS consulted with the Hawaii State Historic Preservation Officer and the Historic Hawaii Foundation and entered into an MOA with the former to resolve the adverse effects of the 2009 undertaking on the cable station buildings, FWS did not conduct section 106 consultations before issuing its final CERCLA action memorandum selecting demolition for the four cable station buildings and the two Marine barracks in July 2011. The memorandum acknowledged that consultation had not occurred but stated that FWS was to work with the Hawaii State Historic Preservation Officer to reach agreement on how to treat these buildings. Subsequently, FWS consulted with the Hawaii State Historic Preservation Office and the Historic Hawaii Foundation about the CERCLA removal actions demolishing the four cable station buildings, Marine barracks, and SK1 warehouse. The consultations resulted in FWS and the Hawaii State Historic Preservation Officer signing MOAs for the demolition of the cable station buildings and SK1 warehouse but not for demolition of the Marine barracks. However, according to an FWS official, consultations over the Marine barracks lasted 5 months and in 2012 resulted in a draft MOA between FWS and the Hawaii State Historic Preservation Office.
Although neither party formally signed the agreement at that time, they began to implement its provisions. These four undertakings were the first with adverse effects on historic properties under FWS management of Midway, according to FWS. But as properties continue to age and deteriorate, FWS may need to take additional actions that have adverse effects on historic properties. Before approving such actions, FWS will need to complete the section 106 process detailed in regulations, which includes planning, in consultation with the state historic preservation officer, for involving the public in the process. An FWS official who works on historic preservation issues on Midway noted that the extent of public notification and involvement can vary based on the size of the undertaking. However, officials with the Advisory Council on Historic Preservation pointed out that another factor in the regulations that should be considered in determining the extent of seeking and considering the views of the public is the likely level of public interest in the action. They stated that groups known to have a high level of interest in Midway should have been notified of these undertakings and that FWS should have sought and considered their views. FWS officials told us that they are taking steps to better coordinate with stakeholders on future actions. For example, as agreed with the Hawaii State Historic Preservation Office, FWS is consulting with stakeholders on developing a programmatic agreement for the treatment of a group of 18 historic buildings designed by architect Albert Kahn. According to an FWS official, the intent of developing a programmatic agreement for all the Kahn buildings is to consider actions taken on any of the buildings in the context of all the historic buildings, to improve public participation, and to standardize maintenance procedures and set priorities for maintenance of the Kahn buildings.
This programmatic agreement, however, does not eliminate the need for FWS to complete the section 106 process, including notifying the public and providing an opportunity for the public to express their views for future undertakings with adverse effects on other historic properties that are outside of its scope. Without ensuring that key parties that have previously expressed interest in historic preservation issues on Midway are notified about future actions that may have an adverse effect on historic properties, FWS will not have reasonable assurance that it is adequately providing public notification and an opportunity for public comment under the section 106 process. FWS faces multiple challenges in reestablishing a public visitation program on Midway, such as deteriorating infrastructure and infrequent delivery of supplies. In addition, the planning documents needed to implement a visitor services program have not been updated to reflect operational changes on Midway, creating uncertainty about the resources that may be needed to reestablish a public visitation program. FWS has not had a public visitation program on Midway since November 2012 and faces multiple challenges to reestablishing such a program in the future. FWS officials we interviewed said they would like to reestablish a public visitation program to Midway but would need additional resources to do so. On the basis of our visit to Midway; interviews with FWS, NOAA, and Hawaii state officials and representatives from stakeholder groups; and review of FWS documents, we identified four key resource-related challenges to the reestablishment of a public visitation program: personnel, infrastructure, supplies, and transportation. According to FWS’s 2008 visitor services plan, the most recent FWS guidance for managing approved recreational activities and the visitor services program at Midway, personnel are essential for providing a quality public visitation program on Midway. 
FWS officials we interviewed said that to reestablish a public visitation program, the agency would need to fill six positions—four located on Midway and two in the FWS Pacific Islands Office—that were eliminated when the program was suspended in 2012. Personnel in these positions were responsible for, among other things, implementing and overseeing the public visitation program by coordinating with tour providers, developing recreational and educational programs on Midway, and enforcing the regulations of the wildlife refuge. In addition, FWS officials we interviewed said that the agency would also like to have an additional staff person help develop a mechanism, such as a concessionaire, that would allow an organization or company to operate a public visitation program on Midway. According to the agency’s 2008 visitor services plan, FWS can provide accommodations to no more than 50 overnight visitors on Midway at one time. However, the FWS refuge manager said that visitation may be limited to as few as 15 to 30 overnight visitors at one time because of the deteriorating infrastructure on Midway. The building that is available to house visitors, known as Bachelor Officer Quarters Charlie Barracks, is almost beyond the point of repair and renovation, according to a condition assessment conducted by contractors in 2009. During our 2015 site visit, we observed that the first floor of Charlie Barracks had extensive water damage (see fig. 6). In addition, one of the guest rooms on the second floor occupied by a GAO analyst had a leaking ceiling. In 2009, a contractor estimated that updating and renovating Charlie Barracks, which was constructed in 1957, could cost up to $14 million and that replacing it could cost up to $20 million. According to FWS officials we interviewed, the estimate for renovating Charlie Barracks is likely understated, as the deterioration has continued since the completion of the most recent assessment in 2009.
In addition to providing accommodations, reestablishing a public visitation program on Midway would likely entail increasing the capacity of existing infrastructure systems to support visitors. Specifically, FWS and its contractors operate and maintain the power system, water treatment and distribution, facilities maintenance, waste management systems, communications systems, and other operational necessities. Many of these infrastructure systems were designed to accommodate the naval air station and a population of up to 5,000 people and have deteriorated over time. FWS has been replacing these systems to accommodate a population of approximately 200 people. For example, in October 2007, FWS installed a new fuel system with a capacity of up to 450,000 gallons to be used primarily for electricity generation. This new system replaced an old fuel system with a capacity of about 4 million gallons, which was deteriorating and was demolished in October 2015, according to FWS officials. See figure 7 for our photographs of the old fuel system prior to its demolition and the new fuel system. Also, according to the 2008 Midway Atoll National Wildlife Refuge Conceptual Site Plan, a new drinking water treatment system and distribution main were placed into service in October 2005. The new treatment system was sized for a short-term maximum population of 200 persons at a per capita daily use rate of 100 gallons per day, totaling 20,000 gallons per day. However, according to the 2008 conceptual site plan, the actual efficient operating capacity is much lower, and a regular on-island population above 120 people would require added capacity. To routinely accommodate visitors to Midway, FWS officials said they would likely need to increase the amount of supplies delivered. According to these officials, the primary method for delivering supplies to Midway is by ship, with some supplies arriving on charter flights.
In 2014, FWS decreased the number of shipments to Midway from three or four to two per year to reduce costs. The two shipments are limited by their capacity to a total of 84,000 gallons of fuel, but Midway’s power plant consumed approximately 127,000 gallons in fiscal year 2015, according to the Midway contractor’s annual management report. To make up the difference, agency officials we interviewed said that FWS borrowed fuel from the Coast Guard. The officials added that this arrangement is not likely to be a reliable source of fuel in the future because the fuel may be needed for Coast Guard operations. The primary method of transportation to Midway is by charter flights, which are expensive to operate, given Midway’s remote location. Before 2006, FWS was able to coordinate with cruise ships to visit Midway as an alternative form of transportation, but access to Midway through the marine national monument requires a permit. FWS officials we interviewed said that Midway has not been visited by a cruise ship since 2007 because the cruise ship company found the regulations of the monument too onerous. As of July 2015, the round-trip charter flight from Honolulu to Midway cost approximately $50,000. FWS covers the entire cost of the round-trip flight. If the flight is not full of passengers, the remaining space is used for cargo, such as mail, food, supplies, and parts. These charter flights are scheduled to travel from Honolulu to Midway approximately once every 2 weeks, or 26 flights per year, representing a reduction in the number of flights to Midway since the suspension of the public visitation program. For example, in fiscal year 2011, the FWS contractor for Midway reported 50 routine scheduled FWS charter flights or visitor flights to Midway. In total, including round-trip transportation from Honolulu and other costs on Midway, such as lodging, food, and other fees, a 2-week stay on Midway cost about $6,700 per person in 2015.
The planning documents FWS needs to reestablish a public visitation program have not been updated since 2008 to reflect changes to the operating environment on Midway, creating uncertainty about the resources that may be needed to reestablish the program. In 2015, FWS officials estimated that $1.2 million in additional annual funding would be needed to operate and oversee a reestablished public visitation program and that additional funding would also be needed for start-up costs. However, this estimate relies on the availability of Charlie Barracks to house visitors, but as previously mentioned, the barracks is deteriorating. In addition, FWS officials told us that the populations of certain wildlife on Midway have grown and were not addressed in the most recent visitor services plan from December 2008. For example, because of the eradication of rats on Midway, there has been a significant increase in the population of the Bonin petrel—a seabird species that creates burrows in order to nest. On our site visit, we observed the extensive burrowing by Bonin petrels on unpaved areas and areas surrounding the buildings, which can create safety issues for visitors when the burrows collapse and disturb the seabirds. Officials were uncertain how much funding would be needed to update the plan but said that given current resource and personnel constraints, they have no plans to do so. The Bonin petrel is a small, burrow-nesting seabird that breeds primarily in the Northwestern Hawaiian Islands. Midway Atoll (Midway) hosts the world’s largest Bonin petrel population. Midway’s Bonin petrel population was estimated at over 500,000 in the late 1930s, but the accidental introduction of rats in 1943 caused numbers to plummet to fewer than 5,000 in the 1980s. The rats were eradicated in the 1990s, and the Bonin petrel population has rebounded with a current population estimated at close to 1 million. The petrels spend their days either in their burrows or at sea. 
Starting at sundown, they emerge from their burrows and congregate in the air above their nesting grounds by the hundreds of thousands. They feed on small fish and squid, dipping for or seizing prey at the ocean surface. Their life span is 15 years.

Midway’s Henderson Airfield serves as an emergency landing airport for aircraft in the mid-Pacific Ocean region and facilitates access to the marine national monument that includes Midway. Under FAA regulations, FAA-certified air carriers cannot operate airplanes on routes outside the continental United States if they are more than a specified flying time from an adequate airport unless FAA approves their extended operations (ETOPS). To gain FAA approval, air carriers must, among other things, designate adequate airports, which are certified by FAA, for use in the event of a diversion during ETOPS. Henderson Airfield also serves as an emergency airport for military aircraft. Airline industry representatives and FAA officials we interviewed said that the availability of Henderson Airfield on Midway allows aircraft to fly more direct routes across the Pacific Ocean than would otherwise be possible. An FAA official said that although other ETOPS airports are available in the Pacific region, including in Alaska, commercial airlines may prefer to designate Midway as their ETOPS airport in their flight plans because routing via Midway is shorter than flying over Alaska. In addition, according to an airline pilots’ association representative, Midway’s availability as an ETOPS airport gives airlines more flexibility in developing complex flight plans based on their routes, weather and wind conditions, time, and fuel efficiency as well as ETOPS locations. An official of an association representing airlines said that if the airport were not available, airlines would need to shift their routes farther north or south, resulting in longer routes and additional flight time and fuel-related costs. 
Figure 8 shows the locations of Midway’s Henderson Airfield and other ETOPS airports in the North Pacific Region. Since 2003, seven emergency landings have occurred at the airport—four military and three civilian—generally caused by mechanical failures (see table 4). Figure 9 shows two U.S. Marine Corps F/A-18 Hornets that made an emergency landing in July 2015. Henderson Airfield also facilitates regular access to Midway and the rest of the Papahānaumokuākea Marine National Monument for FWS staff, other federal agency staff, and those with approved permits to conduct research and monitor geological events and endangered species. FWS, the primary user of the airport, regularly flies staff, volunteers, and supplies to support refuge mission activities. Other agencies use the airport as follows: The U.S. Coast Guard uses the airport to aid in its search-and-rescue and medical evacuation (medevac) operations for an area about 600 miles north of Midway that is used by numerous large vessels, according to Coast Guard officials. When the Coast Guard has to conduct a rescue in this area, it flies people with medical needs to Midway and then to Hawaii. From 2005 through 2015, 51 Coast Guard medevac flights used Henderson Airfield. To support medevac flights and provide for other emergency needs, such as a hurricane evacuation, the Coast Guard stores its own fuel on Midway. NOAA has tidal monitoring stations on Midway and uses the atoll as a staging area to coordinate research activities throughout the marine national monument, including research to monitor endangered Hawaiian monk seals. For example, NOAA has flown Hawaiian monk seals from other areas in the Northwestern Hawaiian Islands to Midway, where they are acclimated and released. NOAA also uses Midway as an evacuation site. 
For example, according to NOAA officials, in 2015, it evacuated monk seal researchers from field camps on other islands in the marine monument to Midway because of dangerous weather conditions. According to a NOAA official, the agency “depends heavily” on the airport and has used FWS’s charter flights or chartered its own flights to transport people and supplies. Another NOAA official said it is good to have a runway within the marine national monument since ships are infrequent and it is difficult for ships to travel around the monument in winter.

The endangered Hawaiian monk seal is one of the rarest marine mammals in the world and is endemic to the Hawaiian archipelago. In 1976, the species was listed as endangered under the Endangered Species Act. Approximately 1,200 seals are scattered throughout the entire archipelago. About 65 monk seals are usually present at Midway at any one time, and pupping levels have increased significantly since 1996. Primary factors affecting their recovery include predation by sharks, aggression by adult male monk seals, and reduction of habitat. Entanglement in marine debris, such as fishing nets, lines, and plastic rings, is another source of mortality. Their diet consists of reef fish, squid, octopus, and crustaceans, and their life span is 25 to 30 years.

The U.S. Geological Survey conducts research on Midway, such as translocating endangered Laysan ducks from Laysan Island, in the Northwestern Hawaiian Islands, to Midway to establish a second wild population, largely because of Midway’s “rat-free” status and the logistic feasibility of restoring habitat and monitoring the ducks after release. With its unique history and natural resources, Midway has several designations, including as a national wildlife refuge and as the site of many historic properties. These designations result in competing priorities for FWS, including managing the ecosystem to protect wildlife as well as maintaining historic properties. 
FWS faces these competing priorities with an overall budget that has declined in recent years and with a property maintenance budget that is variable and small in relation to the maintenance work needed, some of which has been deferred. In this environment, FWS has maintained most of Midway’s historic properties as specified in a programmatic agreement that it entered into in 1996 with the Navy and the Advisory Council on Historic Preservation during the Navy base closure and in its 2010 historic preservation plan. However, FWS has not consistently completed the section 106 process before deciding to demolish or take other actions with adverse effects on historic properties. Specifically, FWS did not provide public notification for two actions adversely affecting five historic properties and did not adequately notify the parties that had previously expressed interest in historic preservation issues on Midway about these or two other actions. Unless FWS provides public notification, including ensuring that key parties that have previously expressed interest in historic preservation issues on Midway are notified about future actions that may have an adverse effect on historic properties, it will not have reasonable assurance that it is adequately seeking public comment and input under the section 106 process. To fulfill the secretarial order’s directive that FWS manage Midway in accordance with the National Historic Preservation Act, the Secretary of the Interior should direct the Director of the U.S. Fish and Wildlife Service to ensure that the public, including key parties that have previously expressed interest in historic preservation issues on Midway, are notified about future FWS undertakings that may have an adverse effect on historic properties so that they have an opportunity to provide comment and input. 
We provided a draft of this report for review to the Secretaries of Commerce, Homeland Security, the Interior, and Transportation; the Executive Director for the Advisory Council on Historic Preservation; and the Governor of Hawaii. In the letters conveying their comments and views, the Department of the Interior, the Advisory Council on Historic Preservation, and the State of Hawaii all generally agreed with the report’s findings and recommendation; these letters are included in appendixes V, VI, and VII, respectively. In e-mail communications provided through their audit liaisons, the Departments of Commerce, Homeland Security, and Transportation indicated that they had no formal comments on our report. The Department of the Interior agreed with our recommendation as drafted, which focused on notifying key parties that have previously expressed interest in historic preservation issues on Midway about future undertakings that may have an adverse effect on historic properties. The department said that, effective immediately, FWS would directly notify key interested parties about such future undertakings. The Advisory Council on Historic Preservation also agreed with our recommendation and suggested that, to better reflect the regulations implementing section 106 of the National Historic Preservation Act, we clarify in our recommendation that FWS ensure that the public, and not just key parties previously expressing interest in historic preservation issues on Midway, are notified about future undertakings that have adverse effects on historic properties. We clarified our recommendation to indicate that FWS should ensure that the public, including key parties, are notified because, as we noted in our report, FWS did not provide public notification for two of the undertakings and did not notify key parties about those or two other undertakings. 
In addition, the council emphasized the importance of initiating the section 106 review process as early as possible in the planning process to allow adequate time for participation by the public and consulting parties and to ensure the review process is complete before expenditure of federal funds on an undertaking. In light of public interest in Midway’s historic properties and the possibility of restoring public visitation in the future, the council expressed hope that FWS would make historic preservation a higher priority among its competing funding priorities. The State of Hawaii agreed with our recommendation as drafted and noted the importance of Midway to the state as ecological habitat supporting the state’s resources and as a staging area supporting the state’s field camps at Kure Atoll. The State of Hawaii also expressed hope that Midway’s visitation program can be reinstated since it provides unique access to the Papahānaumokuākea Marine National Monument. In addition, the Department of the Interior, the Department of Commerce, and the Advisory Council on Historic Preservation provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees; the Secretaries of Commerce, Defense, Homeland Security, the Interior, and Transportation; the Executive Director for the Advisory Council on Historic Preservation; the Governor of Hawaii; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or fennella@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VIII. 
Our objectives were to (1) describe funding for operations and projects on Midway Atoll (Midway) for fiscal years 2009 through 2015; (2) examine how the U.S. Fish and Wildlife Service (FWS) has maintained historic properties on Midway and the extent to which it has consulted key stakeholders and sought public comment; (3) identify challenges, if any, FWS faces in reestablishing a public visitation program; and (4) describe the use of Henderson Airfield as an emergency landing airport and access point for the Papahānaumokuākea Marine National Monument. To describe funding for Midway’s operations and projects, we collected financial data for fiscal years 2009 through 2015 from FWS Region 1 for its financial allocations to Midway and from the Pacific Islands Refuges and Monuments Office in Honolulu for fees collected on Midway by the FWS contractor. An FWS official told us that because of the agency’s document retention policy, it does not have pre-2009 data. We assessed the reliability of these data and found them to be of undetermined reliability for purposes of our report. Specifically, we reviewed the annual external audit of the Department of the Interior’s financial database for fiscal year 2015, which noted internal control deficiencies related to financial reporting that could result in a misstatement within the financial statements. As a result, we relied on interviews with FWS budget and other officials to generally describe funding for Midway. In particular, we interviewed FWS budget and other officials on Midway and officials from the Pacific Islands Refuges and Monuments Office in Honolulu; the Region 1 office in Portland, Oregon; and FWS headquarters. We also interviewed officials with the Department of Transportation’s Federal Aviation Administration (FAA) regarding the agencies’ roles and responsibilities in providing an emergency landing site on Midway. 
To examine how FWS has maintained its historic properties, the extent to which it has consulted key stakeholders, and any challenges FWS faces in reestablishing a public visitation program, we collected documents and data to determine the universe of historic properties on Midway and actions that have been taken on these properties. We reviewed the National Historic Preservation Act, FWS’s Historic Preservation Plans for Midway, section 106 consultation documents, the regulations implementing section 106, and other relevant documents. We also interviewed the historian for Regions 1 and 8 and FWS officials located on Midway, in the Honolulu office, in the Region 1 office, and in headquarters. We interviewed officials in the Advisory Council on Historic Preservation, the Hawaii State Historic Preservation Office, the Historic Hawaii Foundation, and the Office of Hawaiian Affairs, as well as Native Hawaiian cultural practitioners, for their perspectives on historic preservation on Midway. We also conducted a site visit to Midway in April 2015, during which we documented the presence and appearance of the 56 historic and nonhistoric properties on Midway, including Sand and Eastern Islands. Properties included buildings, structures, monuments, ruins, and other items found on Midway. Using a data collection instrument to guide our work, we took photographs and videos of properties, in addition to noting the general condition of those properties. We identified properties by building number listed on FWS’s real property asset list, maps, or FWS photos or directly by observing numbers found on properties. We also frequently consulted with FWS officials on Midway to confirm the presence and identity of properties. We did not physically measure or test any features or components of any property on Midway to determine its condition. 
Any statements we made as a result of our site visit on the condition of a property were based solely on the general appearance of the structure and do not constitute an actual physical assessment of the property. Moreover, our observations were limited at times because of several factors. For example, FWS directed us to avoid all direct contact with or disturbing of wildlife, including staying 150 feet away from endangered species whenever possible. Midway is a seabird nesting colony, and our site visit coincided with albatross breeding season. Other seabirds were present in great numbers on Midway, such as the Bonin petrel, a burrowing seabird. In addition, FWS directed us not to remove any object from or disturb any properties on Midway. FWS officials also directed us to avoid those properties having hazardous materials, such as lead paint, asbestos, and black mold. Further, FWS officials directed us to avoid those having unstable features, such as a collapsed portion of a structure, risk of falling debris, and partial or complete structural instability. To identify challenges, if any, FWS faces in reestablishing a public visitation program, we collected documents, plans, and data to determine any FWS obligations to have a public visitation program, what prior visitation program Midway had, and challenges that FWS faces in reestablishing a visitation program. During our site visit to Midway, we observed and photographed properties that have been used for or are potentially relevant to a visitation program. We also interviewed FWS officials on Midway, in the Honolulu office, and in the Region 1 office about public visitation. To describe the use of Henderson Airfield, we reviewed FWS and FAA reports, policies, and regulations on the operation of Henderson Airfield and extended operations airports and interviewed FWS and FAA officials regarding the agencies’ roles and responsibilities in providing an emergency landing site on Midway. 
We also interviewed a nonprobability sample of airline industry representatives, from a leading aircraft manufacturer, the largest airline pilots’ association, and the largest organization representing commercial airlines, regarding the use of Midway as an emergency landing site. The interviews allowed us to gain a perspective on airline industry views on Midway, but because this was a nonprobability sample, the views cannot be generalized to either a segment of or the entire airline industry. We also reviewed information related to flight and passenger data from FWS’s contractor on Midway and FAA data on the emergency military and civilian landings on Midway. FAA had data on emergency military and civilian landings available only for the years 2003 through 2015, so we report the number of emergency landings only for this time period. We conducted this performance audit from March 2015 to June 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

This appendix lists a chronology of historical and other events on Midway Atoll (Midway) based on our review of documents from the U.S. Fish and Wildlife Service (FWS), the Department of the Navy, the U.S. Marine Corps, and the Papahānaumokuākea Marine National Monument; legal documents, including laws and executive orders; and other historical resources. Pre-1850s. The first visitors to the islands that constitute Midway Atoll (Midway) may have been Polynesians/Hawaiians exploring the Pacific in voyaging canoes. No physical evidence of such visits remains, but oral histories and chants refer to distant low-lying islands with abundant birds and turtles. 
Native Hawaiians named the atoll “Pihemanu,” which means “the loud din of birds.” July 5, 1859. The first westerner, Captain N. C. Brooks of the Hawaiian ship Gambia, discovers Midway. He claims Midway for the United States. Captain Brooks names the atoll Middlebrooks, combining his name and Midway’s position between the west coast of the United States and Japan. July 1867. The Pacific Mail Steamship Company, a transpacific commercial trading company, attempts to establish a coal storage depot and constructs two wooden houses on Sand Island, the larger of the two main islands in the atoll. August 28, 1867. Captain William Reynolds of the steamer USS Lackawanna formally takes possession of Midway for the United States by order of the Secretary of the Navy. Midway’s location is considered important to provide a safe refuge for ships traveling between the United States and Asia in case of disaster and to establish a supply depot for provisions and water as well as coal to fuel the ocean steamships, according to an 1869 report submitted to the Senate Committee on Naval Affairs. January 28, 1869. The Navy adopts the name Midway Islands, according to a report submitted to the Senate Committee on Naval Affairs. January 20, 1903. Executive Order 199-A places Midway Islands “under the jurisdiction and control of the Navy Department.” April 20, 1903. About 30 people from the Commercial Pacific Cable Company arrive on Midway to begin constructing the cable station. They first erect temporary houses and then construct five permanent station buildings using steel beam supports and reinforced concrete, which is considered an innovative use of the modern material. The buildings provide an office for the cable operator; a mess and recreation hall; and quarters for the staff, servants, and a superintendent. The basements are used for support functions, such as storing provisions and housing the laundry and machine shop. June 1903. 
The Navy ejects Japanese squatters and poachers who kill seabirds for feathers that are popular for women’s hats and appoints the Commercial Pacific Cable Company as Midway’s custodians. The first short-tailed albatross was observed on Midway between 1936 and 1941. The short-tail population dropped dramatically because of feather hunters in the late 19th century who sold feathers for women’s hats. Over 5 million adult birds were hunted and killed. Protections for the short-tailed albatross date back to 1970, and populations in the United States were listed as endangered under the Endangered Species Act in 2000. The world population of short-tailed albatross is currently estimated at 2,200 birds. They feed on flying fish eggs, shrimp, squid, and crustaceans, primarily during daybreak and twilight hours, and have been known to forage as far as 1,988 miles from their breeding grounds. June 18, 1903. The ships C.S. Anglia and C.S. Colonia complete installing the cable between Guam and Midway. July 4, 1903. The cable, which stretches from San Francisco to Honolulu to Midway to Guam to the Philippines, carries the first round-the-world message and wishes “a happy Independence Day to the United States, its territories and properties.” The message takes 9 minutes to be received. May 1904. About 20 U.S. Marines arrive to secure Midway as a U.S. possession and protect the cable staff and albatross from poachers. September 22, 1905. The U.S. Lighthouse Service illuminates the first lighthouse on the atoll. 1906. A cemetery—commonly referred to as the Doctors’ Cemetery because four of the six individuals buried there are medical doctors—is established on Midway when James Miller, an assistant surgeon in the Navy, dies of appendicitis. 1917. The U.S. Weather Bureau establishes a station on Midway. April 12, 1935. Pan American Airways sets up an air base for weekly Trans-Pacific Flying Clipper Seaplane service and constructs a hotel on Sand Island. 
Midway becomes a regular fuel stop on a trans-Pacific route, including Honolulu, Wake Island, Guam, and Manila. November 22-29, 1935. Pan American Airways’ China Clipper makes the first trans-Pacific airmail flight from San Francisco to Honolulu, Midway, Wake, Guam, and Manila. 1938. The U.S. Army Corps of Engineers dredges an entrance channel through the southern reef between Eastern and Sand Islands. It also constructs a harbor and seaplane runways in the lagoon as a civil works project. April 25, 1939. Public Law No. 76-43 authorizes the Navy to establish, develop, or increase naval aviation facilities on Midway. March 1940. Construction of a naval air station begins. Private contractors start constructing land runways on Eastern Island and other infrastructure on Sand Island in preparation for possible hostilities. 1940. The Navy contracts with Albert Kahn of Detroit to prepare standardized plans for barracks, mess halls, and hangars for various bases. He also provides plans for the officers’ housing, shops, storage buildings, and theater on Midway. Kahn is considered one of the country’s foremost industrial designers and is known for his use of steel, reinforced concrete, and natural light to create comfortable and functional interior spaces. 1941. The Commercial Pacific Cable Company’s last superintendent on Midway begins his tenure. He remains on Midway during World War II, operating the cable for the Navy. August 1, 1941. A Naval Air Station, Midway Islands is established on Eastern Island. December 7, 1941. Japanese destroyers, known as the Midway Neutralization Unit, shell Midway. Four people are killed and 10 are wounded during the shelling. On December 7, 1941, Midway Atoll (Midway) was bombarded by Japanese ships, in concert with the Japanese strike on Pearl Harbor. Midway’s property number 354, the command/communications and power plant building, was penetrated by a 5-inch artillery shell. First Lieutenant George H. 
Cannon was commanding one of Midway’s gun batteries at the time. Mortally wounded by enemy fire, he refused to leave his post and refused medical attention, even though he had a crushed pelvis, until he was assured communications were restored to his command post. By the time he received medical attention, it was too late, and he died. According to the Medal of Honor citation, Cannon earned the nation’s highest military award for “distinguished conduct in the line of his profession, extraordinary courage, and disregard of his own condition.” June 4-6, 1942. Early on June 4, aircraft from four Japanese aircraft carriers, which had attacked Pearl Harbor 6 months earlier, attack and severely damage the base on Midway. After their initial attacks, the Japanese aircraft head back to their carriers to rearm and refuel, and while the aircraft are returning, the Japanese navy is surprised by U.S. naval forces in the area. Aircraft from the USS Enterprise, USS Hornet, and USS Yorktown attack the Japanese fleet. Three Japanese carriers are hit, set ablaze, and abandoned. A fourth Japanese carrier, the Hiryu, responds with two waves of attacks—both times bombing the USS Yorktown, leaving her severely damaged but still afloat. That afternoon, a USS Yorktown scout plane locates the Hiryu, and the USS Enterprise sends dive bombers to attack. The attack leaves the Hiryu burning and without the ability to launch aircraft. Over the next 2 days, the U.S. Navy forces the Japanese to abandon the battle and retreat to Japan. The Japanese lose approximately 4,800 men, four carriers, one cruiser, and hundreds of aircraft, while the United States loses about 307 men, one carrier, one destroyer, and over 100 aircraft. The Battle of Midway is considered the decisive battle of the war in the Pacific. After Midway, the Americans and their allies took the offensive in the Pacific arena. July 15, 1942. The submarine base at Midway is commissioned and operates until the end of World War II. 
August 1944. After the Battle of Midway, Sand Island is developed as an airfield, and it accommodates all operations of large planes from Eastern Island. The airfield becomes an important stopover for aircraft transiting to the war zone as the front moves across the Pacific. 1945. Air activity on Eastern Island withers as the Navy shifts operations to Sand Island, and the Navy abandons Eastern Island by the end of the year. 1947. Pan American Airways discontinues its operations on Midway. In September, the Civil Aeronautics Authority takes over the maintenance and operation of airport facilities at Midway, Wake Island, and Guam, and the facilities become part of the federal airways and links in the air routes over the Pacific. May 1, 1950. The Civil Aeronautics Authority ceases airport operations on Midway because of the Navy’s decision to withdraw from the island. June 6, 1950. The Navy decommissions the naval air station on Midway. September 1950. The Navy recommissions the naval air station on Midway to support the Korean conflict. Ships and planes transporting thousands of military personnel stop at Midway for refueling and emergency repairs. 1951. The Federal Communications Commission issues an order authorizing permanent discontinuance of all operations of the Commercial Pacific Cable Company’s route between San Francisco and Manila. December 31, 1952. The Commercial Pacific Cable Company turns over all its buildings and equipment to the Navy and ceases operations on Midway. April 1953. The Navy deactivates the naval air station on Midway as hostilities in Korea decrease. July 1953. The Navy reactivates the naval air station on Midway in reaction to Soviet bombers flying across the Pacific, sparking the era of “Cold War” hostilities. To protect the United States and keep track of the Soviet planes, construction begins on the Distant Early Warning Line—a network of radar picket ships to give a distant early warning of aircraft or missile attack on North America. 1957. 
A $40 million construction program begins as Midway becomes a home for the Pacific Airborne Early Warning portion of the Distant Early Warning Line, known as the Pacific Barrier. Navy construction units (Seabees) complete an 8,000-foot runway for the heavy aircraft landing on Midway and build an aircraft hangar large enough to hold six aircraft. During this construction, the Hawaiian Dredging Company completes new housing, reconditions the station theater, and builds a new chapel in a modern “A” frame design. July 1958. The Pacific Barrier becomes fully operational, and Midway gains renewed importance as a staging point for airborne radar early warning patrols by Navy WV-2 (EC-121) Warning Star aircraft—called Willy Victors—flown by the Airborne Early Warning Barrier Squadron, Pacific. These patrols between Midway and the Aleutian Islands are designed to provide warning of attack on North America by Soviet bombers. March 18, 1959. The Hawaiian Statehood Act is passed and, on August 21, 1959, Hawaii becomes the 50th state. The law excludes Midway from the state of Hawaii’s territory. September 4, 1962. Executive Order 11048 makes the Secretary of the Navy responsible for the civil administration of Midway and vests all executive and legislative authority necessary for that administration, and certain judicial authority, in the Secretary. 1968. Midway is one of the main aircraft and ship refueling stations during the Vietnam War. It also accommodates classified missions and the storage and assembly of advanced underwater weapons and the Sound Surveillance System (Project Caesar), which includes miles of undersea cables with hydrophones to pick up the sounds of submarines. June 8, 1969. The United States and South Vietnam conduct secret meetings in the Midway House (the Officer-in-Charge House, property number 414). During this meeting, the United States announces the “Vietnamization” of the war and a U.S. troop withdrawal of 25,000 men. October 1978. 
The Navy downgrades the naval air station to a naval air facility, and dependents are withdrawn. 1981. A base operating services contract is awarded to civilian contractors to operate the naval air facility, further reducing the number of military personnel there. November 23, 1985. Pan American B747 “China Clipper II” visits Midway to commemorate the 50th anniversary of the first China Clipper flight. 1986. The National Park Service initiates a study of Midway’s heritage resources to determine if any of the World War II-era properties are eligible for designation as a National Historic Landmark. The study identifies nine eligible defensive structures on Sand Island and none on Eastern Island. May 1987. Six ammunition magazines, a pillbox (a defensive structure built on or near the beach), and two gun emplacements on the west side of Sand Island are, as a group, designated a National Historic Landmark and placed on the National Register of Historic Places. Green turtles are the largest of all the hard- shelled sea turtles. They were listed as threatened under the Endangered Species Act in 1978, except for the Florida and Mexican Pacific coast populations, which were listed as endangered. The 1978 green turtle listing was replaced in April 2016 with listings of distinct population segments as endangered or threatened; the green turtles on Midway Atoll (Midway) are part of the Central North Pacific distinct population segment, which was listed as threatened. A major factor contributing to the green turtle’s decline worldwide is the commercial overharvest of turtles and eggs for human consumption. The number living and foraging within Midway’s lagoon is currently undetermined, but many of Midway’s turtles have been tagged to monitor the population. Adults migrate from foraging grounds throughout the Hawaiian Islands to breeding grounds. Their life span is unknown, but sexual maturity occurs anywhere from 20 to 50 years. April 22, 1988. 
A cooperative agreement between the Navy and FWS establishes an overlay national wildlife refuge on Midway. Under the agreement, the Navy makes the atoll’s land and water areas available to the Department of the Interior to administer for the conservation and management of migratory birds, endangered species, and other fish and wildlife. The wildlife on the island includes the endangered Hawaiian monk seal, the threatened green turtle, and diverse marine species and migratory seabirds and shorebirds, including the world’s largest population of nesting Laysan albatross. July 1, 1993. The naval air facility on Midway is recommended for closure under the Defense Base Closure and Realignment Act of 1990 (Pub. L. No. 101-510, Tit. XXIX). 1993 and 1994. The Navy conducts cultural resources surveys to identify buildings, structures, objects, and sites on both Sand and Eastern Islands that might be eligible for inclusion in the National Register of Historic Places. The Navy determines that 78 properties are eligible, including 9 properties that were designated as a National Historic Landmark. September 30, 1993. The naval air facility on Midway is operationally closed. February 5, 1996. FWS, the Navy, and the Advisory Council on Historic Preservation enter into a programmatic agreement, as authorized by the regulations implementing section 106 of the National Historic Preservation Act, regarding historic preservation issues on Midway. The agreement addresses the transfer of historic properties identified on Midway in 1996 and how FWS was to treat the properties afterward. August 2, 1996. FWS enters into a cooperative agreement with the Midway Phoenix Corporation for support of a public visitation program. August 1996. Midway opens to public visitation. October 31, 1996. 
Executive Order 13022 transfers jurisdiction and control of Midway from the Navy to the Department of the Interior and directs the Secretary of the Interior, through FWS, to administer Midway as the Midway Atoll National Wildlife Refuge. April 3, 1997. In a ceremony transferring Midway to FWS, the Secretary of the Navy presents the “key to Midway” (in the shape of a Laysan albatross) to the Department of the Interior and remarks that Americans are “trading guns for goonies.” June 30, 1997. The last Navy personnel stationed on Midway depart. 1997. The first systematic marine invertebrate survey is conducted and documents 316 invertebrate species, 250 of which had not been previously recorded at Midway. 1998. FWS and the Oceanic Society sponsor the first two Elderhostel historic preservation projects. Working under the supervision of a historic preservation specialist, volunteers clean and preserve the 3-inch anti-aircraft gun on Eastern Island, clean and stabilize Battery C, and remove paint from the 5-inch guns in the memorial park. FWS funds roof and soffit repairs on eight officers’ quarters and the Officer-in-Charge house. FWS receives a National Park Service grant for $6,000 to develop a plan for restoring the Armco huts, power plant, and cable station. June 1999. FWS issues the Midway Atoll National Wildlife Refuge Historic Preservation Plan 1999, which defines a program to integrate historic preservation planning with the refuge’s wildlife conservation mission. 1999. FWS and the Oceanic Society sponsor three Elderhostel historic preservation projects. Work includes restoring the theater windows and completing a condition assessment, cleaning and stabilizing Battery A, preserving the 5-inch guns, completing a condition assessment of the cable station, inventorying changes to the buildings, drafting new architectural floor plans, and organizing a library of historic resources. 2000-2001. 
FWS receives a Save America’s Treasures grant for $308,681 from the National Park Service. The grant provides funds for termite prevention of the officers’ housing, Officer-in-Charge house, theater, and several shop buildings; re-roofing of a cable station building (property number 643; mess hall); and restoration of an ARMCO hut. September 13, 2000. In response to a mandate in the fiscal year 2000 appropriations act, the Secretary of the Interior signs Secretarial Order 3217 designating the Midway Atoll National Wildlife Refuge as the Battle of Midway National Memorial “so that the heroic courage and sacrifice of those who fought against overwhelming odds to win an incredible victory will never be forgotten.” December 4, 2000. Executive Order 13178 establishes the Northwestern Hawaiian Islands Coral Reef Ecosystem Reserve. The reserve encircles the Northwestern Hawaiian Islands, except for Midway; however, it directs the Secretary of the Interior to follow the order’s management principles in managing the Midway Atoll National Wildlife Refuge to the extent consistent with applicable laws. January 7, 2002. The fiscal year 2000 appropriations act requires the Secretary of the Interior to consult on a regular basis with organizations with an interest in Midway, including the International Midway Memorial Foundation, on the management of the national memorial. The Secretary of the Interior establishes the Battle of Midway National Memorial Advisory Committee to develop a strategy for a public dedication of the memorial, identify and plan for appropriate exhibits to commemorate this important event, and offer recommendations on improving visitor services. March 6, 2002. The Midway Phoenix Corporation and FWS enter into a settlement agreement to terminate their cooperative agreement. May 1, 2002. The last Midway Phoenix Corporation employees depart Midway. February 2003. 
As much as 100,000 gallons of JP-5 jet fuel spills from an underground corroded pipeline at the Midway fuel farm. Officials from the Coast Guard, FWS, GeoEngineers, Inc., and Pacific Environmental Corporation collaborate to oversee the cleanup project. February 26, 2003. H.R. 924 is introduced in the House of Representatives, which, if enacted, would require the Secretary of the Interior to designate an agency within the department to replace FWS as administrator of Midway. Congress does not pass H.R. 924. May 7, 2003. FWS awards a contract to Chugach McKinley, Inc., to provide operations and maintenance services at Midway Atoll National Wildlife Refuge. July 3, 2003. A military C-130 makes an emergency landing because of an engine failure. January 6, 2004. A civilian Boeing 777 makes an emergency landing because of left engine issues. The world’s most endangered duck, the Laysan duck, is considered the rarest native waterfowl in the United States. Laysan ducks once were widespread across the Hawaiian Islands, but by 1860, they ceased to exist anywhere except Laysan Island. In 1967, they were listed as endangered. In 2004, 20 endangered Laysan ducks were transported 750 miles to Midway Atoll (Midway) from Laysan Island. Biologists have established a second “insurance” population of this endemic duck on Midway. Today, the population numbers over 400 ducks. Laysan ducks are primarily insect feeders but may also feed on leaves and seeds, and their life span is about 12 years. 2004. FWS transports 20 endangered Laysan ducks to Midway from their home at Laysan Island in the Hawaiian Islands National Wildlife Refuge. The birds adapt well to the seeps created on Sand Island and surprise biologists by breeding during their first year, with 12 ducklings successfully fledging. An additional 22 ducks are transported to Midway in 2005, most of which are introduced to Eastern Island. By the end of 2006, more than 100 Laysan ducks are living on Midway. May 26, 2005. 
An oversight hearing on public access within the national wildlife refuge system is held before the Subcommittee on Fisheries and Oceans, House Committee on Resources. Witnesses include the Chairman of the International Midway Memorial Foundation, who requests that the committee consider designating an agency other than FWS to manage Midway. June 15, 2006. Proclamation 8031 designates the Northwestern Hawaiian Islands Marine National Monument. The monument is one of the largest fully protected marine managed areas in the world.

The Meaning of Papahānaumokuākea
The name Papahānaumokuākea (pronounced Pa-pa-hah-now-mo-koo-ah-keh-ah) comes from an ancient Hawaiian tradition concerning the formation of the Hawaiian Islands. Papahānaumoku is a mother figure personified by the earth, and Wākea is a father figure personified in the expansive sky. According to tradition, their union resulted in the creation of the entire Hawaiian archipelago. The components of the name of the monument—“Papa” (earth mother), “hānau” (birth), “moku” (small island or large land division), and “ākea” (wide)—bespeak a fertile woman giving birth to a wide stretch of islands beneath a benevolent sky.

February 28, 2007. Proclamation 8031 is amended by Proclamation 8112 to give the Northwestern Hawaiian Islands Marine National Monument the Hawaiian name Papahānaumokuākea Marine National Monument. March 1, 2007. The First Lady visits Midway in recognition of the newly designated Papahānaumokuākea Marine National Monument and to increase public awareness of its exceptional marine ecosystem. On March 2, 2007, in a ceremony in Honolulu, accompanied by the Governor of Hawaii and native Hawaiian elders, she announces the new native Hawaiian name of the marine monument. 2008. FWS contracts for a condition assessment of the cable station. Because of their deteriorated condition, FWS decides to salvage and dismantle three of the four two-story buildings and save one. 
FWS contracts to salvage the windows, doors, and other fixtures of the cable station. July 8, 2009. A military F-18 makes an emergency landing because of an engine failure. 2009. FWS’s Cultural Resources Team travels to Midway with the National Oceanic and Atmospheric Administration to record the terrestrial elements associated with the Battle of Midway for the American Battlefield Grant. Consultation is completed for the cable station and a memorandum of agreement is signed with stipulations that mitigate for the loss of three buildings. Engineering and historic preservation firms assess the condition of the seaplane hangar and present the results in two different studies. They begin the process of developing appropriate plans and costs for rehabilitating the seaplane hangar. December 2010. FWS revises its Midway Atoll National Wildlife Refuge Historic Preservation Plan 1999 and reissues it. American Recovery and Reinvestment Act funding is used to rehabilitate officers’ housing and for solar water heaters. July 30, 2010. Delegates to the United Nations Educational, Scientific and Cultural Organization’s 34th World Heritage Convention agree to inscribe Papahānaumokuākea Marine National Monument as one of 28 mixed (natural and cultural) World Heritage Sites. March 10, 2011. A tsunami caused by a 9.0 earthquake in Japan hits Midway. The tsunami covers about 60 percent of Eastern Island and 20 percent of Sand Island. There are no human casualties, but the boat piers and old seawalls are damaged. More than 110,000 Laysan and black-footed albatross chicks—about 22 percent of the year’s albatross production—and at least 2,000 adults/subadults are lost. Thousands of dead reef fish wash up on Eastern Island, and hundreds or potentially several thousand adult/subadult Bonin petrels are buried alive and die. 2011. Plans and costs to rehabilitate/repair the seaplane hangar are finalized and contract bids are reviewed. 
The project is halted because of the high cost. June 16, 2011. A Boeing 747-400 makes an emergency landing because of a cracked windshield. August 2, 2012. A military F-18 makes an emergency landing because of an in-flight emergency. November 14, 2012. Midway’s Public Visitation Program is suspended. July 10, 2014. A Boeing 777 en route to Guam with 348 passengers makes an unscheduled landing because of smoke in the cockpit. November 20, 2014. An oversight hearing titled “Is the Midway Atoll National Wildlife Refuge Being Properly Managed?” is held before the Subcommittee on Fisheries, Wildlife, Oceans, and Insular Affairs, House Committee on Natural Resources. July 14, 2015. Two U.S. Marine Corps F/A-18 Hornets, one with a cabin pressure malfunction, make an emergency landing on Midway. Since 1996, the U.S. Fish and Wildlife Service (FWS) has provided a variety of opportunities for the public to visit Midway Atoll (Midway). Given Midway’s remote location in the Pacific Ocean, providing public access to the wildlife refuge has been challenging. According to FWS officials we interviewed, almost 20,000 people visited Midway from 1996 through 2012. This appendix provides additional information about three phases of public visitation to Midway: 1996 to 2002, 2003 to 2006, and 2007 to 2012. FWS entered into a cooperative agreement with the Midway Phoenix Corporation in August 1996, amended in November 1997, to support a public use program at Midway. Under the amended cooperative agreement, FWS was responsible for establishing and enforcing national wildlife refuge policies, rules, and regulations and providing staff and expertise to assist in implementing and overseeing the public use program. The Midway Phoenix Corporation, under the amended cooperative agreement, was responsible for implementing and supporting the public use program. 
The cooperative agreement also established that the Midway Phoenix Corporation would provide the funding, staffing, supplies, equipment, logistics, and services to accomplish its responsibilities under the agreement. Under the agreement, the Midway Phoenix Corporation retained revenue derived from the goods and services it offered on Midway, including lodging and recreational activities, such as boat and fishing trips. The agreement also required the Midway Phoenix Corporation to provide the principal funding to develop, implement, and maintain a compatible public use program on the refuge and to contribute $200,000 per year to support FWS’s responsibilities under the agreement because FWS would not be able to meet its responsibilities without those funds. Under the cooperative agreement, the Midway Phoenix Corporation completed several capital improvement projects and coordinated air transportation to and from Midway. Capital improvement projects included the refurbishment of two barracks for overnight lodging, construction of a new restaurant and bar (see fig. 10), and the installation of a cell phone tower. According to the former executives of the Midway Phoenix Corporation, their initial capital investments on Midway totaled $15 million. Under the terms of the cooperative agreement, all newly constructed property was property of the United States and the Midway Phoenix Corporation did not have any claims to improvements made to the government property. The Midway Phoenix Corporation, using the Henderson Airfield runway, also coordinated air transportation from Honolulu using Phoenix Air and Aloha Airlines. According to the former Midway Phoenix Corporation executives we interviewed, air travel to Midway was subsidized by the company. The Midway Phoenix Corporation hired contractors to facilitate recreational activities and coordinated tours to Midway with several tour providers. 
Specifically, recreational activities were supported by contractors that operated catch and release sport fishing and scuba diving excursions (see fig. 11). Tour providers organized packages for visitors to Midway around those and other recreational activities, such as historic interpretation. In 1999, cruise ships also began to transport visitors to Midway, although their visits were typically for less than a day. FWS limited access to Midway in January 2002, and FWS and the Midway Phoenix Corporation entered into a settlement agreement in March 2002 that terminated their cooperative agreement and a 2001 fuel delivery contract. From 2002 to 2006, FWS did not operate a regularly scheduled public visitation program on Midway. Although FWS allowed the public to visit Midway during that time, the agency did not coordinate commercial or charter flights for the public as was the case under the prior public use program operated by the cooperator. As a result, visitors to Midway during this time arrived primarily by cruise ship, with a few additional visitors who arrived via private sailboats and aircraft. Visitors were required to obtain permission to travel to Midway from the refuge manager to ensure staff availability. To accommodate visitors, FWS charged cruise ship visitors a refuge access fee, and the cruise lines paid other costs, such as bringing interpretative staff to Midway. FWS also supported commemorative events, such as the 62nd anniversary of the Battle of Midway in 2004, where most visitors arrived by cruise ship. After the designation of the marine national monument in 2006, FWS reestablished a public visitation program to Midway beginning in 2007. Midway was established as a special management area and is the only location within the marine national monument that can be used for public visitation and recreation. 
However, this public visitation program operated differently from the previous iteration in that any activity that took place within the marine national monument was subject to the approval of the Monument Management Board. The Monument Management Board comprises representatives from the three agencies designated as co-trustees to manage the marine national monument—the state of Hawaii, the Department of the Interior, and the Department of Commerce. Under this public visitation program, tour providers were responsible for obtaining monument permits through the application process. Those permit applications were evaluated and approved by the co-trustees. Transportation to Midway under this public visitation program was primarily by charter flights from Honolulu. Tour providers coordinated with FWS to arrange for transportation to Midway on regularly scheduled charter flights. FWS supported recreational activities and interpretative tours and established a Midway visitor’s center and museum (see fig. 12). However, unlike the previous public visitation program operated from 1996 to 2002, sport fishing was prohibited and FWS did not facilitate scuba diving. FWS supported another commemorative event for the 65th anniversary of the Battle of Midway in 2007. This public visitation program was implemented by FWS until November 2012, when public visitation to Midway was suspended because of resource constraints. See table 5 for a summary of public visitation on Midway since 1996. We visited Midway Atoll (Midway) from April 7, 2015, to April 21, 2015, to document the physical appearance of historic and other properties on Midway, including on Sand and Eastern Islands. This appendix presents selected photographs of properties that are National Historic Landmarks, are eligible for inclusion on the National Register of Historic Places, or have the potential to be used for public visitation. 
It also presents photographs of other properties that were identified as important during our interviews with U.S. Fish and Wildlife Service (FWS) officials and stakeholders, including the former executives of the Midway Phoenix Corporation and the International Midway Memorial Foundation. In total, 65 properties met our selection criteria and are included in this appendix. In 2009, FWS conducted an on-site review of Midway properties to determine their condition. Each property’s condition was rated on the following scale: excellent, good, fair, poor, or failed. We did not independently assess the condition or the FWS rating during our review. Of the 65 properties included in this appendix, 35 properties were determined by FWS to be in fair condition; 12 properties, in poor condition; 8 properties, in fair to poor condition; 5 properties, in failed condition; and 1 property, in fair to good condition. For 2 properties, the condition was unknown. In addition, 2 of the properties were condemned, meaning that the properties are no longer safe to enter. None of the properties included in this appendix were assessed to be in excellent or good condition. The appendix is generally organized by the approximate date of construction, with the oldest properties first and newer constructed properties last. Each property is identified by a number, based on the Navy facility number or a number assigned during a study of cultural resources on Midway. For each property, FWS provided information regarding its use status as of 2015. (See figs. 13 through 70.) To view these photographs online, please click on this hyperlink. In addition to the contact named above, Jeff Malcolm (Assistant Director), Carolyn Blocker, Patricia Donahue, Amanda Goolden, Cynthia Norris, Carl Potenzieri, Dan Royer, Jerry Sandau, Ilga Semeiks, and Jeanette Soares made key contributions to this report. 
Melanie Papasian Fallow, Doug Manor, Ernest Powell Jr., and Timothy Walker made key contributions to the multimedia for this report.
Midway, a trio of islands about 1,300 miles from Honolulu, has been managed by FWS as a wildlife refuge since the closure of a naval base in 1996. Midway also serves as a national memorial to a historic World War II battle. GAO was asked to review FWS's management of Midway. This report examines (1) funding for operations and projects on Midway for fiscal years 2009 to 2015, (2) how FWS maintained historic properties on Midway and the extent to which it consulted with key parties and sought public comment, (3) challenges FWS faces in reestablishing a public visitation program, and (4) the use of Midway's Henderson Airfield. GAO visited Midway in April 2015 to observe the condition of historic and other properties. GAO reviewed budget data for Midway from fiscal years 2009 through 2015; reviewed laws, policies, and regulations on historic preservation; examined public visitation plans, emergency landing data, and the use of the airfield; and interviewed FWS, FAA, and Advisory Council on Historic Preservation officials and other stakeholders. According to officials in the Department of the Interior's U.S. Fish and Wildlife Service (FWS), operations funding for Midway Atoll (Midway) has decreased in recent years and project-specific funding has varied. Specifically, budget officials said that FWS, after increasing the funding allocated to Midway's operations to more than $4 million by fiscal year 2011, decreased Midway's allocation by more than $1 million for fiscal years 2012 and 2013. These officials said that the lower allocation led to suspension of public visitation on Midway in November 2012, which, in turn, decreased operations funding available from fees collected for services such as lodging. Midway has also received funding for specific projects, such as lead-based paint abatement. 
In addition, under an interagency agreement, the Federal Aviation Administration (FAA) has reimbursed FWS up to $3 million per year for the direct costs of operating Midway's Henderson Airfield and provided additional funds for capital improvement projects, such as resurfacing runways. FWS has maintained most historic properties on Midway but has demolished others without providing for public notice and involvement, which is not consistent with the regulations implementing section 106 of the National Historic Preservation Act. A 2000 order by the Secretary of the Interior directs FWS to administer Midway in accordance with the law. Federal agencies must provide the public with notice of and opportunity to comment on agency actions that may affect historic properties. Since 2012, FWS has demolished seven historic properties on Midway as part of the agency's removal of lead-based paint and has taken another action adversely affecting historic properties, in both cases without providing adequate public notification (including directly notifying parties that have previously expressed interest in historic preservation issues on Midway) or an opportunity for public comment. An FWS official said that the extent of such notification may vary based on the size of the actions. However, officials with the Advisory Council on Historic Preservation said that groups known to have a high level of interest should be notified directly. Without providing public notification, including ensuring that such parties are notified about future actions that may adversely affect historic properties, FWS will not have reasonable assurance that it is adequately seeking public comment and input under the section 106 process. Since the suspension of public visitation to Midway in 2012, FWS has faced multiple challenges relating to personnel, infrastructure, supplies, and transportation access in reestablishing the program. 
For example, the building used to house visitors is almost beyond the point of repair and renovation, according to a 2009 assessment of its condition. FWS officials estimated that $1.2 million in annual funding would be needed to reestablish a public visitation program and that additional funding would also be needed for start-up costs. Midway's Henderson Airfield serves as an emergency landing airport for aircraft in the mid-Pacific Ocean region and facilitates access to Midway and its surroundings. Under FAA regulations, air carriers must designate in their flight plans a certified airport for use in the event of an emergency during extended operations. Since 2003, there have been seven military and civilian emergency landings on Midway. GAO recommends that FWS ensure that the public, including previously interested key parties, is notified about FWS actions on Midway that may have an adverse effect on historic properties. The Department of the Interior agreed with GAO's recommendation. View a video of GAO's review of FWS's management of Midway. To view high-resolution photographs from this report, please see GAO's Flickr page.
In the event of a disaster, such as an influenza pandemic, states may request federal assistance to maintain essential services pursuant to the Robert T. Stafford Disaster Relief and Emergency Assistance Act (Stafford Act) of 1974. The Stafford Act primarily establishes the programs and processes for the federal government to provide disaster assistance to state and local governments and tribal nations, individuals, and qualified private nonprofit organizations. Federal assistance may include technical assistance, the provision of goods and services, and financial assistance. The Federal Emergency Management Agency (FEMA), which is part of DHS, is responsible for carrying out the functions and authorities of the Stafford Act. For Stafford Act incidents, upon the recommendation of the Secretary of Homeland Security and the FEMA Administrator, the President may appoint a Federal Coordinating Officer (FCO) to manage and coordinate federal resource support activities provided pursuant to the Stafford Act. DHS has recently updated the National Response Plan, now called the National Response Framework (NRF). To assist in planning and coordinating efforts to respond to an influenza pandemic, in December 2006, the Secretary of Homeland Security predesignated a national Principal Federal Official (PFO) and FCO for influenza pandemic, and established five federal influenza pandemic regions each with a regional PFO and FCO. This structure was formalized in the NRF. The PFO facilitates federal support to establish incident management and assistance activities for prevention, preparedness, response, and recovery efforts while the FCO manages and coordinates federal resource support activities provided pursuant to the Stafford Act. The PFO is to provide a primary point of contact and situational awareness for the Secretary of Homeland Security. 
In addition, according to an official in HHS’ Office of the Assistant Secretary for Preparedness and Response (ASPR), HHS has also predesignated a national Senior Federal Official (SFO) for influenza pandemic and a regional SFO in each of the five federal influenza pandemic regions; these officials serve as ambassadors for public health to states, territories, and the District of Columbia, aligning with the PFO and FCO structure. The federal influenza pandemic regions, each of which consists of two standard federal regions, are shown below. In addition, under the Public Health Service Act, the Secretary of Health and Human Services has the authority to declare a public health emergency and to take actions necessary to respond to that emergency consistent with his/her authorities. These actions may include making grants, entering into contracts, and conducting and supporting investigations into the cause, treatment, or prevention of the disease or disorder that caused the emergency. According to the National Pandemic Implementation Plan, as the lead agency responsible for public health and medical care, HHS would lead efforts during an influenza pandemic while DHS would be responsible for overall nonmedical support such as domestic incident management and federal coordination. In December 2006, Congress passed the Pandemic and All-Hazards Preparedness Act (PAHPA), which codifies preparedness and response federal leadership roles and responsibilities for public health and medical emergencies by designating the Secretary of Health and Human Services as the lead federal official for public health and medical preparedness and response. The act also prescribes several new preparedness responsibilities for HHS. Among these, the Secretary must develop and disseminate criteria for an effective state plan for responding to an influenza pandemic. 
Additionally, the Secretary is required to develop and require the application of evidence-based benchmarks and objective standards that measure the levels of preparedness for public health emergencies in consultation with state, local, and tribal officials and private entities, as appropriate. Application of these benchmarks and standards is required of entities receiving funds under HHS public health emergency preparedness grant and cooperative agreement programs. Beginning in fiscal year 2009, the Secretary of Health and Human Services is to withhold certain amounts of funding under these grant and cooperative agreement programs where a state has failed to develop an influenza pandemic plan that is consistent with the criteria established by HHS or where an entity has failed to meet the benchmarks or standards established. In addition to the federal pandemic funds provided for states and localities by Congress in fiscal year 2006, HHS and DHS receive funds for public health and emergency management grant programs that can be used by states and localities to continue to support influenza pandemic efforts. In fiscal year 2006, Congress appropriated $5.62 billion in supplemental funding to HHS for, among other things, (1) monitoring disease spread to support rapid response, (2) developing vaccines and vaccine production capacity, (3) stockpiling antivirals and other countermeasures, (4) upgrading state and local capacity, and (5) upgrading laboratories and research at CDC. As shown in figure 2, a total of $770 million, or about 14 percent, of this supplemental funding went to states and localities for preparedness activities. Of the $770 million, $600 million was specifically provided by Congress for state and local planning and exercising while the remaining $170 million was allocated for state antiviral purchases. According to HHS, as of May 2008, states had purchased $21.9 million of treatment courses of influenza antivirals for their state stockpiles. 
In addition to these state stockpiles of antivirals, HHS has also acquired antivirals that are in the HHS-managed Strategic National Stockpile, which is a national repository of medical supplies that is designed to supplement and resupply local public health agencies in the event of a public health emergency. In addition to the federal pandemic funds specifically provided by Congress, which are administered for HHS by CDC, HHS officials said that states and localities could use funds provided under two other HHS public health emergency preparedness cooperative agreement programs to continue to support their influenza pandemic activities. The Public Health Emergency Preparedness Program (PHEP), which is a cooperative agreement administered by CDC, is intended to improve state and local public health security capabilities. Specifically, the Cities Readiness Initiative, a component of PHEP, is intended to ensure that major cities and metropolitan areas are prepared to distribute medicine and medical supplies during a large-scale public health emergency. The Hospital Preparedness Program, which is administered by HHS ASPR, is intended to improve surge capacity and enhance community and hospital preparedness for public health emergencies. DHS officials also said that states and localities could use funds provided under three of the Homeland Security Grant Program grants, which are administered by DHS’s Office of Grants and Training, to continue to support influenza pandemic activities. The State Homeland Security Grant Program’s purpose includes supporting, building, and sustaining capabilities at the state and local levels through planning, equipment, training, and exercise activities. 
The Metropolitan Medical Response System Program is intended to support an integrated, systematic mass casualty incident preparedness program that enables an effective response during the first crucial hours of an incident such as an epidemic outbreak, a natural disaster, or a large-scale hazardous materials incident. The Urban Area Security Initiative Grant Program is intended to address the unique planning, equipment, training, and exercise needs of high-threat, high-density urban areas.

All five states and 10 localities we reviewed, both urban and rural, had developed influenza pandemic plans. As directed by the federal pandemic funding guidance, all 50 states and the localities that received direct funding through the PHEP and Hospital Preparedness Program were required to plan and exercise for an influenza pandemic. According to CDC officials, all 50 states have developed an influenza pandemic plan. CDC divided the $600 million designated by Congress for state and local planning and exercising into three phases. Recipients included 50 states, five territories, three Freely Associated States of the Pacific, three localities, and the District of Columbia. CDC awarded $100 million for Phase I in March 2006, $250 million for Phase II in two disbursements—July 2006 and March 2008—and $250 million for Phase III in two disbursements—September 2007 and October 2007. Phase III is to be completed in 2008 and will be the final phase of dedicated federal pandemic funds for states and localities that received direct federal funding.
For Phase I, recipients were expected to comply with the following requirements, among others:

- establish a committee or consortium at the state and local levels with which the recipient is engaged that represents all relevant stakeholders in the jurisdiction, such as public health, emergency response, business, community-based, and faith-based sectors;
- implement a planning framework for influenza pandemic preparedness and response activities to support public health and medical efforts;
- collaborate among public health and medical preparedness, influenza, infectious disease, and immunization programs and state and local emergency management to maximize the effect of funds and efforts;
- coordinate activities between state and local jurisdictions, tribes, and military installations; among local agencies; with hospitals and major health care facilities; and with adjacent states;
- conduct exercises to test the plans of states or localities that receive the funding directly and prepare an after-action report, which is a summary of lessons learned highlighting necessary corrective actions;
- assess gaps in pandemic preparedness using CDC’s self-assessment tool to evaluate the jurisdiction’s current state of preparedness;
- submit a proposed approach to filling the identified gaps; and
- provide an associated budget for the critical tasks necessary to address those gaps.

According to CDC officials, all entities that received direct federal funding have met the requirements for Phase I of the federal pandemic funds.
For Phase II, recipients were expected to comply with the following four priority activities, among others:

- development of a jurisdictional work plan to address gaps identified by the CDC self-assessment process in Phase I;
- development and exercise of an antiviral drug distribution plan;
- development of a pandemic exercise program that includes medical surge, mass prophylaxis, and nonpharmacological public health interventions and a community containment plan, with emphasis on, at a minimum, closing schools and discouraging large public gatherings; and
- submission of an influenza pandemic operational plan to CDC.

According to HHS, CDC has reviewed whether recipients met the requirements identified in the Phase II guidance. In addition, recipients were asked to document the process used to engage Indian tribal governments in Phases I and II and to develop and implement an influenza pandemic preparedness exercise program involving community partners to exercise their capabilities and prepare an after-action report highlighting necessary corrective actions. Unlike Phase I, which made no mention of DHS’s Homeland Security Exercise and Evaluation Program (HSEEP), in Phase II CDC encouraged, but did not require, recipients to use HSEEP for disaster planning and exercising efforts. HSEEP guidance defines seven different types of exercises, each of which is either discussions-based or operations-based. Discussions-based exercises are a starting point in the building block approach of escalating exercise complexity. These types of exercises typically highlight existing plans, policies, interagency and interjurisdictional agreements, and procedures and focus on strategic, policy-oriented issues. An example of a discussions-based exercise is a tabletop exercise, which can be used to assess plans, policies, and procedures or to assess the systems needed to guide the prevention of, response to, and recovery from a defined incident.
Operations-based exercises are characterized by an actual reaction to simulated intelligence; response to emergency conditions; mobilization of apparatus, resources, and networks; and commitment of personnel, usually over an extended period. These exercises are used to validate the plans, policies, agreements, and procedures assessed in discussions-based exercises. An example of an operations-based exercise is a full-scale exercise, which is a multiagency, multijurisdictional, multiorganizational exercise that validates many facets of preparedness. CDC’s federal pandemic funding guidance for Phases I and II did not explicitly specify the type of exercises to be conducted; the exception was the mass prophylaxis exercise for Phase II, which was required to be an operations-based exercise. Compliance with HSEEP protocols entails four distinct performance requirements: (1) conducting an annual training and exercise plan workshop and developing and maintaining a multiyear training and exercise plan, (2) planning and conducting exercises in accordance with the guidelines set forth by HSEEP, (3) developing and submitting an after-action report, and (4) tracking and implementing corrective actions identified in the after-action report. For Phase II, the National Governors Association conducted a series of nine influenza pandemic regional workshops for states between April 2007 and January 2008 to enhance intergovernmental and interstate coordination. In a February 2008 issue brief, the National Governors Association reported its results from five regional influenza pandemic preparedness workshops involving 27 states and territories conducted between April and August 2007.
The workshops were designed to identify gaps in state influenza pandemic preparedness—specifically in non-health-related areas such as continuity of government, maintenance of essential services, and coordination with the private sector—and to examine strengths and weaknesses of coordination activities among various levels of government. The workshops also included a discussions-based exercise focused on regional issues.

For Phase III, recipients were asked to describe ongoing influenza pandemic–related priority projects that would improve exercising and response capabilities specifically for an influenza pandemic. Phase III required recipients to fill planning gaps identified in Phases I and II. In addition, recipients were expected to comply with the following requirements, among others:

- submit workplans that included specific influenza pandemic planning, implementation, and evaluation of activities;
- update the existing influenza pandemic operational plan, based on CDC’s assessment of six priority thematic areas, by January 2008;
- create an exercise strategy and schedule; and
- utilize the tools developed under DHS’s HSEEP to create planning, training, and exercise evaluation programs, which include an after-action report, improvement plan, and corrective action program for each seminar, tabletop, functional, or full-scale exercise conducted.

Over the past several years, states have made progress in developing pandemic plans. In 2006, CDC reported that most states did not have complete influenza pandemic plans addressing areas such as enhancing surveillance and laboratory capacity, managing vaccines and antivirals, and implementing community containment measures to reduce influenza transmission. However, according to CDC officials, all 50 states, the territories, and the District of Columbia now have influenza pandemic plans.
Trust for America’s Health, a health advocacy nonprofit organization, reported that states’ publicly available influenza pandemic plans varied from comprehensive influenza pandemic plans, to free-standing annexes to emergency management plans, to mere summaries of a state’s influenza pandemic plan. At the time of our review, all five states we reviewed had influenza pandemic plans that focused on leadership, surveillance and laboratory testing, vaccine and antiviral distribution, and communications. Some state plans included sections on education and training and on infection control. Two of the three localities that received the federal pandemic funds in our study addressed similar types of topics, such as disease surveillance and laboratory testing, health care planning, vaccine and antiviral distribution, mental health response, and communications in their influenza pandemic plans. Most of the remaining urban and rural localities also primarily addressed similar topics.

In planning for an influenza pandemic, officials from three of the five states and two of the three localities that received the federal pandemic funds told us that they interacted with HHS and CDC. However, federal officials did not reach out to states and localities when the National Pandemic Implementation Plan was being developed, and the PFOs for influenza pandemic had limited interaction with the selected states and localities. At the time of our site visits, officials from three of the five states and two of the three localities that received direct federal funding reported interacting with HHS and CDC in planning for an influenza pandemic to clarify funding requirements and expectations. CDC officials in the Coordinating Office for Terrorism Preparedness and Emergency Response also told us that they reviewed reports from the states and local government recipients on how they had met the federal pandemic funding requirements.
CDC then provided feedback to the states and localities on how well they were meeting the requirements. In addition, CDC officials told us that they provided technical assistance when requested.

While the federal government has provided some support to states in their planning efforts, states and localities have had little involvement in national planning for an influenza pandemic. The National Pandemic Implementation Plan lays out a series of actions and defines responsibilities for those actions. The plan includes 324 action items, 17 of which call for states and local governments to lead national and subnational efforts, and 64 in which their involvement is needed. In our August 2007 report, we highlighted that key stakeholders such as state and local governments were not directly involved in developing the action items in the National Pandemic Implementation Plan and the performance measures that are to assess progress, even though the plan relies on these stakeholders’ efforts. Stakeholder involvement during the planning process is important to ensure that the federal government’s and nonfederal entities’ responsibilities and resource requirements are clearly understood and agreed upon. Moreover, HHS ASPR officials confirmed that the National Pandemic Implementation Plan was developed by the federal government without any state input. Officials from all of the states and localities reviewed told us that they were not directly involved in developing the plan. Officials from all five of the states and seven of the localities were aware of the National Pandemic Implementation Plan. Officials from Taylor County (Florida), Peoria County (Illinois), and Washington County (New York) had not seen it.
State officials from Florida, New York, and Texas, and officials from two localities in California and one locality in New York, reported that they used its action items for their own planning efforts. In addition, states and localities reported limited interaction with the predesignated federal PFOs and FCOs in coordinating influenza pandemic efforts. According to the national PFO for influenza pandemic, the PFOs had limited interaction with state governments on influenza pandemic efforts because, until the National Response Framework was issued in January 2008 and finalized in March 2008, it was unclear whether the PFO structure for an influenza pandemic would remain in it. The Secretary of Homeland Security sent letters in December 2006 and in March 2008 to state governors on the PFO structure, and the PFO structure was discussed at the HHS- and DHS-led workshops in the five federal pandemic regions. At the time of our site visits, we found that only state officials in California and New York were aware of these federally predesignated officials. In addition, in its issue brief on the five state influenza pandemic workshops, the National Governors Association reported that the presence of the PFOs for influenza pandemic at two of their workshops was the first opportunity for most states to interact with these officials.

In every state and locality reviewed, officials told us that they involved other state and local agencies within their jurisdiction in accordance with federal pandemic funding requirements. Health and emergency management officials at some of the states and localities reviewed said they collaborated with each other to develop the influenza pandemic plan for public health response, as required by the federal pandemic funds, and the influenza pandemic annex for emergency response where applicable.
For example, the Miami-Dade County Health Department (Florida) collaborated with the Miami-Dade County Pandemic Influenza Workgroup, which included stakeholders such as the Miami-Dade County Department of Emergency Management and Homeland Security, the CDC Miami Quarantine Station, the Medical Examiner Department, and the Miami-Dade Corrections and Rehabilitation Department, to develop its influenza pandemic plan. This plan is also used as an annex to the Miami-Dade County Department of Emergency Management and Homeland Security’s Comprehensive Emergency Management Plan. In some cases, both the health and emergency management departments at the state and local levels developed separate influenza pandemic plans to address health and emergency response efforts, respectively, while in other cases the emergency management departments used the health department’s influenza pandemic plan as an annex to their emergency operations plans. In addition to developing their own influenza pandemic plans, state public health agencies in all the states reviewed assisted their local counterparts with their influenza pandemic plans. For example, officials from the Florida Department of Health said they used a standardized assessment tool to assess county influenza pandemic plans on 36 elements such as surveillance, response and containment, and community-based control and mitigation interventions. The tool also included a section on strengths and areas for improvement for each element. Further, New York State Department of Health officials said that they reviewed all of the county-level influenza pandemic plans and provided feedback. We also found that in some cases, localities consulted other localities’ influenza pandemic plans to help them develop their own plans.
For example, officials from Stanislaus County Health Services Agency (California), Miami-Dade County Health Department (Florida), and Dallas County Health and Human Services (Texas) said they reviewed King County’s (Washington) influenza pandemic plan to help them develop their own plans. Officials at all 15 of the states and localities reviewed also said they assisted other state and local agencies within their jurisdiction in their influenza pandemic efforts by reviewing each other’s plans or sharing information. For example, New York State Department of Health officials said that as the lead agency responsible for influenza pandemic planning efforts, they participated in and coordinated meetings with other state agencies such as the Unified Court System and Department of Correctional Services to discuss areas such as infection control and community containment, visitation policies during an influenza pandemic, management of sick inmates, emergency staffing plans, and employee education and training. Officials from 6 of the 15 states and localities we reviewed reported that they had tribal nations within their jurisdictions. Of these 6, only officials from California, Florida, New York state, and Miami told us that they had included tribal nations in their influenza planning efforts, as required by the federal pandemic funds. For example, officials from the New York State Department of Health said they provided guidance to the Mohawk and Seneca tribes in developing influenza pandemic plans. Tribal nation representatives also had access to the state’s health provider network and were invited to influenza pandemic training sessions and monthly influenza pandemic conference calls. Officials from Texas and Taylor County (Florida) reported that they did not include tribal nations in their influenza planning efforts. 
Texas Department of State Health Services officials reported that there are three tribes within the state with which the respective counties are coordinating. In Taylor County (Florida), officials reported that they had not yet involved their local tribe, the Miccosukee tribe, in their influenza pandemic planning efforts.

Officials from all five states and four localities also reported that they provided guidance or technical assistance for continuity planning efforts to nonprofit organizations, and officials from all five states and seven localities told us that they provided the same assistance to the private sector. States and localities that received direct federal pandemic funding are required to involve nonprofit organizations and the private sector in planning for an influenza pandemic. For example, Peoria City/County Health Department (Illinois) officials told us that in addition to contracting with the Red Cross to provide bulk food distribution services during an influenza pandemic, they had held initial discussions on how to implement isolation and quarantine. Officials from the New York City Department of Health and Mental Hygiene (New York) stated that they partnered with the New York City Department of Small Business Services and conducted six focus groups with approximately 60 participants from nonprofit and for-profit organizations to provide general information related to an influenza pandemic and to discuss the continuity strategies in CDC’s Business Pandemic Influenza Planning Checklist and the feasibility of adopting them.
While all five selected states and seven localities have coordinated with the private sector for influenza pandemic planning, several officials from state agencies in Florida and Illinois, and local agencies in Los Angeles County (California), Chicago (Illinois), and Dallas County (Texas), have focused specifically on critical infrastructure sectors, such as transportation (highway and motor carriers), food and agriculture, water, energy (electricity), and telecommunications (communications). Officials from the Dallas County Department of Health and Human Services (Texas) said that they assisted a local power company and a grocery chain with continuity of operations planning for an influenza pandemic. The National Governors Association reported in its February 2008 issue brief that few states from its five regional workshops had defined the roles and responsibilities of private sector entities. Moreover, potential shortages of critical goods and services—specifically, food, electricity, and transportation capacity—were cited as key areas of concern across all five National Governors Association-led workshops. While Idaho, Minnesota, Montana, North Dakota, South Dakota, and Utah were less concerned about the food supply due to longstanding practices of stockpiling against severe weather and other threats, other participating states were concerned that they did not have agreements in place with private sector food distribution and retail systems.

Since we visited these states and localities, HHS provided feedback to the states in November 2007 on whether their influenza pandemic plans addressed certain priority areas, such as fatality management, and found that there were major gaps nationally in the plans in these priority areas.
In response to an action item in the National Pandemic Implementation Plan, HHS led a multidepartment effort to review pertinent parts of states’ influenza pandemic plans in 22 priority areas along with other federal agencies such as the Departments of Agriculture, Commerce, Education, Homeland Security, Justice, Labor, and State under the auspices of the Homeland Security Council. For example, DHS was responsible for reviewing the priority area of how states worked with the private sector to ensure critical essential services. States were required to submit parts of their plans that addressed the priority areas to CDC by March 2007. The participating departments reviewed the pertinent parts of the plans and HHS compiled the results into individual draft interim assessments, which included the status of planning for each entity and how they measured against the national average for the priority areas, and provided this feedback to the states. As shown in table 1, on average, states had major gaps in all areas, with a ranking of “many major gaps” in 16 of the 22 priority areas and “a few major gaps” in the remaining 6 priority areas, as defined by HHS. An official in HHS ASPR told us that generally, the states fared better in the public health priority areas such as mass vaccination and antiviral drug distribution plans than in other areas such as school closures and sustaining critical infrastructure. As we will discuss in more detail later in the report, we found that the areas in which state and local officials were looking for additional federal guidance were often the same areas that were rated by HHS as having “many major gaps” in planning. Every state received individual comments from CDC on the strengths and weaknesses of their influenza pandemic plans in six priority areas. According to HHS officials in ASPR, states also received feedback in some of the remaining priority areas. 
In addition, states received general comments from the Departments of Agriculture, Commerce, Labor, Homeland Security, and Justice. The Departments of Commerce, Labor, and Homeland Security noted that many state influenza pandemic plans did not address the effect of social distancing in private workplaces or state agencies. Nor did they address issues related to loss of jobs and income for workers, particularly for those needing to stay home to care for children dismissed from school or to care for themselves or ill relatives. Further, they concluded that many states needed to develop occupational safety and health plans that dealt with infection control and other influenza pandemic issues such as worker behavioral and mental health concerns. HHS, DHS, and other federal agencies issued guidance to states in March 2008 to assist them in updating their current influenza pandemic plans. These updated plans are due in July 2008. HHS will provide feedback on the strengths and weaknesses of these plans, as it did for the previous review.

Disaster planning, including for an influenza pandemic, needs to be tested and refined with a rigorous and robust exercise program to expose weaknesses in planning and allow planners to address them. Exercises—particularly for the type and magnitude of emergency incidents, such as a severe influenza pandemic, for which there is little actual experience—are essential for developing skills and identifying what works well and what needs further improvement. The first phase of the federal pandemic funds required states and localities that received this funding to test their influenza pandemic plans. CDC officials stated that their expectation was that the recipients would conduct a gap analysis using CDC’s self-assessment tool to identify objectives to exercise to improve their plans and then exercise the identified vulnerabilities of their plans, rather than testing their entire plan.
According to CDC officials, all states and localities that received this funding have met the requirement to conduct a discussions-based or operations-based exercise to test their influenza pandemic plans and to prepare an after-action report. The second phase of funding required states and localities that received the funding directly to conduct an exercise that would test an antiviral drug distribution plan and to develop an influenza pandemic exercise schedule that included medical surge, mass prophylaxis, and nonpharmaceutical public health interventions such as closing schools and discouraging large public gatherings. As noted earlier, HHS stated that CDC has reviewed whether recipients met the requirements identified in the Phase II guidance.

All of the states and localities in our review, except for two of the localities, had conducted at least one influenza pandemic exercise to test their influenza pandemic planning. The two localities that had not conducted their own exercise had participated in discussions-based exercises in other jurisdictions. Among the states and localities that had conducted an exercise, one state and two localities conducted at least one discussions-based and one operations-based exercise, one state and one locality conducted at least one operations-based exercise, and the remaining three states and five localities conducted at least one discussions-based influenza pandemic exercise. For example, the Stanislaus County Health Services Agency (California) conducted an influenza pandemic discussions-based exercise, and the New York City Department of Health and Mental Hygiene (New York) conducted both discussions-based and operations-based influenza pandemic exercises. In addition, state agencies in New York, Texas, and Illinois conducted or funded regional influenza pandemic exercises that included multiple jurisdictions within each state.
For example, the Peoria City/County Health Department (Illinois) participated in an influenza pandemic discussions-based exercise with nine other counties. According to the National Governors Association, the states’ influenza pandemic exercises have been almost exclusively discussions-based, and few states have held regional or multistate exercises. In addition, at all but one of the states and localities that had conducted at least one influenza pandemic exercise, the health department conducted the exercise. In all but one of the states and localities reviewed, emergency management officials had either conducted or participated in an influenza pandemic exercise.

Officials of all states and localities reviewed reported that they had incorporated lessons learned from exercises into their influenza pandemic planning. Officials told us that the changes made as a result of an exercise included buying additional medical equipment and providing training. For example, officials at the New York City Department of Health and Mental Hygiene (New York) informed us that an influenza pandemic exercise identified a potential shortage of ventilators. In response, they purchased 70 ventilators that were relatively easy to train staff to use, and selected hospitals were using them. Other influenza pandemic exercises resulted in additional training. For example, Stanislaus County Health Services Agency (California) officials identified the need for their staff to be trained in the National Incident Management System (NIMS), which is a consistent nationwide approach to enable all government, private-sector, and nongovernmental organizations to work together to prepare for, respond to, and recover from domestic incidents. All county staff have subsequently been trained in NIMS. Furthermore, state and local officials stated that influenza pandemic exercises led to modifying policies or influenza pandemic plans.
Officials at the Illinois Department of Public Health realized during an exercise that a judge’s ruling would be needed to quarantine an individual with a suspected contagious disease. As a result, the department sought and obtained amendments to its authority so that, if voluntary compliance cannot be obtained, the department can quarantine an individual with a suspected contagious disease for 2 days before a judge’s ruling is necessary. In addition, officials at the Dallas County Department of Health and Human Services (Texas) reported that they identified the need for, and subsequently developed, an appendix to their influenza pandemic plan on school closures during a pandemic that included factors for schools to consider in deciding when to close and for how long.

HHS (including CDC), DHS, and other federal agencies have provided a variety of influenza pandemic information and guidance for states and local governments through Web sites and state and regional meetings. HHS and CDC have disseminated pandemic preparedness checklists for workplaces, individuals and families, schools, health care, and community organizations, with one geared toward state and local governments. HHS and CDC have also provided additional influenza pandemic guidance, including Interim Pre-pandemic Planning Guidance: Community Strategy for Pandemic Influenza Mitigation in the United States (February 2007). CDC and other federal agencies are currently considering the Interim Guidance for the Use of Intervals, Triggers, and Actions in Pandemic Influenza Planning, which was developed by HHS and CDC and provides a framework and thresholds for implementing student dismissal and school closure. HHS also issued Interim Public Health Guidance for the Use of Facemasks and Respirators in Non-Occupational Community Settings during an Influenza Pandemic, and funded Providing Mass Medical Care with Scarce Resources: A Community Planning Guide (November 2006).
CDC officials stated that the journal CHEST published four papers on providing mass critical care with scarce resources for all hazards in May 2008. In addition, HHS funded guidance on exercising for an influenza pandemic, including discussions-based exercises for influenza pandemic preparedness for local public health agencies. Furthermore, the federal planning guidance for states to update their influenza pandemic plans, provided by HHS, DHS, and other federal agencies, includes references to federal guidance that pertains to the topics on which the states’ plans will be assessed. The guidance includes preparedness and planning advice and information on specific tasks and capabilities that the states’ plans should contain for each of the priority areas on which the states will be assessed. The guidance contains information on several of the priority areas on which state and local officials were looking for additional guidance and that were rated as having “many major gaps” in planning in the first assessment, such as fatality management and community containment. However, while the guidance document states what the states’ plans should contain for each of the topics, it does not explain how to implement these tasks and capabilities. HHS and DHS, in coordination with other federal agencies, have also developed draft guidance on how to allocate limited supplies of vaccines, including target groups of individuals, and are working on similar guidance for antivirals. They are also working on guidance on the prophylactic use of antivirals (administering antivirals to individuals who have not shown symptoms). However, HHS and DHS officials acknowledged that the federal government has not provided guidance on some of the influenza pandemic-specific topics on which state and local officials told us they would like federal guidance, such as ethical decision making and liability and legal issues.
There are also two federal Web sites that contain influenza pandemic information. The Web site www.pandemicflu.gov is intended to provide one-stop access to U.S. government avian and pandemic flu information. The site includes guidance and information on state and local planning and response activities, such as all state influenza pandemic plans. The Web site www.llis.dhs.gov is a national network of lessons learned and best practices for emergency response providers and homeland security officials and contains information on many different topic areas, such as cyber security and wildland fires. Lessons Learned Information Sharing (LLIS) officials stated that the best practices are vetted by working groups of subject matter experts. LLIS has an influenza pandemic topic area that includes news, upcoming events, plans and guidance, after-action reports, and best practices. An LLIS representative also informed us that there is an influenza pandemic forum that acts as a message board for LLIS users to discuss topics, which have included how to implement teleworking during an influenza pandemic. In addition, there is an influenza pandemic channel on the Web site that has a document and resource library and a message board, including topics such as antiviral and vaccine planning. HHS officials stated that CDC and LLIS have created a secure channel for state and local health departments to post and share influenza pandemic exercise information. According to an LLIS representative, the secure channel contains the influenza pandemic exercise schedules for states and localities that receive the funding directly, and there are plans to include after-action reports from the exercises on the Web site. There are also several nonfederal Web sites that contain influenza pandemic practices on particular topics.
The Center for Infectious Disease Research and Policy at the University of Minnesota has collected and peer-reviewed influenza pandemic "promising practices" that can be adapted or adopted by public health stakeholders. The center's Web site (http://www.pandemicpractices.org/practices/list.do?topic-id=13) has practices on three themes: models for care (surge capacity, standards of care, triage strategies, out-of-hospital care, and collaborations), communications (risk communications, community engagement, and resiliency), and mitigation (nonpharmaceutical interventions). In addition, National Public Health Information Coalition officials said that they are planning to post influenza pandemic communications on their Web site. CDC officials also stated that CDC has a cooperative agreement with the Association of State and Territorial Health Officials and the National Association of County and City Health Officials to provide influenza pandemic best practices and tools that states and localities can download from their respective Web sites. In addition to providing guidance, HHS has also convened state influenza pandemic planning summits and funded regional state influenza pandemic workshops. To help coordinate influenza pandemic planning, HHS and other federal agencies, including DHS, held "State Pandemic Planning Summits" with the public health and emergency response community in all states in 2005 and 2006. As part of the summits, the Secretary of Health and Human Services signed memorandums of understanding (MOU) with each state that identified shared common goals and shared and independent responsibilities between HHS and the individual state for influenza pandemic planning and preparedness.
For example, the MOU between HHS and the state of California noted that states and local communities are responsible under their own authorities for responding to an influenza pandemic outbreak within their jurisdictions and for having comprehensive influenza pandemic preparedness plans and measures in place to protect their citizens. In addition, to further assist states and localities with their influenza pandemic preparedness efforts, HHS funded the National Governors Association to conduct a series of influenza pandemic regional workshops for states, the first five of which are discussed earlier. A National Governors Association official stated that the association held nine workshops between April 2007 and January 2008 and that it is not planning to conduct additional influenza pandemic workshops for states. In addition, in May 2008, FEMA hosted an influenza pandemic exercise and seminar for senior executives. The purpose of the exercise, which involved FEMA officials, the Pandemic Region A PFO team, and a number of states in Pandemic Region A, was to determine best practices for communication and coordination during an influenza pandemic response. The senior executive seminar, which included officials from CDC, HHS, DHS, and a number of states in Pandemic Region C, was intended to address pandemic risk, challenges, and issues, both regionally and nationally. FEMA is also planning to host another influenza pandemic seminar in May 2008 for the other states in Pandemic Region C that did not participate in the previous seminar. Despite these efforts, state and local officials from all of the states and localities we interviewed told us that they would like additional influenza pandemic guidance from the federal government on specific topics to help them better plan and exercise for an influenza pandemic.
Although, as discussed earlier, there is federal guidance for some of these topics, the existing guidance may not have reached state and local officials or may not address the particular concerns or circumstances of the state and local officials we interviewed. Three of the areas on which state and local officials reported that they wanted federal influenza pandemic guidance were rated as having "many major gaps" nationally among states' influenza pandemic plans in the first HHS-led review of those plans. These areas were (1) implementing the community interventions, such as closing schools, discussed in the Interim Pre-pandemic Planning Guidance: Community Strategy for Pandemic Influenza Mitigation in the United States (which is called community containment in the federal priority topics), (2) fatality management, and (3) facilitating medical surge. Two other areas on which state and local officials told us they would like additional federal influenza pandemic guidance, mass vaccination and antiviral drug distribution, were also rated as having "a few major gaps" nationally. State and local officials also told us that they would like the federal government to provide guidance on additional topics: ethical decision making, prophylactic use of antivirals, Strategic National Stockpile utilization, liability and legal issues, and personal protective equipment. While officials from some state and local governments were looking for guidance from the federal government, others were developing the information on their own. For example, California Department of Health officials stated that they were developing standards and guidelines, since released, for health care professionals to use in any medical surge (including an influenza pandemic), while Peoria City/County Health Department (Illinois) officials told us that they wanted guidance on how to deal with medical surge.
In addition, the Texas Department of State Health Services developed an antiviral prioritization plan, while Illinois Department of Public Health officials said they would like the federal government to provide guidance on antiviral prioritization. Two recent reports found similar concerns among state and local officials. In its February 2008 issue brief, the National Governors Association reported that states were grappling with many of the same issues that we found: community containment (school closures), antiviral prioritization, prophylactic use of antivirals, and legal issues. Similarly, an October 2007 Kansas City Auditor’s Office report on influenza pandemic preparedness in the city noted that Kansas City Health Department officials would like the federal government to provide additional guidance on some of the same issues we found: clarifying community interventions such as school closings and the criteria that will trigger these measures, antiviral and vaccine prioritization, and the type of personal protective equipment to use (e.g., type of face mask). According to the National Pandemic Implementation Plan, it is essential for states and localities to have plans in place that support the full spectrum of societal needs over the course of an influenza pandemic and for the federal government to provide clear guidance on the manner in which these needs can be met. As discussed earlier, the HHS-led assessment of the states’ pandemic plans was in response to an action item in the National Pandemic Implementation Plan that states that HHS, in coordination with DHS, shall review and approve states’ influenza pandemic plans. The assessment found “many major gaps” in 16 of the 22 priority areas in the states’ pandemic plans. 
HHS and DHS, in coordination with the Homeland Security Council, the Office of Personnel Management, and the Departments of Agriculture, Commerce, Defense, Education, Homeland Security, Justice, Labor, State, Transportation, the Treasury, and Veterans Affairs, led a series of five workshops in January 2008 for states in the five influenza pandemic regions shown in figure 1. Prior to the meetings, HHS ASPR officials told us that the workshops would be an opportunity for states to request additional influenza pandemic guidance from the federal government. We observed two of the five workshops and received summaries from HHS of all five. The discussions at the workshops mainly focused on the draft guidance and evaluation criteria for the second round of assessing the state pandemic plans, but the participants also raised concerns and requested guidance. Common high-level themes discussed at these workshops included a need for more involvement from federal agencies in communicating with state counterparts. The March 2008 planning guidance included a list of contacts and phone numbers in federal agencies to help state officials communicate with their federal counterparts as they update their pandemic plans. Participants also requested guidance on various topics. Among the five workshops, state officials in three sought guidance on how to handle school closures and ports of entry issues, while state officials in two wanted to know how to plan with CDC quarantine stations. In addition, in three of the workshops, state officials discussed wanting more critical infrastructure information or guidance. For example, state officials noted that state health departments face challenges in working with the critical infrastructure sectors because the departments have no authority to influence the sectors' participation in influenza pandemic planning.
However, there was not an opportunity to explore these issues in greater depth during the meetings. A senior DHS official in the Office of Health Affairs reported that there are no plans to conduct further regional state workshops on influenza pandemic. HHS, DHS, and the Department of Labor hosted three Web seminars that provided an overview of the March 2008 planning guidance and included time for discussion. In addition, according to HHS, state-specific assistance has been provided through conference calls. Additional meetings of states by federal influenza pandemic region, led by HHS and DHS in coordination with other relevant federal agencies, could be held and their purpose broadened to provide a forum for state and federal officials to address the identified gaps in states' planning. The federal agencies that were the lead departments for rating priority areas in the states' influenza pandemic plans could provide additional corresponding information and guidance on their respective priority areas to the states on their common challenges. Federal agencies could provide assistance to the states on the priority areas that they rated as having "many major gaps" in planning nationally. For example, the Department of Justice could provide assistance on the coordination of law enforcement, the Department of Agriculture could provide assistance on the operational status of state-inspected slaughter and food processing establishments, and the Department of Education could provide assistance on the policy process for school closures and communication. With plans due in July 2008 for a second round of review, states' plans may still have major gaps that federal and state governments could address by working together. The meetings could also provide a forum for states to build networks with one another and with federal officials.
In our October 2007 report on critical infrastructure protection challenges that require federal and private sector coordination for an influenza pandemic, we found that DHS has used critical infrastructure coordinating councils primarily to share influenza pandemic information across sectors and government levels rather than to address many of the identified challenges. Thus, we recommended that DHS lead efforts to encourage the councils to consider and address the range of identified challenges, such as clarifying roles and responsibilities between federal and state governments, for a potential influenza pandemic. DHS concurred with our recommendation and is planning initiatives (some already underway) to address it, such as the development of pandemic contingency plan guidance tailored to each critical infrastructure sector. Similarly, during the National Governors Association's workshops, state officials reported that they would be interested in the influenza pandemic response activities initiated in neighboring states, but few, if any, mechanisms exist for states to gain regional situational awareness. According to the National Governors Association's report, the networks that do exist are informal communications among peers, which are built on personal relationships and are not integrated into any formal communications capacity or system. The National Governors Association also reported that states must coordinate their plans among state, local, and federal agencies and that this coordination should be tested through exercises with neighboring states and with relevant federal officials. In addition, the March 2008 planning guidance to help states update their plans notes that among the keys to successful preparation for an influenza pandemic are collaborating with other states to share promising practices and lessons learned and collaborating with regional PFOs.
Both of these collaborative relationships, with other states and with the federal government, could be facilitated by additional meetings and discussions within the framework of the federal pandemic regional structure. HHS is to complete distribution in 2008 of all the federal pandemic funds provided by Congress for states and localities, but HHS, DHS, and other federal agencies can continue to provide other types of support to states. Although all states have developed influenza pandemic plans, the HHS-led review of states' influenza pandemic plans, in coordination with other federal agencies, found "many major gaps" in planning nationally in 16 of 22 priority areas. While the federal government has provided influenza pandemic guidance on a variety of topics, state and local officials told us they would welcome additional guidance. These requests highlight some areas where federal guidance does not exist and other areas where guidance may exist but may not have reached state and local officials or may not have addressed their particular concerns. In addition, three of the topics on which state and local officials told us they wanted federal influenza pandemic guidance, community containment, fatality management, and facilitating medical surge, were rated as having "many major gaps" nationally among states' influenza pandemic plans in the first HHS-led review of those plans. Moreover, the National Governors Association's workshops and the March 2008 planning guidance underscore the value of states collaborating with each other and the federal government for pandemic planning. With plans due in July 2008 for a second round of review, states' plans may still have major gaps that can only be addressed by federal and state governments working together.
Although a senior DHS official in the Office of Health Affairs reported that there are no plans to hold additional workshops in the five pandemic regions, these workshops could be a useful model for sharing information across states, building relationships within regions, addressing the identified gaps in states' planning, and maintaining the momentum that HHS and DHS have already built in working with the states on pandemic preparedness, given the upcoming governmental transition. To help maintain a continuity of focus on state pandemic planning efforts and to further assist states in their pandemic planning, we recommend that the Secretaries of Health and Human Services and Homeland Security, in coordination with other federal agencies, convene additional meetings of the states in the five federal influenza pandemic regions to help them address identified gaps in their planning. We provided a draft of this report to the Secretaries of Health and Human Services and Homeland Security for their review and comment. HHS generally concurred with our recommendation in an e-mail. The department stated that additional regional workshops would be impractical in the short term because of HHS' current involvement in the update of the states' pandemic plans. However, the department believes that the regional workshops already held were uniformly successful and is prepared to arrange for similar sessions in the future if states would find such sessions useful. HHS also provided us with technical comments, which we incorporated as appropriate. DHS generally agreed with the contents of the report and concurred with our recommendation. DHS's comments are reprinted in appendix II. We also provided draft portions of the report to the state and local officials from the five states and 10 localities we reviewed to ensure technical accuracy. We received no comments from these states and localities.
As agreed with your offices, we plan no further distribution of this report until 30 days from its date, unless you publicly announce its contents earlier. At that time, we will send copies of this report to the Secretary of Health and Human Services, the Secretary of Homeland Security, and other interested parties. We will also make copies available to others upon request. In addition, this report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any further questions about this report, please contact me at (202) 512-6543 or steinhardtb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report include Sarah Veale, Assistant Director; Maya Chakko, Analyst-in-Charge; Susan Sato; Susan Ragland; Karin Fangman; David Dornisch; and members of GAO's Pandemic Working Group. The objectives of this study were to (1) describe how selected states and localities are planning for an influenza pandemic and how their efforts involve the federal government, other state and local agencies, tribal nations, nonprofit organizations, and the private sector; (2) describe the extent to which selected states and localities have conducted exercises to test their influenza pandemic planning and incorporated lessons learned into their planning; and (3) identify how the federal government can facilitate or help improve state and local efforts to plan and exercise for an influenza pandemic. To identify how selected states and localities are planning and exercising for an influenza pandemic and how the federal government can assist their efforts, from June 2007 to September 2007 we conducted site visits to the five most populous states: California, Florida, Illinois, New York, and Texas.
Recognizing that we would be limited in our ability to report on all states in detail, we selected these five states for a number of reasons: (1) the five states comprised over one-third of the United States population; (2) they received over one-third of the total funding from the Department of Health and Human Services (HHS) and the Department of Homeland Security (DHS) that could be used for planning or exercising for an influenza pandemic, and each state received the highest amount of such funding within its respective region among the five regions established by DHS for influenza pandemic preparedness and emergency response; and (3) they were likely entry points for individuals coming from another country, given that the states bordered either Mexico or Canada or contained major ports, or both, and accounted for over one-third of the total number of passengers traveling within the United States and over half of both inbound and outbound international air passenger traffic to and from the United States. The urban areas were selected based on having the highest population totals of all urban areas in the respective states as of July 2006 and high levels of international airport passenger traffic as of 2005. Three of these urban areas, Los Angeles County, Chicago, and New York City, also received federal pandemic funds. In addition, we asked the state officials to nominate a rural county for us to interview in their states based on the following criteria: (1) the county has conducted some planning or exercising for an influenza pandemic and (2) the county is representative of challenges and needs that other surrounding rural counties might also be facing. The state officials in each state nominated only one rural county.
We interviewed officials responsible for health and emergency management in the nominated counties of Stanislaus County (California), Taylor County (Florida), Peoria County (Illinois), Washington County (New York), and Angelina County (Texas). In total, we interviewed officials from 34 different agencies, which included, for each state, the health, emergency management, and homeland security agencies (except for Texas, which had a combined emergency management and homeland security agency) and the officials responsible for health and emergency management for each urban area and rural county in the five states. In both states and localities we also typically interviewed several officials from each of the agencies. In addition, in four states and four localities reviewed, we interviewed the state or local government agencies individually, and for the remainder we interviewed the state or local government agencies together. We interviewed both urban and rural counties in order to obtain the perspectives of officials in both densely populated urban areas and rural areas. We report the results of our interviewing as counts at the level of the 15 states and localities. In general, if any one of the officials we interviewed in a particular state or locality stated a factor or issue, such as lessons learned from exercises being applied to pandemic planning, then we considered that statement to apply to the state or locality as a whole. However, a limitation of our interview methodology is that we did not comprehensively or systematically survey all interviewees across the range of interview questions. We did not interview tribal nations, and except in two cases when urban areas included private and nonprofit officials in our interviews with their agency, we did not interview private sector entities or nonprofit organizations.
We focused on state and local government officials and asked these officials about their interaction with tribal nations, private sector entities, and nonprofit organizations. Finally, we interviewed the selected states' and urban areas' auditors on any current or planned related audits. While the states and localities selected provided a broad perspective, we cannot generalize or extrapolate the information gleaned from the site visits to the nation. In addition, since the states that we selected were large, the most populous states, and likely entry points for people coming into the United States, the information we collected may not be as relevant to smaller, less populated states that are not likely entry points for people coming into the United States. We also reviewed the influenza pandemic planning and exercise documents from the selected states and localities. We reviewed the state and local influenza pandemic plans for common topics; however, we did not systematically analyze the quality of the documents among those states and localities. Instead, we relied on the HHS-led assessment of whether states' influenza pandemic plans contained 22 priority areas. We reviewed the reliability of the data reported from that assessment and determined that the data were sufficiently reliable for the purposes of this engagement. We also reviewed the states' and localities' exercise documents for commonalities across jurisdictions. We also interviewed HHS, Centers for Disease Control and Prevention (CDC), and DHS officials about how they are working with states and localities in planning and exercising for an influenza pandemic and reviewed documentation that they provided, including the HHS-led feedback to states on their influenza pandemic plans and the March 2008 planning guidance to assist them in updating their influenza pandemic plans.
Within HHS, we met with or received information from the Deputy Director of the Office of Policy and Strategic Planning within the Office of the Assistant Secretary for Preparedness and Response; the Senior Advisor to the Director, Coordinating Office for Terrorism Preparedness and Emergency Response at CDC; the Regional Inspector General, Office of Inspector General; and their staff. Within DHS, we met with or received information from the Director and Associate Chief Medical Officer for Medical Readiness, Office of Health Affairs; the Branch Chief, National Integration Center, Federal Emergency Management Agency; the National Principal Federal Official for influenza pandemic, United States Coast Guard; the Program Director, Lessons Learned Information Sharing; the Deputy Inspector General, Office of Inspector General; and their staff. In January 2008, we observed two of the five influenza pandemic regional workshops led by HHS and DHS, in coordination with other federal agencies. The purpose of the workshops was to obtain state leaders' input on guidance to assist their governments in updating their pandemic plans in preparation for a second HHS-led review of these plans. In addition, we reviewed prior GAO work and other relevant literature. We also interviewed officials from the National Governors Association, the Association of State and Territorial Health Officials, the National Association of County and City Health Officials, and the National Emergency Management Association who are working on issues related to state and local influenza pandemic activities. We obtained information on state and local activities from the state and local auditors in Kansas City, Missouri; Portland, Oregon; and New York state, who, as members of the GAO Comptroller General's Domestic Working Group, all participated in a collaborative effort to assess influenza pandemic planning in their jurisdictions.
We conducted this performance audit from March 2007 to June 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Emergency Preparedness: States Are Planning for Medical Surge, but Could Benefit from Shared Guidance for Allocating Scarce Medical Resources. GAO-08-668. Washington, D.C.: June 13, 2008.
Influenza Pandemic: Efforts Under Way to Address Constraints on Using Antivirals and Vaccines to Forestall a Pandemic. GAO-08-92. Washington, D.C.: December 21, 2007.
Influenza Pandemic: Opportunities Exist to Address Critical Infrastructure Protection Challenges That Require Federal and Private Sector Coordination. GAO-08-36. Washington, D.C.: October 31, 2007.
Influenza Vaccine: Issues Related to Production, Distribution, and Public Health Messages. GAO-08-27. Washington, D.C.: October 31, 2007.
Influenza Pandemic: Further Efforts Are Needed to Ensure Clearer Federal Leadership Roles and an Effective National Strategy. GAO-07-781. Washington, D.C.: August 14, 2007.
Homeland Security: Observations on DHS and FEMA Efforts to Prepare for and Respond to Major and Catastrophic Disasters and Address Related Recommendations and Legislation. GAO-07-1142T. Washington, D.C.: July 31, 2007.
The Federal Workforce: Additional Steps Needed to Take Advantage of Federal Executive Boards' Ability to Contribute to Emergency Operations. GAO-07-515. Washington, D.C.: May 4, 2007.
Financial Market Preparedness: Significant Progress Has Been Made, but Pandemic Planning and Other Challenges Remain. GAO-07-399. Washington, D.C.: March 29, 2007.
Public Health and Hospital Emergency Preparedness Programs: Evolution of Performance Measurement Systems to Measure Progress. GAO-07-485R. Washington, D.C.: March 23, 2007.
Homeland Security: Preparing for and Responding to Disasters. GAO-07-395T. Washington, D.C.: March 9, 2007.
Catastrophic Disasters: Enhanced Leadership, Capabilities, and Accountability Controls Will Improve the Effectiveness of the Nation's Preparedness, Response, and Recovery System. GAO-06-618. Washington, D.C.: September 6, 2006.
Hurricane Katrina: GAO's Preliminary Observations Regarding Preparedness, Response, and Recovery. GAO-06-442T. Washington, D.C.: March 8, 2006.
Emergency Preparedness and Response: Some Issues and Challenges Associated with Major Emergency Incidents. GAO-06-467T. Washington, D.C.: February 23, 2006.
Statement by Comptroller General David M. Walker on GAO's Preliminary Observations Regarding Preparedness and Response to Hurricanes Katrina and Rita. GAO-06-365R. Washington, D.C.: February 1, 2006.
Influenza Pandemic: Applying Lessons Learned from the 2004-05 Influenza Vaccine Shortage. GAO-06-221T. Washington, D.C.: November 4, 2005.
Influenza Vaccine: Shortages in 2004-05 Season Underscore Need for Better Preparation. GAO-05-984. Washington, D.C.: September 30, 2005.
Influenza Pandemic: Challenges in Preparedness and Response. GAO-05-863T. Washington, D.C.: June 30, 2005.
Flu Vaccine: Recent Supply Shortages Underscore Ongoing Challenges. GAO-05-177T. Washington, D.C.: November 18, 2004.
Emerging Infectious Diseases: Review of State and Federal Disease Surveillance Efforts. GAO-04-877. Washington, D.C.: September 30, 2004.
Infectious Disease Preparedness: Federal Challenges in Responding to Influenza Outbreaks. GAO-04-1100T. Washington, D.C.: September 28, 2004.
Public Health Preparedness: Response Capacity Improving, but Much Remains to Be Accomplished. GAO-04-458T. Washington, D.C.: February 12, 2004.
Hospital Preparedness: Most Urban Hospitals Have Emergency Plans but Lack Certain Capacities for Bioterrorism Response. GAO-03-924. Washington, D.C.: August 6, 2003.
Infectious Disease Outbreaks: Bioterrorism Preparedness Efforts Have Improved Public Health Response Capacity, but Gaps Remain. GAO-03-654T. Washington, D.C.: April 9, 2003.
The Implementation Plan for the National Strategy for Pandemic Influenza states that in an influenza pandemic, the primary response will come from states and localities. To assist them with pandemic planning and exercising, Congress has provided $600 million to states and certain localities. The Department of Homeland Security (DHS) established five federal influenza pandemic regions to work with states to coordinate planning and response efforts. GAO was asked to (1) describe how selected states and localities are planning for an influenza pandemic and whom they involved, (2) describe the extent to which selected states and localities conducted exercises to test their influenza pandemic planning and incorporated lessons learned as a result, and (3) identify how the federal government can facilitate or help improve state and local efforts to plan and exercise for an influenza pandemic. GAO conducted site visits to five states and 10 localities. All of the five states and 10 localities reviewed by GAO had developed influenza pandemic plans. In fact, according to officials at the Centers for Disease Control and Prevention (CDC), which administers the federal pandemic funds, all 50 states have developed an influenza pandemic plan, in accordance with federal pandemic funding requirements. At the time of GAO's site visits, officials from the selected states and localities said that they involved the federal government, other state and local agencies, tribal nations, and nonprofit and private sector organizations in their influenza pandemic planning. Since GAO's site visits, the Department of Health and Human Services (HHS) has provided feedback to the states, territories, and the District of Columbia (hereafter referred to as states) on whether their plans addressed 22 priority areas, such as the policy process for school closure and communication. On average, the department found that states' plans had "many major gaps" in 16 of the 22 priority areas.
In March 2008, HHS, DHS, and other federal agencies issued guidance to states to help them update their pandemic plans, which are due by July 2008, in preparation for another HHS-led review. According to CDC officials, all states and localities that received the federal pandemic funds have met the requirement to conduct an exercise to test their plans. Officials from all of the states and localities reviewed by GAO reported that they had incorporated lessons learned from influenza pandemic exercises into their influenza pandemic planning, such as buying additional medical equipment, providing training, and modifying influenza pandemic plans. For example, as a result of an exercise, officials at the Dallas County Department of Health and Human Services (Texas) reported that they developed an appendix to their influenza pandemic plan on school closures during a pandemic. The federal government has provided influenza pandemic guidance on a variety of topics, including an influenza pandemic planning checklist for states and localities and draft guidance on allocating an influenza pandemic vaccine. However, officials of the states and localities reviewed by GAO told GAO that they would welcome additional federal guidance in a number of areas, such as community containment (community-level interventions designed to reduce the transmission of a pandemic virus), to help them better plan and exercise for an influenza pandemic. Three of these areas were also identified as having "many major gaps" in states' plans nationally in the HHS-led review. In January 2008, HHS and DHS, in coordination with other federal agencies, hosted a series of meetings of states in the five federal influenza pandemic regions to discuss the draft guidance on updating their pandemic plans.
Although a senior DHS official reported that there are no plans to conduct further workshops, additional regional meetings could provide a forum for state and federal officials to address gaps in states' planning identified by the HHS-led review and to maintain the momentum of states' pandemic preparedness through this next governmental transition.
Man-made perchlorate is primarily produced as ammonium perchlorate for use as an oxidizer in solid rocket fuels, fireworks, explosives, and road flares. Perchlorate can also be present as an ingredient in, or an impurity of, such products and processes as matches, lubricating oils, aluminum refining, rubber manufacturing, paint and enamel manufacturing, and leather tanning, and as an ingredient in bleaching powder used for paper and pulp processing. Further, perchlorate can develop as a by-product of sodium hypochlorite (i.e., bleach) solutions used as disinfectant in water and wastewater treatment plants when these solutions are stored for a long period of time. Naturally occurring perchlorate is produced through atmospheric processes and then settles on surface water or land as precipitation or dry deposits. Perchlorate also exists as a natural impurity in nitrate salts from Chile, which are imported and used to produce nitrate fertilizers and other products. EPA has the authority to regulate contaminants, such as perchlorate, in public drinking water systems. Under the Safe Drinking Water Act, as amended, when EPA decides to regulate a contaminant, its determination must be based on findings that (1) the contaminant may have an adverse health effect, (2) the contaminant is known to occur or there is substantial likelihood that the contaminant will occur in public water systems with a frequency and at levels of public health concern, and (3) in the sole judgment of the Administrator, regulation of the contaminant presents a meaningful opportunity for reducing health risks for persons served by public water systems. Perchlorate was initially identified by EPA as a potential contaminant in 1985, when it was found in wells at hazardous waste sites in California.
In 1992, EPA issued a provisional reference dose for perchlorate equivalent to a concentration of 4 parts per billion in drinking water and, in 1995, issued a revised provisional reference dose with a drinking water equivalent ranging from 4 to 18 parts per billion. These reference doses were considered provisional by EPA because they had not undergone internal or external peer review. However, EPA and state regulators could use them to establish guidance levels for cleaning up contaminated groundwater. A more sensitive perchlorate detection method became available in 1997, and more states began detecting perchlorate in drinking water, groundwater, and surface water. In 1998, EPA published its first draft assessment of perchlorate exposure health risks and placed perchlorate on its Contaminant Candidate List—a list of contaminants that may require regulation under the Safe Drinking Water Act. In 1999, under Unregulated Contaminant Monitoring Rule 1 (UCMR 1), EPA required all public drinking water systems serving more than 10,000 people and 800 representative public water systems serving 10,000 or fewer people to monitor their drinking water systems for perchlorate over a 12-month period and to report the results. Also, in 1999, an external panel of independent scientists reviewed EPA’s draft risk assessment and recommended additional studies and analyses to provide more data on perchlorate and its health effects. DOD and industry researchers conducted such studies and submitted them to EPA. Based on an analysis of these studies, EPA revised its draft perchlorate risk assessment and released it for peer review and public comment in January 2002. The revised draft risk assessment included a proposed reference dose equivalent to a concentration of 1 part per billion in drinking water. DOD, industry, and some members of the scientific community disagreed with EPA’s draft risk assessment and its conclusions, including the proposed reference dose. 
The scientific controversy involved, among other things, the adequacy and relevance of available human data for assessing health risks, the quality and validity of some animal data, the definition of adverse health effect, and the application of uncertainty factors. After a second peer review, and in light of the criticisms from some scientists surrounding the concentration at which perchlorate presents a human health risk, DOD, NASA, DOE, and EPA asked the National Academy of Sciences, in 2003, to review the available science and EPA’s draft health risk assessment. In January 2005, the Academy’s National Research Council (NRC) recommended a reference dose for perchlorate exposure of 0.0007 milligrams per kilogram of body weight per day. EPA calculated the drinking water equivalent of this dose to be 24.5 parts per billion. EPA adopted the reference dose and, in January 2006, directed its regional offices to use 24.5 parts per billion as a preliminary remediation goal when assessing sites for cleanup under the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) of 1980 and the National Oil and Hazardous Substances Pollution Contingency Plan, the regulation that implements CERCLA. In October 2008, EPA issued a preliminary determination not to regulate perchlorate and requested public comment on its findings that perchlorate occurs infrequently at levels of health concern in public water systems and that there was not a “meaningful opportunity for health risk reduction” through a national drinking water regulation. In response to stakeholder comments that provided additional scientific evaluation of the information EPA used to make its preliminary determination, EPA announced, in January 2009, that it planned to seek additional input from NRC on assumptions regarding the possible effects of perchlorate on infants and young children. 
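EPA's translation of the NRC reference dose into a drinking water concentration can be reproduced with the standard dose-to-water calculation. The sketch below assumes EPA's common defaults of a 70-kilogram adult body weight and 2 liters of drinking water per day; these inputs are an assumption for illustration, as the report does not spell out the exact values EPA used.

```python
# Convert a reference dose (mg per kg body weight per day) into a
# drinking water equivalent level in micrograms per liter, which for
# water is the same as parts per billion (ppb).
# Assumed defaults: 70 kg adult, 2 L/day water intake (not stated in the report).

def dwel_ppb(reference_dose, body_weight_kg=70.0, water_intake_l=2.0):
    mg_per_liter = reference_dose * body_weight_kg / water_intake_l
    return mg_per_liter * 1000.0  # mg/L -> ug/L (ppb)

# NRC-recommended reference dose of 0.0007 mg/kg of body weight per day
print(round(dwel_ppb(0.0007), 1))  # -> 24.5, matching EPA's figure
```

With these defaults the arithmetic lands exactly on the 24.5 parts per billion that EPA adopted as its preliminary remediation goal in January 2006.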
Around the same time, EPA’s Office of Water published an interim health advisory for perchlorate that includes a health advisory level of 15 parts per billion. This interim health advisory level takes into account exposure from food, as well as drinking water, for pregnant women and their fetuses (the most sensitive life stage identified by NRC). The advisory provides informal technical guidance to assist state and local officials in protecting public health where perchlorate contamination of drinking water has occurred, while EPA evaluates the opportunity to reduce risks through a national drinking water standard. Following the establishment of the interim health advisory, EPA’s Office of Solid Waste and Emergency Response withdrew its preliminary remediation goal for perchlorate of 24.5 parts per billion. In its place, EPA recommended the interim health advisory level of 15 parts per billion be used as the preliminary remediation goal when assessing sites for cleanup under CERCLA. In August 2009, EPA published a notice that it would not seek additional input from NRC and instead was seeking public comment on additional approaches for interpreting the available data on the level of health concern, the frequency of occurrence of perchlorate in drinking water, and the opportunity for health risk reduction through a national drinking water standard. In April 2010, EPA’s Office of Inspector General released a report that reviewed and critiqued the risk assessment process and procedures used by EPA to develop and derive the perchlorate reference dose. As of July 2010, EPA had not yet made a final decision whether to establish a regulatory standard for perchlorate in drinking water. Several federal laws impose requirements on federal agencies related to monitoring, reporting, and cleanup of hazardous substances, pollutants, and contaminants such as perchlorate. 
CERCLA, as amended, better known as Superfund, requires responsible federal agencies to identify and assess releases of hazardous substances such as perchlorate and to follow CERCLA requirements in their cleanup, among other things. The CERCLA process typically follows a series of steps, which may include investigations, human health risk assessments and ecological risk assessments, evaluation and selection of cleanup approaches, and implementation of the cleanup, known as a remedial action. CERCLA itself does not establish cleanup standards. Rather, the remedial action chosen by a federal agency must meet applicable or relevant and appropriate requirements based on standards for contaminants set under state or federal laws or regulations and in consideration of other guidance. If there is no such requirement for a given contaminant, the agency must still achieve a degree of cleanup, which, at a minimum, assures protection of human health and the environment. Both existing and potential sources of drinking water are generally to be considered in assessing risk and in selecting a remedy. In general, EPA is the lead regulator for all sites on EPA’s list of some of the most contaminated sites in the country—the National Priorities List— which are commonly referred to as Superfund sites. State environmental agencies may be the lead regulator at other sites. Executive Order 12580 delegated certain CERCLA response authorities to federal agencies. In particular, DOD and DOE each have lead response agency authority for properties under their respective jurisdictions, which they are to exercise consistent with CERCLA section 120 governing federal facilities. The Superfund Amendments and Reauthorization Act established the Defense Environmental Restoration Program in 1986 and directs DOD to clean up releases of hazardous substances, such as perchlorate, at active DOD installations and formerly used defense sites in accordance with CERCLA. 
The Resource Conservation and Recovery Act (RCRA), as amended, requires federal agencies generating, treating, or disposing of hazardous wastes, including hazardous wastes containing perchlorate, to obtain permits and/or to comply with regulations applicable to the management of such wastes. Pursuant to its responsibilities under the Safe Drinking Water Act, in 1999, EPA promulgated the UCMR 1, which required entities, including federal agencies, operating large and selected small public water supplies to monitor their drinking water systems for perchlorate and other contaminants over a 1-year period and to report the results. The Clean Water Act requires federal agencies discharging pollutants into surface waters—such as from a wastewater treatment facility—to obtain a National Pollutant Discharge Elimination System permit from EPA and comply with its discharge limitations. Pursuant to RCRA and the Safe Drinking Water Act, EPA can issue perchlorate abatement orders to federal facilities where there is an imminent and substantial endangerment to health and other conditions are met. Since 2002, DOD has issued a series of perchlorate policies. Most recently, in April 2009, DOD issued a policy on perchlorate release management that directs the military services to, among other things, address perchlorate in the same manner that the services address other contaminants of concern. The policy adopts EPA’s preliminary remediation goal for perchlorate of 15 parts per billion in water where (1) there is an actual or potential drinking water exposure pathway and (2) no legally applicable or relevant and appropriate requirements exist under federal or state laws. NASA and DOE have issued no policies that focus exclusively on perchlorate, according to agency officials. The full extent of perchlorate occurrence is unknown because there is no national system to track detections. 
However, perchlorate has been found at varying levels across the nation in water and the food supply and is known to come from a variety of sources. While concentrations of perchlorate at or above 100 parts per billion in the environment are generally the result of defense-related or manufacturing activities, the sources of concentrations below that level can be difficult to determine. In 2005, we recommended that EPA establish a formal structure to centrally track and monitor perchlorate detections. EPA officials disagreed with our recommendation, saying that the agency already had sufficient information on perchlorate concentrations in various environmental media that indicated the extent of contamination nationally and that implementing a tracking system would require additional resources. However, as our report noted, without a formal system to track and monitor perchlorate findings and cleanup activities, EPA and the states do not have the most current and complete accounting of perchlorate as an emerging contaminant of concern, including the extent of perchlorate found and the extent or effectiveness of cleanup projects. Although there has been no recent nationwide sampling for perchlorate, sampling under EPA's UCMR 1, which occurred between 2001 and 2005, detected perchlorate at or above 4 parts per billion in at least one sample in approximately 4.1 percent of the public drinking water systems tested. According to EPA data, perchlorate was reported in 160 of 3,865 public drinking water systems, with detections ranging from 4 to 420 parts per billion. Thirty-one of the 160 systems, or about a fifth, had detections above 15 parts per billion—EPA's current interim drinking water health advisory level.
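The UCMR 1 occurrence percentages follow directly from the counts quoted above; a quick arithmetic check using only the report's figures:

```python
# UCMR 1 counts as reported: 160 of 3,865 public water systems had at
# least one detection at or above 4 ppb, and 31 of those 160 had
# detections above the 15 ppb interim health advisory level.
systems_tested = 3865
systems_with_detections = 160
systems_above_15_ppb = 31

detection_rate_pct = 100 * systems_with_detections / systems_tested
share_above_advisory = systems_above_15_ppb / systems_with_detections

print(round(detection_rate_pct, 1))    # 4.1 percent of systems tested
print(round(share_above_advisory, 2))  # 0.19, i.e., about a fifth
```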
Figure 1 shows the number of public water systems with perchlorate detections and the maximum concentration detected, according to EPA's data. EPA and U.S. Geological Survey officials and other researchers told us that technology is now available to detect perchlorate at levels below 1 part per billion, while the analytical method used under UCMR 1 had a minimum detection level of 4 parts per billion. Sampling conducted at various times by federal agencies, including DOD, NASA, DOE, and EPA, has detected perchlorate in drinking water, groundwater, surface water, soil, and sediment. Specifically, DOD reported perchlorate detections at 284 of its installations, or almost 70 percent of the 407 installations sampled from fiscal years 1997 through 2009, with detections ranging from less than 1 part per billion to 2.6 million parts per billion. Maximum detections, in parts per billion, were 30 in drinking water, 230 in sediment, 6,600 in surface water, 786,000 in soil, and 2,600,000 in groundwater. Fifty-three of the 284 installations, or about 20 percent, reported perchlorate concentrations above 15 parts per billion, DOD's current screening threshold for initiating additional site investigation when perchlorate is detected in water. According to DOD, the agency generally uses perchlorate in munitions and missiles, and its releases of perchlorate occurred primarily at maintenance facilities, rocket testing sites, and waste disposal areas. NASA found perchlorate at four of the seven facilities where it sampled for the chemical from fiscal years 1997 through 2009. According to NASA, the agency began to look for perchlorate at its facilities across the country after a more sensitive method of perchlorate detection became available in the late 1990s and in response to requests from federal and state regulators. NASA reported the highest detection of 13,300 parts per billion in groundwater in 2002 at the Jet Propulsion Laboratory in California.
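The DOD sampling shares described above can be verified the same way from the installation counts given in the report:

```python
# DOD counts as reported: 407 installations sampled, 284 with
# perchlorate detections, 53 with detections above 15 ppb.
installations_sampled = 407
installations_with_detections = 284
installations_above_15_ppb = 53

pct_with_detections = 100 * installations_with_detections / installations_sampled
pct_above_threshold = 100 * installations_above_15_ppb / installations_with_detections

print(round(pct_with_detections, 1))  # 69.8, i.e., "almost 70 percent"
print(round(pct_above_threshold, 1))  # 18.7, i.e., "about 20 percent"
```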
At the Marshall Space Flight Center in Alabama, perchlorate detections from 2000 through 2008 fell at or below 4.4 parts per billion in groundwater. According to NASA, at the Stennis Space Center in Mississippi, in 2003, the agency detected perchlorate concentrations ranging from 3.7 to 12,639 parts per billion in groundwater. At the White Sands Test Facility in New Mexico, perchlorate detections from 2006 through 2009 fell at or below 2.6 parts per billion in groundwater. At the Jet Propulsion Laboratory, NASA attributed perchlorate contamination to the disposal of perchlorate waste in underground pits during the 1940s and 1950s. According to NASA, perchlorate contamination at Stennis is associated with munitions testing. DOE detected perchlorate at the five facilities where it sampled for the chemical in fiscal years 1998 through 2009—Lawrence Livermore National Laboratory Site 300 in California, Los Alamos National Laboratory in New Mexico, the Pantex Plant in Texas, Sandia National Laboratories in New Mexico, and the Energy Technology Engineering Center at the Santa Susana Field Laboratory in California. Detections occurred in groundwater or soil and ranged from less than 1 part per billion to 3,090 parts per billion. DOE reported the highest concentrations (3,090 parts per billion) in perched groundwater at the Pantex Plant. According to DOE, perchlorate contamination resulted from historical waste management practices and testing of high explosives. As of June 2010, EPA reported perchlorate detections at 40 sites on the National Priorities List. In addition to 25 sites maintained by DOD, NASA, DOE, and the U.S. Department of the Interior, there were 15 private sites. At private sites, the highest perchlorate levels ranged from 13 to 682,000 parts per billion in groundwater. See appendix II for a list of National Priorities List sites where perchlorate has been identified as a contaminant of concern.
Overall, considering detections reported by EPA and DOD, as shown in figure 2, perchlorate has been detected in 45 states, the District of Columbia, and three U.S. territories. Two states, California and Massachusetts, mandate that public water systems sample for perchlorate to ensure that public drinking water supplies in their states comply with state drinking water standards (6 parts per billion in California and 2 parts per billion in Massachusetts). Although initial testing of drinking water systems found some levels of perchlorate contamination, testing undertaken in fiscal year 2009 found no drinking water systems that violated the standard in either state, according to state officials. According to state officials, California also tracks perchlorate in groundwater because 40 percent of the state's drinking water supply comes from groundwater. California officials told us that perchlorate occurrence is widespread in the state, with Southern California having more detections at higher levels in groundwater than other parts of the state. According to California officials, this perchlorate came from a variety of sources including defense activities and Chilean fertilizer. In Massachusetts, perchlorate levels at or above 2 parts per billion have been found in only a few locations in groundwater and in one surface water supply, according to state officials. However, perchlorate has been detected in many other groundwater supplies at levels below 2 parts per billion. Additionally, research conducted in Arizona and northwest Texas detected relatively low levels of perchlorate. In a 2004 report, the Arizona Department of Environmental Quality, among others, assessed the extent of perchlorate occurrence in the state's water sources, including the Colorado River, which is known to be contaminated with perchlorate from a chemical plant near Henderson, Nevada.
The study found that, while perchlorate is present in certain areas of the state, the concentrations in bodies of water not associated with industrial sites were generally at levels well below 14 parts per billion, which was Arizona's health-based guidance level for perchlorate at the time. Also in 2004, Texas Tech University reported on the source and distribution of perchlorate in northwest Texas groundwater. The study found widespread perchlorate occurrences at very low concentrations and concluded that they were likely the result of natural processes and not caused by human activities. From 2005 to 2007, the U.S. Geological Survey published several studies in collaboration with other researchers investigating naturally occurring perchlorate in groundwater, surface water, and soils in the United States. In addition, a 2009 U.S. Geological Survey study found perchlorate from Chilean fertilizer in Long Island, New York, and concluded that other areas in the United States that used Chilean fertilizer in the late nineteenth century through the twentieth century may also contain perchlorate. In addition to the key studies cited above, smaller-scale studies have also been conducted. In addition to finding perchlorate in water and soil, Food and Drug Administration (FDA) and other researchers have found perchlorate in a variety of foods. Existing research suggests several ways that perchlorate may enter the food supply, such as the use of perchlorate-contaminated water in agriculture. The most comprehensive study of perchlorate in food—FDA's 2006 Total Diet Study—found perchlorate in 74 percent of the 285 food items tested across the country. These food items represent the major components of the American diet, such as dairy, meat, fruits, and vegetables. Certain foods, such as tomatoes and spinach, had higher perchlorate levels than others.
Using the analytical results for the food samples collected, FDA researchers calculated and reported the estimated average perchlorate intake from food for the total U.S. population and 14 age and gender subgroups. Estimated average perchlorate intake from each food item varied by age and gender, but the average total consumption of perchlorate for all groups was below the 2005 NRC-recommended reference dose for perchlorate exposure of 0.0007 milligrams per kilogram of body weight per day. The highest level of average perchlorate consumption was reported for children 2 years of age, with an estimated consumption ranging from 0.00035 to 0.00039 milligrams per kilogram of body weight per day. According to the study, the average level of perchlorate consumption for these children was higher because they consume more food relative to their body weight, and they have different food consumption patterns—with over half of their perchlorate intake coming from dairy foods. According to an FDA official, in 2008, FDA conducted another round of Total Diet Study sampling and is in the process of compiling the data, though the FDA official we spoke with does not expect results to be published until later in 2010 or 2011. Other studies and researchers have found that certain foods are more likely than others to contain perchlorate. For example, a 2009 study by researchers at the Centers for Disease Control and Prevention found perchlorate in all types of powdered infant formula, with higher concentrations in milk-based formula. Similarly, a 2008 study on foods produced in the lower Colorado River region reported perchlorate in milk and various fruits and vegetables, including lettuce, but researchers concluded that few individuals would be exposed to perchlorate levels exceeding EPA's reference dose. According to researchers we contacted, only one study has attempted to quantify the contribution of various sources of perchlorate to the food supply.
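Comparing FDA's estimated intake range for 2-year-olds against the NRC reference dose shows how much margin remains for even the highest-consuming subgroup; the figures below are those quoted above.

```python
# NRC-recommended reference dose and FDA's Total Diet Study estimate
# for children 2 years of age, both in mg per kg body weight per day.
nrc_reference_dose = 0.0007
intake_low, intake_high = 0.00035, 0.00039

# Estimated intake for the highest-consuming subgroup as a fraction
# of the reference dose: roughly half.
print(round(intake_low / nrc_reference_dose, 2))   # 0.5
print(round(intake_high / nrc_reference_dose, 2))  # 0.56
```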
A 2006 study concluded that Chilean fertilizer and man-made perchlorate are the main and comparable contributors to the perchlorate found in the food supply, while naturally occurring perchlorate is a lesser source. Finally, researchers we spoke with said that more studies are needed to better understand the extent to which perchlorate exists in the food supply. According to the perchlorate researchers we spoke with, concentrations of perchlorate at or above 100 parts per billion are generally the result of activities involving man-made perchlorate, such as the use of perchlorate in manufacturing or as a solid rocket propellant. Researchers we contacted told us that perchlorate detected at levels above 100 parts per billion is generally man-made and is limited to a specific area. Further, EPA, DOD, California, and Massachusetts officials told us they have generally been able to determine the likely sources of localized high concentrations of perchlorate, such as those detected at certain Superfund sites. Concentrations of perchlorate below 100 parts per billion can result from the use of man-made perchlorate, natural processes, or the use of fertilizer containing naturally occurring perchlorate. Researchers we spoke with said that naturally occurring perchlorate formed atmospherically is typically found in water or soil at 1 part per billion or less, while perchlorate found in water or soil due to Chilean fertilizer can vary in concentration but is generally not found at levels greater than 30 parts per billion. Levels of perchlorate below 100 parts per billion can also be attributed to various activities, including localized uses, such as fireworks and road flares, which release perchlorate that is typically diluted over a short time period, researchers said.
The sources of concentrations of perchlorate below 100 parts per billion found around the country are often difficult to determine when there are no records of historic use or when there is more than one potential source. According to researchers we spoke with, current technology can often differentiate between man-made and naturally occurring perchlorate, but it cannot yet differentiate among different sources of man-made perchlorate. DOD has funded the development of this technology, which identifies the isotopic signature or fingerprint of a perchlorate sample and compares the signature with known sources of perchlorate. According to researchers we contacted, because man-made perchlorate and naturally occurring perchlorate have different isotopic signatures, researchers can distinguish between them. However, the technology is not widely used to identify sources of perchlorate because it is expensive, and there is no EPA- or state-certified identification method available. Therefore, federal and state officials told us that they rely mainly on historical records to identify sources of perchlorate. For example, officials identify sites where they believe perchlorate was used and gather site-specific documentation to ascertain perchlorate sources. In the case of CERCLA sites, EPA officials said that they do not focus on identifying perchlorate sources. Rather, they attempt to identify the potentially responsible party for responding to the contamination, such as current or former owners and operators of a site. CERCLA explicitly identifies four types of parties that can be held responsible, including (1) owners or operators of a site; (2) former owners or operators of the site at the time hazardous substances were disposed of; (3) those who arranged for disposal or treatment of hazardous substances (often called generators); and (4) transporters of hazardous waste. 
According to EPA, the agency identifies responsible parties by, among other actions, reviewing documentation related to the site; conducting interviews with government officials or other knowledgeable parties; performing historical research on the site, such as searching for previous owners of the property; sampling soil or groundwater at the site; and requesting additional information from relevant parties. DOD, NASA, and DOE have sampled for perchlorate at a number of their facilities and have begun cleanup actions at some sites. According to DOD, DOE, and NASA officials, by complying with current federal and state waste disposal laws and regulations, they have lessened perchlorate releases. Further, DOD and DOE have taken additional actions to lessen perchlorate releases such as DOD’s development of perchlorate substitutes. DOD officials told us that the military services are to sample for perchlorate at their installations wherever there is a release or suspected release and follow the same CERCLA procedures as for other contaminants. In general, to determine whether to sample for perchlorate at an installation, DOD installations rely on historical records and knowledge of perchlorate use, DOD officials said. According to our analysis of DOD data from fiscal year 1997 through fiscal year 2009, DOD sampled for perchlorate at 407 installations. Of the 361 installations that reported not sampling, the primary reason cited for not sampling was that there was no history, record, or indication of perchlorate use, according to our analysis of DOD data. In addition, beginning in 2005, DOD began requiring the military services to identify and evaluate the extent to which the use of military munitions on operational ranges has resulted in the potential for munitions constituents, including perchlorate, to migrate off- range and create unacceptable risk to human health and the environment. 
In 2004, DOD collaborated with the state of California and finalized a procedure for prioritizing perchlorate sampling at DOD facilities in California, known as the California Prioritization Protocol. Through this procedure, DOD and California screened 924 DOD sites that had the potential for perchlorate releases and concluded that the majority of potential perchlorate releases associated with DOD sites had already been identified through existing environmental programs and were being addressed. Additionally, DOD and California officials agreed that, based on the results of the prioritization, the current regulatory standards for perchlorate, sampling results to date, and actions taken by DOD to manage new releases and remediate known perchlorate releases, DOD's installations and formerly used defense sites do not appear to be significantly impacting California public drinking water wells. According to DOD's current perchlorate policy, when detections in water equal or exceed an identified threshold level—currently EPA's health advisory level of 15 parts per billion or a stricter state standard if identified by DOD—DOD is to conduct further investigations to determine whether additional action is warranted. Decisions as to whether to take further action are generally made at the military service's installation level. According to Army, Air Force, and Navy officials, the actions taken at installations may include conducting additional sampling, identifying the contaminated media, characterizing the extent of contamination, and adding perchlorate to the installation's list of contaminants of concern. Our analysis of data from DOD's perchlorate database showed that military service officials had decided to take action beyond initial sampling at 48 of the 53 installations with perchlorate detections above 15 parts per billion. (See app. III.)
Redstone Arsenal in Alabama and Edwards Air Force Base in California illustrate some of the actions taken by the Army and the Air Force beyond sampling to address perchlorate. Redstone Arsenal. In 2000, the Army found perchlorate in groundwater and soil at sites associated with rocket motor production. Between 2005 and 2009, the Army conducted an investigation of groundwater to characterize the nature of the contamination and examined potential treatment options, including ion exchange. According to DOD officials, the Army has identified and planned a number of actions to remove contaminated soils that serve as an on-going source of perchlorate to groundwater. The Army is drafting a memorandum of understanding with the city of Huntsville whereby the city will consult with the Army before approving any well installation requests for areas with the potential for perchlorate contamination. However, according to EPA officials, because DOD has not signed an interagency agreement for Redstone, EPA has no legal mechanism to ensure that the Army formally coordinates with adjacent government entities to limit exposure to off-site wells that may be contaminated. Finally, according to DOD officials, the Army is in the process of obtaining regulatory approval from EPA for further site investigation on some perchlorate contaminated areas, which could determine the need for and feasibility of remedial action. Edwards Air Force Base. In 1997 and 1998, the Air Force found perchlorate in groundwater at two locations associated with solid rocket propellant testing, including the North Base and the Air Force Research Laboratory. The Air Force attributes contamination at North Base to past NASA Jet Propulsion Laboratory activities at the site. However, as site owner, the Air Force has taken responsibility for responding to the release. According to Air Force officials, at North Base, the groundwater plume has stayed on the base and has not contaminated drinking water supplies. 
In 2003, the Air Force began operating an ion exchange system to treat perchlorate in groundwater. By 2009, the Air Force had reduced the level detected from 30,700 to 3,700 parts per billion. The Air Force also removed 50 pounds of perchlorate from the soil and reduced the level detected from 110,000 to 300 parts per billion in 2007. At the Air Force Research Laboratory sites, according to Air Force officials, the Air Force found it impractical to take remedial action because the perchlorate- contaminated groundwater was trapped in bedrock from 20 to over 200 feet below the earth’s surface and would be extremely costly to remove. Furthermore, according to Air Force officials, it would take over 1,000 years to remediate perchlorate at the sites. EPA officials we spoke with agreed that no solution existed to clean up this perchlorate. According to Air Force officials, EPA and state regulators have agreed with the Air Force’s decision not to clean up the sites. In addition, to treat perchlorate in soil, the Air Force has removed 10 cubic yards of contaminated soil and rock at one research laboratory site and has contracted for the removal of an additional 40 cubic yards of contaminated soil. EPA and state regulatory officials told us that the actions DOD takes to respond to perchlorate contamination vary, depending on the military service, installation, and personnel involved. For example, EPA officials told us that staff at Edwards Air Force Base proactively took steps to address perchlorate contamination at the base. According to Air Force officials, personnel at Edwards began investigating perchlorate occurrence in 1997. At the time, DOD had no perchlorate policy. In addition, according to EPA officials, DOD had not approved funding to treat perchlorate at Edwards, so personnel at Edwards convinced DOD to fund research on perchlorate treatment technologies at Edwards that were eventually used to remediate perchlorate at the base. 
In contrast, according to a New Mexico state official, for several years, the Air Force had not taken steps to remediate perchlorate at Kirtland Air Force Base despite requirements to do so under state law implementing RCRA. According to DOD officials, there is disagreement over whether further actions at Kirtland should be conducted under CERCLA pursuant to DOD’s perchlorate policy or under the state’s RCRA authority. According to state and DOD officials, the Air Force submitted a site investigation work plan in 2010 to address perchlorate releases, and Air Force officials told us that they have begun investigating the site. In addition to sampling for and, in some cases, cleaning up perchlorate, DOD has provided funding for research and development of perchlorate treatment technologies. This work, among other things, is funded mainly through two programs—the Strategic Environmental Research and Development Program and the Environmental Security Technology Certification Program. From fiscal years 1998 through 2009, DOD spent at least $84 million researching and developing perchlorate treatment technologies, according to a DOD official. According to DOD, the development and use of innovative environmental technologies support the long-term sustainability of DOD’s training and testing ranges, as well as significantly reduce current and future environmental liabilities. The programs help DOD identify better ways to treat contaminants, including perchlorate, a DOD official said. For example, several DOD installations with perchlorate detections obtained funds for pilot treatment projects from DOD and used the systems they developed to clean up perchlorate. According to NASA officials, the agency has detected perchlorate at four of the seven facilities where sampling occurred based on the historical use of perchlorate. 
NASA has undertaken a major perchlorate cleanup effort at one facility—the Jet Propulsion Laboratory in Pasadena, California, where NASA detected a groundwater plume that had contaminated local drinking water supplies. To respond to the release, NASA took several actions. To clean up perchlorate in groundwater at the Jet Propulsion Laboratory, NASA installed a biological fluidized bed reactor—a system that uses bacteria to treat perchlorate. To clean up perchlorate in groundwater in Altadena, California, a neighboring community, NASA installed an ion exchange system, which began operating in 2004. In addition, NASA is currently working with the city of Pasadena to construct a groundwater treatment system. According to NASA officials, all the groundwater treatment systems will need to operate for at least 18 years to clean up the perchlorate plume and, as of 2009, the systems had been operational for 5 years. As of 2010, perchlorate groundwater detections are about 150 parts per billion in the source area of contamination, compared with 13,300 parts per billion detected in 2002, according to NASA officials. NASA is monitoring perchlorate at the other three facilities where it has found perchlorate in groundwater—the Marshall Space Flight Center in Alabama, the Stennis Space Center in Mississippi, and the White Sands Test Facility in New Mexico. From 2003 to 2008, perchlorate detections at Marshall ranged up to 4.4 parts per billion at the monitoring well with the highest detections. NASA is determining what actions may be needed at Stennis, where perchlorate detections ranged up to 40,700 parts per billion at the monitoring well with the highest detections in 2005. According to NASA officials, perchlorate contamination at Stennis is associated with past DOD activities, such as munitions tests conducted more than 30 years ago. Both NASA and DOD officials told us that they are currently discussing the agency responsibilities for responding to perchlorate releases. 
According to a NASA official, the agency is monitoring perchlorate at White Sands as directed by the state of New Mexico and generally detections fall below 1 part per billion. In addition to monitoring at Marshall, Stennis, and White Sands, NASA officials said, for the past 25 years, the agency has conducted environmental monitoring after space launches at the Kennedy Space Center in Florida, but it has detected no perchlorate. Finally, according to DOE officials, the agency has sampled and detected perchlorate at all five facilities where there was a potential for contamination based on the use of the chemical in high explosives research, development, and testing. DOE has taken a variety of actions at these five facilities. At the Pantex Plant in Texas, in 1999, DOE detected perchlorate at 408 parts per billion in perched groundwater that sits above the regional drinking water aquifer and, in 2007, after installing additional monitoring wells, the agency detected perchlorate in the perched groundwater at concentrations up to 1,070 parts per billion, DOE officials said. In June 2009, DOE detected perchlorate as high as 3,090 parts per billion in the perched groundwater, DOE officials told us. With the approval of EPA and the state of Texas, DOE is using bioremediation to clean up perchlorate in the perched groundwater to 26 parts per billion and has put restrictions in place to prevent the use of perched groundwater without treatment. At Lawrence Livermore National Laboratory Site 300 in California, DOE first detected perchlorate in groundwater in 1998. The highest historical detection was 92 parts per billion in 2008. DOE agreed with EPA and the state of California in 2008 to clean up perchlorate to 6 parts per billion, the state’s drinking water standard. DOE is treating perchlorate using ion exchange and had reduced the highest level detected to 69 parts per billion in 2009, according to agency officials. 
Further, DOE is planning to study whether bioremediation can also be used to clean up the perchlorate-contaminated groundwater. At Los Alamos National Laboratory in New Mexico, DOE detected perchlorate in groundwater wells in the late 1990s. According to DOE officials, in general, current perchlorate concentrations in groundwater are less than 10 parts per billion, but detections range from 80 to 130 parts per billion in a group of deep wells that monitor a perched groundwater zone above the water supply aquifer. DOE is continuing to monitor the levels of perchlorate in groundwater, according to agency officials. At Sandia National Laboratories, also in New Mexico, between 2000 and 2009, DOE sampled for perchlorate in groundwater. Detections were at levels less than 15 parts per billion except in one well, where the highest detection in 2006 was 1,260 parts per billion. However, according to DOE officials, the Air Force sampled the well recently and detected perchlorate at only 2.7 parts per billion. In 2001, DOE detected perchlorate in soil ranging from 16.7 to 1,040 parts per billion. According to DOE officials, the state of New Mexico is currently requiring DOE to continue to monitor the levels of perchlorate in groundwater at Sandia and evaluate the need for further action. At the Energy Technology Engineering Center at the Santa Susana Field Laboratory in California, in 2000, DOE detected perchlorate in groundwater at 18 parts per billion, in soil at 3,600 parts per billion, and in sediment at 6 parts per billion, DOE officials said. According to DOE officials, the agency is planning additional sampling at new sites. DOD, DOE, and NASA officials we contacted agreed that perchlorate contamination at their facilities was generally caused by waste disposal practices that were commonly used before the enactment of key environmental laws, such as RCRA.
Historically, these practices included, among others, disposing of perchlorate waste in open pits, open burning and detonation of perchlorate, and using water to remove perchlorate residue from rocket engines, which contributed to contamination in groundwater. DOD, DOE, and NASA officials told us that their current practices for perchlorate use and disposal follow current federal and state environmental laws and regulations and, by doing so, lessen perchlorate releases. For example, DOD officials told us that whereas historically certain munitions were burned or detonated in open sites, they are now handled in contained areas and burned on steel pads subject to requirements for the management and disposal of the waste. Furthermore, according to Air Force officials, perchlorate is now removed using a dry process that seals the perchlorate before it is burned rather than a wet process that allowed it to contact the ground and potentially contaminate groundwater. In addition, at DOE's Lawrence Livermore National Laboratory Site 300, to reduce the amount of contaminants in general, including ammonium perchlorate, all but one of the outside firing tables—areas outside the laboratory used to test high explosives—that could release contaminants to the environment have been closed, according to DOE officials. According to NASA officials, NASA believes that there is no contamination caused by current perchlorate use during space shuttle launches, because rapid combustion consumes virtually all of the perchlorate during the first two minutes of flight and sampling around rocket launch complexes, such as the Kennedy Space Center, has detected no perchlorate. In addition to lessening perchlorate releases, from fiscal years 1999 through 2009, DOD spent at least $26 million developing perchlorate substitutes, according to a DOD official.
For example, in 1999, DOD’s Army Research, Development and Engineering Command began developing perchlorate substitutes for use in weapons simulators, flares, and rockets, according to DOD officials. Regarding weapons simulators, DOD researchers have developed perchlorate substitutes for training simulator hand grenades and artillery shells for use on Army training ranges, and DOD officials estimated that production of these simulators will begin in early 2011. DOD officials estimated that the use of the new weapons simulators should reduce potential perchlorate use on Army training ranges by 35 to 70 percent. Additionally, DOD is conducting research on ways to recycle perchlorate removed from discontinued military munitions. In the absence of a federal regulatory standard for perchlorate in drinking water, California and Massachusetts have adopted their own standards. In addition, at least 10 other states have established guidance levels for perchlorate in various media. California and Massachusetts have taken a variety of actions leading to establishing state regulatory standards for perchlorate. California promulgated its drinking water standard for perchlorate of 6 parts per billion in 2007, and Massachusetts set a drinking water standard of 2 parts per billion in 2006. Each state has also identified some of the benefits and costs of setting these standards. California first identified perchlorate as an unregulated contaminant requiring monitoring in January 1997 after the chemical was found in drinking water wells near Aerojet, a rocket manufacturer in Sacramento County that had used ammonium perchlorate as a solid rocket propellant. Subsequent monitoring that year by the California Department of Public Health found perchlorate in dozens of drinking water wells near Aerojet and in southern California, principally in the counties of Los Angeles, Riverside, and San Bernardino. 
State level testing also found perchlorate in Colorado River water, an important source of drinking water and agricultural irrigation water for southern California. In 1997, in response to the detections of perchlorate in drinking water, the California Department of Public Health set an action level of 18 parts per billion based on the high end of EPA’s 1995 provisional reference dose range, which had a drinking water equivalent of 4 to 18 parts per billion. In 1999, the department added perchlorate to the list of unregulated contaminants that public water systems were required to monitor. In January 2002, when EPA released a revised draft reference dose for perchlorate that corresponded to 1 part per billion in drinking water, the California Department of Public Health lowered its action level to 4 parts per billion, the lower end of EPA’s 1995 provisional reference dose range of values, and the lowest level that the analytical method in use at the time could reliably measure. Also in 2002, California enacted a law requiring the Office of Environmental Health Hazard Assessment (OEHHA) to establish a public health goal and the Department of Public Health to establish a state drinking water standard for perchlorate. Under state law, before the Department of Public Health establishes a standard, OEHHA must assess the contaminant’s risks to public health. OEHHA’s risk assessment is required to contain “an estimate of the level of the contaminant in drinking water that is not anticipated to cause or contribute to adverse health effects, or that does not pose any significant risk to health.” This level is called a public health goal. 
To calculate the public health goal, OEHHA used data from the 2002 Greer study on the effects of perchlorate on healthy adults, the same study used by the NRC in its 2005 report, applied an uncertainty factor of 10 to protect pregnant women and infants, and assumed that 60 percent of perchlorate exposure comes from water to arrive at a proposed public health goal of 6 parts per billion. According to OEHHA, the draft public health goal for perchlorate was more extensively reviewed than any of the other public health goals that OEHHA has developed. The draft technical support document for the proposed public health goal was reviewed twice by University of California scientists. EPA also peer reviewed the document. In addition, OEHHA held two public comment periods and a public workshop on the draft document. In March 2004, OEHHA established a public health goal for perchlorate in drinking water of 6 parts per billion. In its technical support document, OEHHA made a commitment to review the NRC report assessing the potential adverse health effects of perchlorate upon its completion and, if necessary, revise the public health goal. When NRC released its report in January 2005, OEHHA reviewed the report and determined that the findings were consistent with and supported the approach that OEHHA used to develop its public health goal. By law, the California Department of Public Health is required to set a drinking water standard as close to the public health goal as is economically and technologically feasible. 
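The arithmetic behind the goal can be sketched in a few lines. The point of departure, the uncertainty factor of 10, and the 60 percent relative source contribution are taken from the description above; the body-weight-normalized tap water intake rate is an illustrative assumption, not a figure stated in this report:

```python
# Sketch of a public-health-goal calculation in the style OEHHA used.
# The 0.007 mg/kg-day point of departure (Greer study), the uncertainty
# factor of 10, and the 60% relative source contribution come from the
# report; the tap water intake rate is an assumed illustrative value.

point_of_departure = 0.007           # mg/kg-day, no-effect level from the Greer study
uncertainty_factor = 10              # to protect pregnant women and infants
relative_source_contribution = 0.60  # share of exposure assumed to come from water
water_intake = 0.070                 # L/kg-day, assumed body-weight-normalized intake

reference_dose = point_of_departure / uncertainty_factor            # mg/kg-day
goal_mg_per_l = reference_dose * relative_source_contribution / water_intake
goal_ppb = goal_mg_per_l * 1000      # 1 mg/L = 1,000 parts per billion

print(f"{goal_ppb:.1f} parts per billion")  # → 6.0 parts per billion
```

With these inputs the calculation reproduces the 6 parts per billion goal; a different assumed intake rate would shift the result proportionally.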
To determine whether the standard for perchlorate should be proposed at the public health goal level of 6 parts per billion, the Department of Public Health evaluated the feasibility of standards at different levels in terms of available analytical methods for detecting perchlorate, monitoring costs, available treatment technologies for removal to the proposed maximum contaminant level, and the estimated fiscal impact on California drinking water utilities to comply with the proposed standard. The department estimated that the total annual costs to public water systems of a drinking water standard at 6 parts per billion would be about $23.9 million a year and that the total population avoiding exposure would be 518,600, whereas the total annual cost at 10 parts per billion would be an estimated $8.7 million with about 188,360 people avoiding exposure. The department noted that while the cost impacts of a standard above 10 parts per billion would be minimal, very little public health benefit would be achieved. To further evaluate the feasibility, the department estimated that the annual costs for larger systems that exceeded the drinking water standard would be $18 per customer, while annual costs for smaller systems would be $300 to $1,580 per customer. Because of this difference, the department proposed to provide variances for smaller systems based on affordability criteria. Based on that analysis, the department promulgated a regulatory drinking water standard for perchlorate of 6 parts per billion, which became effective in October 2007. Now that a standard has been established, California public drinking water systems must monitor to ensure that the drinking water they distribute complies with this standard. Should a system exceed the standard, it must notify the Department of Public Health and the public and take steps to immediately come back into compliance. 
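A quick cost-effectiveness check, using only the department's estimates quoted above, helps explain why the stricter level was viewed as feasible: the average annual cost per person avoiding exposure works out to be roughly the same at either candidate level.

```python
# Cost-effectiveness comparison of the two candidate standards, using
# only the department's published estimates cited in the report.
cost_6ppb, people_6ppb = 23.9e6, 518_600    # annual cost, people avoiding exposure
cost_10ppb, people_10ppb = 8.7e6, 188_360

avg_6 = cost_6ppb / people_6ppb
avg_10 = cost_10ppb / people_10ppb
incremental = (cost_6ppb - cost_10ppb) / (people_6ppb - people_10ppb)

# All three come out near $46 per person-year of avoided exposure.
print(f"~${avg_6:.0f}, ~${avg_10:.0f}, ~${incremental:.0f} per person-year")
```

The incremental figure shows that moving from 10 to 6 parts per billion protects each additional person at about the same annual cost as the less strict level, which is consistent with the department's conclusion that a standard above 10 parts per billion would yield little public health benefit.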
Systems in noncompliance may face fines or permit suspension or revocation, among other possible enforcement measures. California Department of Public Health officials told us that public water systems that exceed the standard generally treat the contaminated water or turn off the contaminated well. In addition to setting a regulatory standard for drinking water, California adopted best management practice regulations for handling materials, products, and waste that contain perchlorate. For example, those who manufacture, package, distribute, receive, or generate certain materials containing perchlorate must ensure they are properly contained in water-resistant packaging and labeled, and nonhazardous perchlorate waste must be disposed of in a hazardous waste landfill or a composite-lined portion of a nonhazardous landfill. These regulations, which were adopted in December 2005 and became effective in July 2006, apply to any person or business that manages—such as by using, processing, generating, transporting, storing, or disposing of—perchlorate materials or waste, with certain exceptions. In 2001, perchlorate was detected in groundwater at the Massachusetts Military Reservation at 600 parts per billion and, in 2002, in monitoring wells upstream from drinking water wells in the adjacent town of Bourne at concentrations less than 1 part per billion. The Bourne Water District shut three municipal wells when perchlorate was detected at levels less than 1 part per billion and, in March 2002, formally requested guidance from the Massachusetts Department of Environmental Protection on the health significance of perchlorate in drinking water.
Based on a review of available information on the toxicity of perchlorate, including EPA’s 2002 draft health assessment for perchlorate and draft reference dose with a drinking water limit equivalent to 1 part per billion, the department recommended that the water district notify sensitive subgroups, such as pregnant women, should perchlorate concentrations exceed 1 part per billion and advise them to avoid consuming the water. In 2003, the Massachusetts Department of Environmental Protection convened an external science advisory committee to evaluate the peer- reviewed studies on perchlorate. Given the limited number of such studies on perchlorate and its effect on sensitive populations, in February 2004, the department established a drinking water health advisory level for perchlorate of 1 part per billion consistent with EPA’s January 2002 draft perchlorate health assessment. According to state environmental officials, Massachusetts adopted an advisory level at 1 part per billion to protect sensitive populations, specifically, pregnant women and their fetuses, infants, children up to 12 years of age, and people with thyroid conditions. In March 2004, Massachusetts initiated the process for setting a drinking water standard by issuing emergency regulations requiring most public water supply systems to test for perchlorate. Perchlorate was found in 9 of 600 systems tested, with perchlorate detections ranging from just below 1 part per billion to 1,300 parts per billion. Next, to assess the health risks of perchlorate exposure, department toxicologists and an external science advisory committee reviewed scientific studies, including the 2005 NRC perchlorate study, as well as other information that had recently become available, such as a 2005 study on perchlorate in breast milk and data made available by FDA on perchlorate in food. 
To calculate a reference dose for perchlorate, Massachusetts used the lowest-observed-adverse- effect level from the Greer study as the point of departure. Given the limited sample size of the study (i.e., 37 subjects), Massachusetts used a larger uncertainty factor (100) than applied by the NRC (10) to be more protective of infants and pregnant women and their fetuses, and to allow for data gaps. The department also assumed a 20 percent exposure from drinking water to take into account the various other potential sources and exposure pathways of perchlorate (i.e., food), especially for infants and pregnant women, which resulted in a reference dose for perchlorate with a drinking water equivalent level less than 1 part per billion. To arrive at a drinking water standard, the department considered information on the availability and feasibility of testing and treatment technologies, as well as data that demonstrated that perchlorate can enter drinking water as a by-product of hypochlorite (e.g., bleach) solutions used as disinfectants in water treatment plants. The department chose to set the standard at a level that does not create any disincentive for public water systems to disinfect their water supplies. The department determined that a maximum contaminant level of 2 parts per billion would provide the best overall protection of public health, considering the benefits of disinfection, while retaining a margin of safety to account for uncertainties in the available data. In July 2006, Massachusetts became the first state to set a drinking water standard for perchlorate. At the same time, Massachusetts set cleanup standards for perchlorate, including a 2 parts per billion cleanup standard for groundwater that could be classified for drinking water. 
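The Massachusetts derivation follows the same structure as California's but with more conservative factors. In this sketch, the uncertainty factor of 100 and the 20 percent relative source contribution come from the description above, while the point of departure and the adult body-weight and water-consumption figures are illustrative assumptions rather than values stated in this report:

```python
# Sketch of the Massachusetts reference-dose derivation. The uncertainty
# factor (100) and 20% relative source contribution come from the report;
# the point of departure (0.007 mg/kg-day, the lowest dose in the Greer
# study) and the 70 kg / 2 L-per-day adult exposure figures are assumed
# illustrative values.

point_of_departure = 0.007           # mg/kg-day, assumed Greer lowest dose (LOAEL)
uncertainty_factor = 100             # larger than NRC's 10, per the report
relative_source_contribution = 0.20  # only 20% of exposure assumed from water
body_weight = 70.0                   # kg, assumed adult body weight
water_consumed = 2.0                 # L/day, assumed daily consumption

reference_dose = point_of_departure / uncertainty_factor   # mg/kg-day
equivalent_level_ppb = (reference_dose * body_weight / water_consumed
                        * relative_source_contribution * 1000)
print(f"{equivalent_level_ppb:.2f} parts per billion")  # → 0.49 parts per billion
```

Under these assumptions the drinking water equivalent level falls below 1 part per billion, consistent with the report's statement; the state then set the enforceable standard at 2 parts per billion after weighing disinfection by-product and feasibility considerations.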
In addition to setting a regulatory standard for perchlorate, Massachusetts has also taken action to minimize potential problems associated with perchlorate by issuing best management practices guidance for blasting operations and for fireworks displays. Also, Massachusetts officials reported that they are working with EPA to develop guidance for the use of hypochlorite solutions in water treatment plants. While California and Massachusetts estimated the costs and benefits of setting standards for perchlorate as part of their regulatory processes, neither state has conducted a comprehensive analysis of the actual costs and benefits of their perchlorate regulations. However, according to California officials, setting a regulatory standard for perchlorate has benefited public health. Massachusetts officials also cited protecting public health, particularly children’s health, as a key benefit, and added that cleaning up water supplies can also decrease the levels of perchlorate in food. However, while both states estimated the benefits in terms of the reduction in the number of people who would be exposed to perchlorate, they did not attempt to quantify the dollar value of these benefits. In addition, officials from both states told us that having a regulatory standard allows the state and public water utilities to identify polluters and hold them accountable for remediation. In particular, California officials told us that adopting a perchlorate regulation ended DOD’s reluctance to take action in response to perchlorate releases. Massachusetts officials reported that adopting a standard provided the impetus for the military to conduct perchlorate cleanup. Further, Massachusetts officials said that having a standard provides a simple and less costly means for determining whether remediation is necessary, as well as when no further remedial response action is necessary. Officials from both states said that their regulatory programs had costs to the state. 
While California officials acknowledged that there were administrative costs associated with developing the state's drinking water standard, they did not have data on those costs. EPA regional officials also cited the loss of water resources when contaminated wells were taken out of service as a cost to the state and noted that additional costs may be incurred to clean up the water should the state have to put some of these wells back into service because of drought conditions. Massachusetts reported that the process used to establish a drinking water standard cost the state approximately $1.35 million, or the equivalent of about 9 staff years. However, additional costs for monitoring and cleanup have been minimal because the number of public water systems with perchlorate detections above the level of concern has been small. Officials from both states said that their perchlorate regulations also had costs to public water systems, including initial and ongoing monitoring costs, capital and construction costs to install treatment facilities, and operations and maintenance costs. Initial and ongoing monitoring costs. California state officials estimated that sampling for perchlorate costs an average of $88 per sample, while Massachusetts state officials estimated an average of $125 per sample. The number of samples taken will vary by public water system and whether sampling shows that the system is out of compliance with the state's drinking water standard. While each state estimated that monitoring costs would be higher initially because all public systems would be required to sample for perchlorate, officials from each state reported that most public water systems are compliant and now only need to conduct annual monitoring. Capital and construction costs to install treatment facilities.
In general, determining the capital cost of a treatment facility, such as a blending station, an ion exchange facility, or a biological fluidized bed reactor, will depend on the individual site, according to California officials. Some of the factors that can play a role in the cost include the concentration of perchlorate, evidence of other contaminants, the need to purchase additional land, and construction costs. According to officials from each state, ion exchange is the technology generally used for treating perchlorate in drinking water, although California has also identified biological fluidized bed reactors as a cost-effective technology. Ion exchange systems have relatively low capital costs and are simpler to operate compared with biological fluidized bed reactors, which have higher capital costs and take up more space, according to officials at Aerojet. Operations and maintenance costs. Operations and maintenance costs will vary by type of treatment facility, water quality, and system flow rate. California officials noted that an ion exchange system is more expensive to operate than a fluidized bed reactor because of the cost of replacing the resin to which perchlorate molecules adhere as water passes through the system. When the resin becomes saturated with perchlorate, it must be replaced and disposed of as waste. In comparison, a fluidized bed reactor creates no waste disposal problem. Treatment costs for an ion exchange system can run about $165 to $185 per acre foot of water, whereas treatment costs for a fluidized bed reactor can run about $35 to $65 per acre foot, according to officials at Aerojet. California officials told us that the high operating costs of ion exchange can cause financial problems for small water systems. 
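The per-acre-foot figures above translate into a wide gap in annual operating cost for any given system size. This sketch applies the cost ranges attributed to Aerojet officials to a hypothetical system; the annual production volume is an assumed value chosen purely for illustration:

```python
# Rough annual treatment-cost comparison using the per-acre-foot ranges
# attributed to Aerojet officials in the report. The 5,000 acre-feet of
# annual production is a hypothetical system size, not a figure from
# the report.

ion_exchange = (165, 185)    # $/acre-foot, resin replacement drives the cost
fluidized_bed = (35, 65)     # $/acre-foot, no resin waste to dispose of
annual_production = 5_000    # acre-feet/year, hypothetical system size

def annual_cost(rate_range, volume):
    """Return the (low, high) annual cost for a per-acre-foot rate range."""
    low, high = rate_range
    return low * volume, high * volume

ix_low, ix_high = annual_cost(ion_exchange, annual_production)
fb_low, fb_high = annual_cost(fluidized_bed, annual_production)
print(f"ion exchange: ${ix_low:,}-${ix_high:,}; "
      f"fluidized bed: ${fb_low:,}-${fb_high:,}")
# → ion exchange: $825,000-$925,000; fluidized bed: $175,000-$325,000
```

At this assumed scale, the fluidized bed reactor's operating cost is roughly a quarter to a third of the ion exchange system's, which illustrates why California officials noted that ion exchange operating costs can strain small water systems despite the technology's lower capital cost.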
For this reason, California allows a water system serving fewer than 10,000 persons to apply to the department for a variance from the perchlorate drinking water standard if water system officials can demonstrate that the estimated annualized cost per household for treatment to comply exceeds 1 percent of the median household income of the community in which the water system’s customers reside.

In addition to the regulatory standards set by California and Massachusetts, at least 10 states have established, for various purposes, guidance levels for perchlorate ranging from 1 part per billion to 18 parts per billion for drinking water and from 1 part per billion to 72 parts per billion for groundwater. Depending on the state, a particular level may trigger public notice, serve as a screening tool for further action, or guide cleanup action, among other things. Table 1 provides a listing of state guidance levels for perchlorate in drinking water. Table 2 provides a listing of state guidance levels for perchlorate in groundwater. In addition, two states—Illinois and Wisconsin—have proposed regulatory standards for perchlorate in groundwater. Finally, New Jersey proposed a drinking water standard of 5 parts per billion in 2009, but the state’s newly appointed Commissioner of the Department of Environmental Protection decided in March 2010 to delay adopting a standard until EPA made its regulatory determination, and New Jersey’s proposed rule has lapsed.

We provided a draft copy of this report to DOD, DOE, EPA, and NASA for review and comment. We received a written response from the Assistant Deputy Under Secretary of Defense (Installations and Environment).
DOD believes that the report omitted a number of important facts and conclusions, including the major conclusions of the California Prioritization Protocol, the sources of perchlorate in Massachusetts, the amount of perchlorate imported primarily for fireworks compared with the amount of perchlorate used by DOD, information on the health risks of perchlorate, and the conclusions of the EPA Office of Inspector General’s report regarding perchlorate health risks. We do not agree. We believe the report contains the most important facts relevant to our objectives. Nonetheless, in response to DOD’s comments, we did modify the report to provide some additional details on the results of the California Prioritization Protocol. However, we made no changes regarding the sources of perchlorate contamination in Massachusetts because this information was already included in our description of Massachusetts’ actions to regulate perchlorate. We did not include information on the amount of perchlorate imported into the United States, the health risks of perchlorate, and the conclusions of the EPA Office of Inspector General’s report, because these issues were beyond the scope of our report. For example, we were asked to report on what is known about the likely sources of perchlorate in the nation’s water and food supply, not on the amount of perchlorate used for different purposes. Although an organization may use a significant amount of perchlorate for a specific purpose, the quantity used is not necessarily indicative of the amount of perchlorate released into the environment. Similarly, we were not asked to assess the public health risks of perchlorate exposure, so we did not address them in this report. Moreover, the scientific community is still debating health risks and, as we mentioned in the report, EPA has not yet made a final decision on whether to set a regulatory standard for perchlorate in drinking water.
DOD also provided technical comments, which we incorporated into the report as appropriate. DOD’s comments and our detailed responses are presented in appendix IV of this report. DOE and EPA did not provide formal comments. However, they provided technical comments by e-mail, which we incorporated as appropriate. NASA had no comments on the report. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretaries of Defense and Energy, the Administrators of the Environmental Protection Agency and the National Aeronautics and Space Administration, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or stephensonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V.

This report examines (1) what is known about the extent to which perchlorate occurs in the nation’s water and food supply and its likely sources; (2) what actions the Department of Defense (DOD), the National Aeronautics and Space Administration (NASA), and the Department of Energy (DOE) have taken to respond to or lessen perchlorate releases; and (3) the actions states, such as California and Massachusetts, have taken to regulate perchlorate. To determine what is known about the extent to which perchlorate occurs in the nation’s water and food supply and its likely sources, we took a variety of actions.
To determine what is known about the extent of perchlorate occurrence in the nation’s public drinking water systems, we obtained and analyzed sampling data collected from 2001 through 2005 under EPA’s Unregulated Contaminant Monitoring Rule 1. We assessed the procedure EPA used to collect the data by reviewing the statistical design, sample selection, and quality control methods used, and determined that the procedure was sufficiently reliable for the purposes of this report. To determine what is known about the extent of perchlorate occurrence in water and other media at DOD, NASA, and DOE installations and facilities, we obtained data on perchlorate occurrence at facilities owned or managed by these agencies. Specifically, we obtained and analyzed data from DOD’s Perchlorate Survey Database for fiscal years 1997 through 2009. We assessed the reliability of the data for relevant variables by electronically testing for obvious errors in accuracy and completeness. We also reviewed information about the data and the systems that produced them and interviewed officials knowledgeable about the data. When we found inconsistencies in the data, we worked with the officials responsible for the data to clarify these inconsistencies before conducting our analyses. We determined that the data were sufficiently reliable for the purposes of reporting on perchlorate sampling and detections at the installations tracked by the database. We reviewed data provided by NASA and DOE on perchlorate detections reported by their facilities. We also interviewed officials from DOD, NASA, and DOE to confirm that all data had been reported. To determine what additional information existed on the extent of perchlorate occurrence in water, we obtained data from EPA on perchlorate occurrence at facilities on the National Priorities List—known as Superfund sites.
We also reviewed perchlorate occurrence data provided by state environmental agencies in California, Massachusetts, Arizona, and Texas. To determine what is known about the extent of perchlorate occurrence in the nation’s food supply, we performed a literature search to identify research on perchlorate occurrence in food. We reviewed the results of research conducted by the Food and Drug Administration (FDA), the U.S. Department of Agriculture, the Centers for Disease Control and Prevention, and academic researchers. We also interviewed officials from FDA, the U.S. Department of Agriculture, and EPA, as well as researchers at academic and private institutions, to identify what is known about the extent of perchlorate in the food supply, the relative source contributions, and any gaps in knowledge. To determine what is known about the likely sources of perchlorate, we reviewed research literature examining the different sources of man-made perchlorate and its uses, as well as the conditions under which perchlorate occurs naturally. We also interviewed EPA, U.S. Geological Survey, and state officials; researchers from a consortium of public, private, and academic entities developing an analytical method to determine the sources of perchlorate; and other stakeholders to obtain information on the history of perchlorate use, as well as developments in technology to determine the sources of known perchlorate occurrences. To determine the actions DOD, NASA, and DOE have taken to respond to perchlorate releases, we reviewed and analyzed DOD data on perchlorate occurrence from DOD’s Perchlorate Survey Database, DOD state summaries, NASA and DOE perchlorate occurrence data, EPA data on perchlorate occurrence at facilities on the National Priorities List, and state regulatory agency reports. 
We also obtained and reviewed documentation from federal and state agencies on the actions these three agencies have taken to respond to perchlorate releases and the status of these actions. We also interviewed agency officials and officials from state and other federal agencies to obtain information and their views on (1) the actions DOD, NASA, and DOE have taken to respond to perchlorate releases; (2) the status of these actions; and (3) whether these actions have lessened perchlorate releases. We visited the following DOD and NASA facilities to discuss and observe their activities related to perchlorate cleanup: Edwards Air Force Base (DOD), Redstone Army Arsenal (DOD), the Jet Propulsion Laboratory (NASA), and the Marshall Space Flight Center (NASA). We selected sites to visit that were identified by EPA, DOD, and NASA officials as illustrative of their perchlorate response actions. To determine the actions DOD, NASA, and DOE have taken to lessen perchlorate releases, we reviewed documents from agency officials and discussed current policies and practices they follow to lessen perchlorate releases. We also visited Aerojet, a private facility that manufactures and tests rocket engines for the space and defense industries, to discuss and observe the operation of two types of perchlorate treatment facilities that are also being used by federal agencies. To determine the actions California and Massachusetts have taken to regulate perchlorate, we reviewed state documents, such as perchlorate occurrence reports, risk assessments, and cost benefit analyses, and interviewed state officials. To determine the actions of other states to regulate perchlorate, we interviewed EPA regional officials and obtained information from the Association of State Drinking Water Administrators and identified states that have set advisory levels and cleanup goals for perchlorate. 
We interviewed environmental and public health officials from these states and obtained and reviewed documents related to perchlorate guidance for drinking water and groundwater.

Appendix II: National Priorities List Sites Where Perchlorate Has Been Identified as a Contaminant of Concern

Redstone Arsenal (Army/NASA)
Phoenix Goodyear Airport Area, Unidynamics
Aerojet General Corp.
Edwards Air Force Base, Air Force Research Laboratory
Edwards Air Force Base, Jet Propulsion Laboratory
El Toro Marine Corps Air Station
Lawrence Livermore National Laboratory Site 300
Mather Air Force Base (former)
McClellan Air Force Base (former)
San Fernando Valley, Area 2-Glendale
San Gabriel Valley, Area 1-El Monte
San Gabriel Valley, Area 2-Baldwin Park
San Gabriel Valley, Area 4-Puente Valley
Sangamo Electric Dump/Crab Orchard National Wildlife Refuge
Fort Devens, South Post Impact Area
Naval Surface Warfare Center–Indian Head
Ordnance Products, Inc.
Lake City Army Ammunition Plant
Chemtronics (aka Amcel Propulsion Inc.)
Marine Corps Air Station Cherry Point
Marine Corps Base Camp Lejeune
Nebraska Ordnance Plant (former)
Radiation Technology, Inc.
Shieldalloy Corp.
Allegheny Ballistics Laboratory, Alliant Techsystems, Inc.

According to EPA, additional National Priorities List sites may have perchlorate at some level. However, EPA does not currently have enough information to determine whether perchlorate is a contaminant of concern at those sites.

The following are GAO’s comments on the Department of Defense’s letter dated July 26, 2010, and provided by the Assistant Deputy Under Secretary of Defense (Installations and Environment).

1. We revised the text to provide some additional detail about the California Prioritization Protocol.

2.
We disagree with DOD’s comment that, while the report mentions the results of perchlorate sampling in Massachusetts, it fails to mention that none of these detections were related to military sources and to describe the perchlorate sources that were determined by the state. This information appears on page 31 in the section of the report describing Massachusetts’ actions to regulate perchlorate.

3. Information on the amount of perchlorate imported primarily for fireworks compared with the amount of perchlorate used by DOD is beyond the scope of this report, which focuses on the extent and likely sources of perchlorate occurrence, and federal agency actions to respond to and lessen releases. Although an organization may use a significant amount of perchlorate for a specific purpose, the quantity used is not necessarily indicative of the amount of perchlorate released into the environment.

4. A discussion of the public health risks of perchlorate is beyond the scope of this report. The scientific community is still debating health risks associated with perchlorate.

5. Appendix III describes the actions DOD has taken to respond to perchlorate releases and notes when DOD’s assessment concluded that no further action is required.

6. We revised appendix III to note that DOD does not apply the 15 parts per billion screening level to soil.

7. A discussion of the public health risks of perchlorate is beyond the scope of this report.

8. This report draws no conclusions regarding the human health threat that DOD releases of perchlorate currently pose to public drinking water supplies because it is beyond the scope of our work.

9. A discussion of DOD’s efforts to verify the conclusions from its sampling program with state and federal regulators is beyond the scope of our report.

10. A discussion of the public health risks of perchlorate is beyond the scope of this report.

11.
Because a discussion of the public health risks of perchlorate is beyond the scope of this report, we did not evaluate or report on the conclusions of the Inspector General’s report in this regard.

12. We disagree with DOD’s comment that our title is misleading. DOD is only one of three federal agencies whose actions we describe in the report and, therefore, we believe that the title is appropriate.

13. The report does not characterize the significance of detections. Rather, we note the range of detections at DOD installations and the number of installations with detections above 15 parts per billion—DOD’s current threshold level for conducting further investigation when perchlorate is detected in water to determine whether additional action is warranted.

14. The report mentions that sodium hypochlorite solutions used as a disinfectant in water and water treatment plants are a source of perchlorate. See pages 2 and 32.

15. We revised the text to clarify the DOD sampling information presented in the report, which includes the results of GAO’s analysis of data that exists only in narrative format.

16. We revised the text to include the Army’s description of actions taken at Redstone Arsenal.

17. We revised the text to clarify the Air Force’s position on the status of actions being taken to respond to perchlorate at Kirtland Air Force Base.

18. We revised appendix II to show that Mather and McClellan Air Force Bases are closed.

19. We revised appendix III to attribute Camp Edwards/Massachusetts Military Reservation to both the Air Force and the Army.

20. In appendix III, we revised the action column for McAlester Ammunition Plant, China Lake Naval Air Weapons Station, El Centro Naval Air Facility, NOLF San Nicolas Island, and NWS Seal Beach Detachment Fallbrook to reflect the information provided by DOD.

John B. Stephenson, (202) 512-3841, or stephensonj@gao.gov.
In addition to the individual named above, Stephen Secrist, Assistant Director; Elizabeth Beardsley; Mark Braza; N’Kenge Gibson; Mitchell Karpman; Susan Malone; Madhav Panwar; Jeremy Sebest; Ben Shouse; Matthew Tabbert; and Kiki Theodoropoulos made key contributions to this report.
Perchlorate is both a man-made and naturally occurring chemical. It is used in rocket fuel, explosives, fireworks, and other products. Naturally occurring perchlorate is produced through atmospheric processes and then settles on surface water or land. Perchlorate can disrupt the uptake of iodide in the thyroid, potentially interfering with thyroid function and negatively affecting fetal and infant brain development and growth. As of June 2010, there was no federal regulatory standard for perchlorate in drinking water, and the Environmental Protection Agency (EPA), which has the authority to regulate contaminants in public drinking water systems, had not determined whether to establish one. The Department of Defense (DOD), the National Aeronautics and Space Administration (NASA), and the Department of Energy (DOE) are the primary federal users of perchlorate. GAO was asked to examine (1) what is known about the extent to which perchlorate occurs in the nation's water and food supply and its likely sources; (2) what actions DOD, NASA, and DOE have taken to respond to or lessen perchlorate releases; and (3) what actions states, such as California and Massachusetts, have taken to regulate perchlorate. To address these questions, GAO analyzed data from EPA, DOD, NASA, and DOE, reviewed agency documents, and interviewed federal and state officials, researchers, and others. Perchlorate has been found in water and other media at varying levels in 45 states, as well as in the food supply, and comes from a variety of sources. EPA conducted one nationwide perchlorate sampling effort between 2001 and 2005 and detected perchlorate at or above 4 parts per billion in 160 of the 3,865 public water systems tested (about 4.1 percent). In 31 of these 160 systems, perchlorate was found above 15 parts per billion, EPA's current interim health advisory level.
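The detection rate cited above follows directly from the counts EPA reported. A one-line check, with the counts taken from the report:

```python
# Verifying the summary's figure: perchlorate was detected at or above
# 4 parts per billion in 160 of the 3,865 public water systems tested.
systems_tested = 3865
systems_with_detections = 160
rate_percent = round(systems_with_detections / systems_tested * 100, 1)
print(rate_percent)  # 4.1, matching the "about 4.1 percent" in the text
```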
Sampling by DOD, NASA, and DOE detected perchlorate in drinking water, groundwater, surface water, soil, and sediment at some facilities. For example, GAO's analysis of DOD data showed that perchlorate was detected at almost 70 percent of the 407 installations sampled from fiscal years 1997 through 2009, with detections ranging from less than 1 part per billion to 2.6 million parts per billion. A 2006 Food and Drug Administration study found perchlorate in 74 percent of 285 food items tested, with certain foods, such as tomatoes and spinach, having higher perchlorate levels than others. According to researchers, concentrations of perchlorate at or above 100 parts per billion generally result from activities involving man-made perchlorate, such as the use of perchlorate as a rocket propellant. Lower concentrations can result from the use of man-made perchlorate, atmospheric processes, or the use of fertilizer containing naturally occurring perchlorate. According to DOD, NASA, and DOE officials, the agencies have sampled, monitored and, at several sites, begun cleaning up perchlorate. When DOD detects perchlorate at or above threshold levels--currently 15 parts per billion for water--DOD is to investigate further and may take additional actions. DOD has taken actions beyond initial sampling at 48 of the 53 installations with perchlorate detections above 15 parts per billion. NASA is in the midst of a cleanup at the Jet Propulsion Laboratory in California and is monitoring the level of perchlorate in groundwater at three other facilities. In addition, DOE is cleaning up perchlorate at two facilities involved in high explosives research, development, and testing and is monitoring the level of perchlorate in groundwater at two other facilities. According to DOD, NASA, and DOE officials, the perchlorate detected at their facilities is largely the result of past disposal practices. 
Officials at these agencies told us that by complying with current federal and state waste disposal laws and regulations, they have lessened their perchlorate releases. In addition, DOD is developing perchlorate substitutes for use in weapons simulators, flares, and rockets. In the absence of a federal regulatory standard for perchlorate in drinking water, California and Massachusetts have adopted their own standards. California adopted a drinking water standard of 6 parts per billion in 2007, and Massachusetts set a drinking water standard of 2 parts per billion in 2006. The key benefits of a regulatory standard cited by state officials include protecting public health and facilitating cleanup enforcement. However, limited information exists on the actual costs of regulating perchlorate in these states. Also, at least 10 other states have established guidance levels for perchlorate in drinking water (ranging from 1 to 18 parts per billion) or in groundwater. This report contains no recommendations.
USPS has taken steps to respond to most of our prior recommendations to strengthen planning and accountability for its network realignment efforts. It has clarified how it makes realignment decisions and generally addressed how it integrates its realignment initiatives, but it has not established measurable performance targets for these initiatives. USPS believes that its budgeting process accounts for the cost reductions achieved through these initiatives. In our 2007 report, we stated that without measurable performance targets for achieving its realignment goals, USPS remains unable to demonstrate to Congress and other stakeholders the costs and benefits associated with its network realignment initiatives. We also reported that although USPS had made progress on several of its realignment initiatives, it remained unclear how the various initiatives were individually and collectively contributing to the achievement of realignment goals because the initiatives lacked measurable targets. Appendix I provides a brief description and identifies the status of USPS’s key realignment initiatives. Appendix II provides updated status information for all area mail processing (AMP) consolidations through July 2008. The Postal Accountability and Enhancement Act (PAEA) calls for USPS to, among other matters, establish performance goals and identify anticipated costs, cost savings, and other benefits associated with the infrastructure realignment alternatives in its Network Plan. The Network Plan describes an overall goal to create an efficient and flexible network that results in lower costs for both the Postal Service and its customers, improves the consistency of mail service, and reduces the Postal Service’s overall environmental footprint. In addition, the plan states that USPS’s goals are continuous improvement and savings of $1 billion per year through realignment and other efforts.
According to the plan, USPS will achieve these savings, in part, through three core realignment initiatives: Airport Mail Center (AMC) closures, AMP consolidations, and Bulk Mail Center (BMC) transformations. The specificity of the expected savings and other benefits related to the core initiatives varies in the plan’s discussion of measurable goals, targets, and results achieved.

Overall program targets: USPS estimated total savings of $117 million for AMC closures—including savings of $57 million in 2008 and $21 million in 2009—but provided no such figure for the AMP consolidations. Postal officials told us USPS is developing an overall program target for transforming the BMCs.

Evaluation of results: USPS has measured the results of its AMP consolidations through a post-implementation review. In 2007, we identified data consistency problems with this review. USPS has addressed these problems in an updated handbook issued in 2008 by revising its data calculation worksheets. No analogous process exists for measuring the results of USPS’s AMC closures, which included outsourcing some operations conducted at these facilities, relocating some operations to other postal facilities, and closing some facilities. We are issuing a report today on USPS’s outsourcing activities, which discusses USPS’s realignment decisions related to its AMCs. As part of this review, we concluded that USPS does not track and could not quantify the results of its outsourcing activities. We recommended that USPS establish a process to measure the results and effectiveness of those outsourcing activities that are subject to collective bargaining, including the AMCs. USPS agreed to establish a process for future outsourcing initiatives subject to collective bargaining, in which it would compare the financial assumptions that supported its outsourcing decision with actual contract award data 1 year after project implementation.
When we met with USPS officials in June 2008, we asked why they did not have measurable performance goals and targets for the individual realignment initiatives. The Deputy Postmaster General explained that the realignment targets are captured in USPS’s goal of saving $1 billion per year. Specifically, he explained that USPS will present its overall goals and targets in more detail as part of its internal budget, which will be presented to the Board of Governors in July 2008. USPS will have additional opportunities to provide information about its estimated costs and cost savings related to its realignment efforts in its annual report to Congress, which is required by the end of December. Developing and implementing more transparent performance targets and results can help inform Congress about the effectiveness of USPS’s realignment efforts. In 2007, we found there was little transparency into how USPS’s efforts were integrated with each other. We recommended that USPS explain how it will integrate the various initiatives that it will use in realigning the postal facilities network. In its Network Plan, USPS identifies three major realignment efforts: (1) Airport Mail Center closures, (2) consolidations of Area Mail Processing operations, and (3) transformations of Bulk Mail Centers. USPS briefly addresses the integration of its network initiatives, stating that their overall impact and execution are tightly integrated, and provides a few examples but little contextual information about what its future network will look like and how its realignment goals are being met. In a recent meeting, senior USPS officials provided more information that helps to put the integration of USPS’s three network realignment initiatives in context. They said this integration is expected to reduce USPS’s network and shrink its mail processing operations.
After integrating these three efforts, they said, USPS will continue to be the “first and last mile”—the “first mile” being the point of entry for mail into the system, and the “last mile” being the delivery of mail to customers nationwide, as required to meet USPS’s universal service mission. They expect to lower costs and achieve savings by reducing excess processing capacity and fuel consumption, as well as by working with the mailing industry to implement new technologies such as delivery point sequencing, flats sequencing, and Intelligent Mail®. Going forward, USPS has opportunities, in its annual report to Congress and in other reports and strategic plans, to further articulate how it plans to integrate these three initiatives and to what extent they are helping USPS meet its goals. USPS has partially responded to our prior recommendations related to improving delivery performance information by establishing delivery performance standards and committing to develop performance targets against these standards and provide them to the PRC in August. However, full implementation of performance measures and reporting is not yet completed. Delivery service performance is a critical area that may be affected by the implementation of the realignment initiatives. Delivery standards are essential for setting realistic expectations for mail delivery so that USPS and mailers can plan their mailing activities accordingly. Delivery performance information is critical for stakeholders to understand how USPS is achieving its mission of providing universal postal service, including requirements for the prompt, expeditious, and reliable delivery of mail throughout the nation. Delivery performance data are also necessary for USPS and its customers to identify and address delivery problems and to enable Congress, the PRC, and others to hold management accountable for results and to conduct independent oversight.
Our July 2006 report found that USPS’s delivery performance standards, measurement, and reporting needed improvement. We recommended that USPS update its outdated delivery standards, which did not reflect postal operations and thus were unsuitable for setting realistic expectations and measuring performance. We also recommended that the Service implement representative measures of delivery performance for all major types of mail because only one-fifth of mail volume was being measured and there were no representative measures for Standard Mail, bulk First-Class Mail, Periodicals, and most Package Services. Furthermore, we recommended that USPS improve the transparency of its delivery standards, measurement, and reporting. In December 2006, Congress enacted postal reform legislation that required USPS to modernize its delivery standards and measure and report to the PRC on the speed and reliability of delivery for each market-dominant product. Collectively, market-dominant products represent 99 percent of mail volume. In December 2007, USPS issued its new delivery standards and has committed to measuring and reporting on delivery performance for market-dominant products starting in fiscal year 2009. Moreover, USPS provided a specific proposal for measuring and reporting its delivery performance to the PRC, which has requested public comment on USPS’s proposal. Full implementation of delivery performance measures and reporting for all major types of mail will require both mailers and USPS to take actions to barcode mail and track its progress—a system referred to as Intelligent Mail®. USPS has taken steps to respond to our recommendations that it improve its communication of realignment plans and proposals with stakeholders. For key realignment efforts such as AMP consolidations, we found it is critical for USPS to communicate with and engage the public. 
Stakeholder input can help USPS understand and address customer concerns, reach informed decisions, and achieve buy-in. In our 2007 report, we concluded that USPS was not effectively engaging stakeholders and the public in its AMP consolidation process or effectively communicating its decisions. For example, USPS was not clearly communicating to stakeholders what it was planning to study, why studies were necessary, and what study outcomes might be. In addition, USPS did not provide stakeholders with adequate notice of the public input meeting or materials to review in preparation for the meeting. Furthermore, according to stakeholders, USPS offered no explanation as to how it evaluates and weighs public input in its decision-making process. To help resolve these and other issues concerning how USPS communicates its realignment plans with stakeholders, we recommended that USPS take the following actions:

Improve public notice. Clarify notification letters by explaining whether USPS is considering closing the facility under study or consolidating operations with another facility, explaining the next decision point, and providing a date for the required public meeting.

Improve public engagement. Hold the public meeting during the data-gathering phase of the study and make an agenda and background information, such as briefing slides, available to the public in advance.

Increase transparency. Update AMP guidelines to explain how public input is considered in the decision-making process.

USPS has incorporated into its 2008 AMP Communication Plan several modifications aimed at improving public notification and engagement. Most notably, USPS has moved the public input meeting to an earlier point in the AMP process and plans to post a meeting agenda, summary brief, and presentation slides on its Web site 1 week before the public meeting.
USPS has increased transparency, largely by clarifying its processes for addressing public comments and plans to make additional information available to the public on its Web site. In 2007, we found that stakeholders potentially affected by AMP consolidations could not discern from USPS’s initial notification letters what USPS was planning to study and what the outcomes of the study might be. This lack of clarification led to speculation on the part of stakeholders, which in turn increased public resistance to USPS’s realignment efforts. The initial notification letters were also confusing to stakeholders because they contained jargon and lacked adequate context to understand the purpose of the study. Furthermore, in 2007 we reported that stakeholders were not given enough notice about the public meeting, and we recommended that USPS improve public notice by providing stakeholders with a date for the public meeting earlier in the AMP process. In its 2008 AMP Communication Plan, USPS has eliminated most of the jargon from its notification letters and has generally provided more context as to why it is necessary for USPS to conduct the feasibility studies. For example, letters now name both facilities that would be affected by a proposed consolidation, whereas previously, only one facility was named. USPS also added a requirement that the public be notified at least 15 days in advance of a public meeting. In 2007, we found that public meetings required for AMP consolidations were occurring too late in the decision-making process for the public to become engaged in this process in any meaningful way. At that time, the meetings were held after the area office and headquarters had completed their reviews of the AMP consolidation studies and just before headquarters had made its final consolidation decisions. 
Stakeholders we spoke with were not satisfied with the public input process and told us that USPS solicited their input only when it considered the AMP consolidation a “done deal.” We also found that USPS did not publish agendas in advance of public meetings or provide the public with much information about the proposed studies. The only information available was a series of bullet points posted on USPS’s Web site several days before the meetings. This lack of timely and complete information further inhibited the public’s ability to meaningfully participate in the process. To make the meetings more focused and productive, and to give the public an opportunity to adequately prepare for them, we recommended that USPS make an agenda and background information available to the public in advance of the public meetings. Although USPS still holds the public meetings after the data-gathering phase of the study has been completed, the meeting now occurs earlier in the AMP review process. Currently, before the meeting, the study has been approved only at the district level—the area office and headquarters have not yet completed their reviews or validated the data by the time of the meeting. When we asked USPS why it did not move the meeting to the data-gathering phase of the study, USPS officials responded that it would be difficult to hold the meeting during the data-gathering phase because at that point, they do not know what operations could potentially be consolidated. However, to ensure that the public meeting is held within a reasonable amount of time after the study’s completion, USPS included a requirement in its 2008 AMP Communication Plan that the public meeting take place within 45 days after the District Manager forwards the study to the area office and headquarters. 
In addition, the initial notification letter now includes contact information for the local Consumer Affairs Manager, to whom the public can submit written comments up to 15 days after the public meeting; previously, this contact information appeared in the second notification letter. To help stakeholders better prepare for the public meeting, USPS plans to post a meeting agenda, presentation slides, and a summary brief of the AMP proposal on its Web site 1 week before the meeting. In addition, USPS plans to inform stakeholders in the public meeting notification letter that these materials will be posted on its Web site 1 week before the meeting. In our 2007 report, we found that stakeholders and the public were unclear as to how public input factored into USPS’s consolidation decisions. They wanted to know precisely how USPS took their input—letters, phone calls, public meeting results—into consideration when it made its decisions. We recommended that USPS increase the transparency of its decision-making process by explaining how it considers public input in the decision-making process. In a recent interview, senior USPS officials identified two additions to the 2008 AMP Communication Plan that address stakeholders’ concerns about how USPS considers public input. First, USPS considers written comments from stakeholders before the public input meetings and addresses these comments as part of the public input meetings. Second, USPS has modified its public input review process so that officials at the district, area, and headquarters levels consider, and are responsive to, public concerns. Senior USPS officials told us that they weigh public input primarily by considering the impact of any consolidations on customer services and service standards. Additionally, USPS officials told us that as AMP consolidations go forward, USPS will post standard information about each consolidation on its Web site and update this information regularly. 
Specifically, USPS plans to post initial notifications, a summary brief of the proposed AMP consolidation, specifics about the scheduled public meeting, a summary of written and verbal public input, and the final decision and implementation plans if an AMP consolidation is approved. Congress has also addressed USPS’s communication process. PAEA required USPS to describe its communication procedures related to AMP consolidations in its Network Plan. In response, the Network Plan discusses how USPS will publicly notify communities potentially affected by realignment changes and how it will obtain and consider public input. In addition, PAEA directed USPS to identify any statutory or regulatory obstacles that have prevented it from taking action to realign or consolidate facilities. Accordingly, USPS’s Network Plan identified delays related to implementing AMP consolidations. For example, USPS was directed not to implement certain consolidations until after GAO has reported to Congress on whether USPS has implemented GAO recommendations from its report issued in July 2007 to strengthen planning and accountability in USPS’s realignment efforts. These directions were included in the joint explanatory statement accompanying the Consolidated Appropriations Act for fiscal year 2008. We have previously discussed the difficulties that stakeholder resistance poses for USPS when it tries to close facilities and how delays may affect USPS’s ability to achieve its cost-reduction and efficiency goals. Part of the problem stemmed from USPS’s limited communication with the public. We believe that USPS has made significant progress toward improving its AMP communication processes since 2005. Going forward, it will be crucial for USPS to establish and maintain an ongoing and open dialogue with its various stakeholders, including congressional oversight committees and Members of Congress who have questions or are concerned about proposed realignment changes. Mr.
Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or Members of the Subcommittee may have. For further information about this statement, please contact Phillip Herr, Director, Physical Infrastructure Issues, at (202) 512-2834 or at herrp@gao.gov. Individuals making key contributions to this statement included Teresa Anderson, Kenneth John, Summer Lingard, Margaret McDavid, and Jaclyn Nidoh.

Realignment of Airport Mail Centers (AMC)

AMCs are postal facilities that have traditionally been operated for the purpose of expediting the transfer of mail to and from commercial passenger airlines. USPS’s Network Plan stated that USPS had terminated operations at 46 AMCs during fiscal years 2006 and 2007, and another 8 AMCs in fiscal year 2008. AMP consolidations of mail processing operations are intended to reduce costs and increase efficiency by eliminating excess capacity at USPS’s more than 400 processing plants. From 2005 through July 2008, USPS implemented 11 AMP consolidations, decided not to implement 35 studies (5 placed on indefinite hold), was continuing to consider 7 consolidations, and had closed 1 facility after consolidation. Because mailers have increased their sorting and transport of mail shipments to postal facilities near mail destinations, mailers have been bypassing bulk mail centers (BMC) and the centers are underused. Also, increased highway contract expenses and an aging postal distribution infrastructure have prompted USPS to evaluate its BMC network to determine how it can best support future postal operations. In July 2008, USPS issued a Request for Proposal to obtain input on a proposal to outsource some of its BMC workload so that USPS can use its 21 BMCs for alternative postal work. The Regional Distribution Centers were expected to perform bulk processing operations and act as Surface Transfer Centers and mailer entry points.
The Network Plan stated that this initiative has been discontinued because USPS determined that it would not generate the benefits originally anticipated.

GAO. U.S. Postal Service: Data Needed to Assess the Effectiveness of Outsourcing. GAO-08-787. Washington, D.C.: July 24, 2008.

GAO. U.S. Postal Service: Progress Made in Implementing Mail Processing Realignment Efforts, but Better Integration and Performance Measurement Still Needed. GAO-07-1083T. Washington, D.C.: July 26, 2007.

GAO. U.S. Postal Service: Mail Processing Realignment Efforts Under Way Need Better Integration and Explanation. GAO-07-717. Washington, D.C.: June 21, 2007.

GAO. U.S. Postal Service: Delivery Performance Standards, Measurement, and Reporting Need Improvement. GAO-06-733. Washington, D.C.: July 27, 2006.

GAO. U.S. Postal Service: The Service’s Strategy for Realigning Its Mail Processing Infrastructure Lacks Clarity, Criteria, and Accountability. GAO-05-261. Washington, D.C.: April 8, 2005.

GAO. U.S. Postal Service: USPS Needs to Clearly Communicate How Postal Services May Be Affected by Its Retail Optimization Plans. GAO-04-803. Washington, D.C.: July 13, 2004.

GAO. U.S. Postal Service: Bold Action Needed to Continue Progress on Postal Transformation. GAO-04-108T. Washington, D.C.: November 5, 2003.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
GAO has issued reports on the U.S. Postal Service's (USPS) strategy for realigning its mail processing network and improving delivery performance information. These reports recommended that the Postmaster General (1) strengthen planning and the overall integration of its realignment efforts, and enhance accountability by establishing measurable targets and evaluating results, (2) improve delivery service standards and performance measures, and (3) improve communication with stakeholders by revising its Area Mail Processing (AMP) Communication Plan to improve public notice, engagement, and transparency. The 2006 postal reform act required USPS to develop a network plan by June 2008 that described its vision and strategy for realigning its network; the anticipated costs, cost savings, and other benefits of its realignment initiatives; performance measures for its delivery service standards; and its communication procedures for consolidating AMP operations. This testimony discusses USPS's actions toward addressing GAO recommendations to (1) strengthen network realignment planning and accountability, (2) improve delivery performance information, and (3) improve communication with stakeholders. This testimony is based on prior GAO work, a review of USPS's 2008 Network Plan and revised AMP Communication Plan, and updated information from USPS officials. USPS did not have comments on this testimony. USPS has taken steps to respond to most of GAO's prior recommendations to strengthen planning and accountability for its network realignment efforts. In its June 2008 Network Plan, USPS clarified how it makes realignment decisions and generally addressed how it integrates its realignment initiatives. However, USPS has not established measurable performance targets for its realignment initiatives. USPS believes that its budgeting process accounts for the cost reductions achieved through these initiatives.
The Deputy Postmaster General explained that such performance targets are captured in USPS's overall annual goal of achieving $1 billion in savings. While these measures are not as explicit or transparent as GAO had recommended, USPS is required to report annually by the end of December to Congress on, among other matters, its realignment costs and savings. Also, USPS's annual compliance reports to the Postal Regulatory Commission (PRC) will provide opportunities for further transparency of performance targets and results. USPS's Network Plan notes that to respond to declining mail volumes, USPS must increase efficiency and decrease costs across all its operations. Given USPS's challenging financial situation, effective implementation of network realignment is needed, and USPS's annual reports could help inform Congress about the effectiveness of its realignment efforts. USPS has partially responded to GAO's recommendations to improve its delivery performance standards, measurement, and reporting, but full implementation of performance measures and reporting is not yet complete. USPS established delivery performance standards in December 2007. USPS's Network Plan stated that USPS would develop targets and measures to assess performance against these standards by fiscal year 2009. In addition, USPS has recently submitted a proposal for measuring and reporting on delivery service performance to the PRC. The PRC has requested public comment on USPS's proposal, which depends upon USPS and mailers implementing new technology. Delivery service performance is a critical area that may be affected by the implementation of the realignment initiatives.
USPS has also taken steps to address GAO's recommendations to improve communication with its stakeholders as it consolidates its AMP operations by modifying its Communication Plan to improve public notification and engagement, increasing transparency by clarifying its processes for addressing public comments, and making additional information available on its Web site. Going forward, it will be crucial that USPS establish and maintain an ongoing and open dialogue with stakeholders, including congressional oversight committees and Members of Congress who have questions or are concerned about proposed realignment changes.
In 1974, a class action suit filed in the U.S. District Court for the District of Columbia on behalf of individuals with mental illnesses alleged that the practice of treating the District’s mental health patients in an institutional setting violated the statutory rights of those individuals. Specifically, the plaintiffs asserted that patients at St. Elizabeths Hospital had a statutory right to appropriate care in alternative care facilities when less restrictive settings were clinically appropriate. In a ruling known as the Dixon Decree, the court ruled in favor of the plaintiffs in 1975, ordered the District to build a system to facilitate the provision of community-based treatment for these individuals, and continued oversight of the District’s progress in developing this system. In 1997, finding that the District was no closer to complying with the Dixon Decree than it had been 22 years earlier, the court placed the D.C. Commission on Mental Health Services in receivership and appointed a receiver to implement the transition to a community-based mental health system. This receiver introduced initiatives that sought to change the way the District delivered services, but implementation was slow, and he made little progress during his 2-year oversight of the commission. Thus, a second or “transitional” receiver was appointed on April 1, 2000, to facilitate the transition from court receivership to District control. (App. I summarizes the major court actions related to the Dixon Decree.) The transitional receiver was charged with developing a comprehensive plan for the District to achieve compliance with the Dixon Decree and resume full control of its mental health system.
The court approved a final plan in April 2001 and required the District to implement it; however, before the receivership could be ended, the court required the transitional receiver to certify that the District had the capacity to implement—and was implementing—the final plan. Although the court originally anticipated this certification in late 2001, in December 2001 the transitional receiver recommended extending the date, characterizing the implementation delay as largely unavoidable because of (1) additional time needed for recently hired senior DMH managers to begin major initiatives, and (2) the unexpected need for crisis services to respond to the September 11, 2001, terrorist attacks. Following this extension, the transitional receiver reported to the court that the District had made sufficient progress and, as a result, in May 2002 the court terminated the receivership and appointed the former transitional receiver as a court monitor to oversee the District’s continued implementation of the final plan. (See table 1.) When the transitional receiver was responsible for overseeing the District’s mental health system, the District was the largest provider of mental health services to its residents, treating approximately 10,000 consumers annually and employing close to 2,000 staff in fiscal year 2000. The focal point of the mental health system was St. Elizabeths Hospital, which was the major point of entry for all consumers in the system. St. Elizabeths Hospital provided a wide range of mental health services in an acute care setting, including more than 600 beds divided between two types of inpatient consumers, forensic and civil, for adults and children and youth. The District also directly provided services through outpatient facilities in the community, including two community mental health centers and five mobile community outreach treatment teams.
In addition to providing inpatient and direct services in the community, the District contracted with private community providers for housing, employment, case management, and other community-based services. In its contracts with private community providers, the District often used a “slot” system to allocate a defined number of consumers to providers and paid them a fixed daily rate per consumer. Under this system, providers did not compete to attract consumers and were paid regardless of performance, consumer satisfaction, or the actual delivery of service. The District and its providers focused primarily on treating the medical symptoms of the consumer without focusing as much on whether the individual was participating in his or her recovery from mental illness and successfully living in the community. Furthermore, the system did not have many safeguards in place, such as uniform provider standards, to involve the consumer in key aspects of service delivery, such as choosing a provider and developing a treatment plan based on the consumer’s goals. The transitional receiver identified the need for a restructured mental health system that had the flexibility to meet individual needs and allow consumers to successfully obtain treatment and live in the community, maximizing principles of accessibility, recovery, and consumer choice. In 2001, the federal share of Medicaid, an entitlement program in which states and the federal government are obligated to pay for covered services provided to an eligible individual, accounted for 8 percent of District mental health system revenue as compared to the national average of 22 percent. The transitional receiver identified the need to better utilize Medicaid as a major funding source. The District’s access to Medicaid funds had been limited because Medicaid did not cover most of the services provided at St. Elizabeths Hospital, considered under the Medicaid statute to be a large psychiatric institution.
This effect was exacerbated by the limited capacity in the developing community-based system to support inpatients ready for discharge. For example, in October 2000, District officials estimated that approximately 60 percent of individuals in acute care units at St. Elizabeths Hospital could be moved into the community where outpatient services covered by Medicaid would be available, if stable alternative housing were available. A second limit to the District’s accessing federal funds was that the District had not taken advantage of optional community-based mental health services that could be reimbursed through the Medicaid program. The transitional receiver required the District to implement a strategy adopted by at least 40 other states to expand the services reimbursable by Medicaid through an option to cover rehabilitative services, thus expanding the scope of eligible services and providers beyond that of the program’s traditional focus on services delivered by physicians and psychiatrists who work at hospitals, clinics, and other facilities. Rehabilitative services include crisis and emergency care, medication treatment, and community-based interventions. The variety of rehabilitative treatments and services covered by this Medicaid option is intended to facilitate a consumer’s recovery from mental illness, including restoring a consumer to his or her best possible functional level. The court-approved final plan broadly outlines the mental health system’s direction, philosophy, major roles, and governance. It represents a major shift in the District’s mental health system on several fronts, including the system’s structure and organization, method for enrolling consumers and paying providers, and involvement of consumers in their plan for recovery. 
For example, the final plan identifies the need to create a new mental health department with the additional responsibility of oversight along with continuing the District’s historic role as provider; envisions a significant change in enrollment and billing systems, such as linking payment to the delivery of services, and developing new funding strategies that increase federal reimbursement; and calls for the new system to have a built-in capacity to measure itself in key performance areas and to translate any findings into continued system improvements. Underpinning these structural changes is a refocusing of the mission of the District’s mental health system toward involving the consumer in treatment decisions and incorporating changes that facilitate the consumer’s recovery from mental illness and away from focusing primarily on treating the individual’s medical symptoms. The court also approved exit criteria for the Dixon lawsuit, which provide a basis for measuring the performance of the District’s mental health system and which must be met in order to end the Dixon case. The criteria cover four areas:

1. consumer satisfaction, which assesses consumers’ satisfaction with mental health services provided;

2. consumer functioning, which tracks consumers’ clinical, social, and other conditions upon entry into the mental health system and again after receiving services for a specified period of time;

3. consumer service delivery, which assesses the adequacy of the mental health system’s overall performance for consumers in a range of areas including treatment planning, coordination of care, and response to emergent and urgent needs; and

4. system performance, which demonstrates how well the community-based system of care is serving particular populations.

The first two areas require DMH to develop and implement methods for reviewing and measuring consumer satisfaction and consumer functioning and to use the data to refine the system.
To fulfill the remaining criteria, DMH is required to meet 17 performance targets, many of which measure activities identified as national best practices in the field of mental health. According to the court monitor, implementing the final plan, including developing the ability to measure DMH’s progress against the exit criteria, will take 3 to 5 years, with year 1 beginning July 1, 2001. In general, efforts for years 1 and 2 were expected to center on planning, laying the basic infrastructure for the system, and beginning to provide community-based services. By the end of year 3, which began October 1, 2003, DMH is expected to be stabilizing and improving performance within the system, and in years 4 and 5 DMH is expected to be actively measuring performance outcomes. (See table 2.) In addition to developing performance targets for the exit criteria, the court monitor is required to provide the court with semiannual reports on the District’s progress in meeting all of the exit criteria. The court monitor’s first two reports, submitted to the court in January 2003 and July 2003, respectively, focused primarily on DMH’s status in implementing the final plan and also included an update on the status of meeting the exit criteria to end the Dixon case. In accord with the transitional receiver’s final plan, the District restructured its mental health system by creating DMH to oversee the provision of mental health services, including the authority to set regulations and monitor compliance—a shift away from the structure of its predecessor office, which was primarily a provider of services. Under this structure, DMH also continues the District’s historic role as a provider of mental health services.
In its oversight role, DMH has developed certification standards and made use of licensing standards to enroll a network of providers to deliver an array of mental health services, which DMH continues to expand to ensure adequate capacity for community-based mental health services. DMH is in the early stages of implementing its new monitoring framework to ensure that services are complying with existing and newly established quality and safety standards. DMH remains the largest provider of community-based services and continues to provide inpatient mental health care for the District at St. Elizabeths Hospital. In 2001 the District took the first step toward implementing the final plan by passing legislation establishing DMH and giving it new oversight responsibilities, including setting regulations and monitoring community-based provider compliance. The significant organizational change accompanying the addition of oversight responsibilities required hiring new leadership and redeploying and retraining a large portion of existing staff. For example, of DMH’s 270 administrative and oversight staff positions, which represent approximately 14 percent of all budgeted staff for fiscal year 2003, the majority were new and required either redeployment of existing staff or hiring of new staff. Consistent with the final plan, DMH established a training institute to provide staff training and development, among other services. As of December 2001, a court report indicated that key leadership positions had been filled, including that of the director of DMH, who was hired by the mayor in April 2001. Subsequently, however, one key leadership position, DMH’s chief financial officer, experienced turnover, with four individuals serving in the role since April 2001. DMH has also hired two chief executive officers with experience in other systems undergoing reform to run its community-based services agency and St. Elizabeths Hospital, respectively.
(See table 3 for a summary of DMH's functional responsibilities, including oversight, by office.) DMH became the primary entity for overseeing a mental health system that is focused on community-based systems of care. (See table 4.) DMH's regulatory responsibilities include developing standards and certifying providers of services, such as rehabilitative services and supported housing at independent living facilities, and licensing community residential facilities. As of January 2004, DMH had certified 22 mental health rehabilitative services providers, licensed more than 148 community residential facilities, and was in the process of implementing a certification program to oversee more than 400 supported independent living facilities. DMH addressed rehabilitative services standards by developing and publishing specific provider certification standards that took effect on November 9, 2001.

In addition to its regulatory responsibilities, DMH must monitor providers' compliance with existing and newly developed quality and safety standards. DMH's oversight division, the Office of Accountability, has direct responsibility for monitoring compliance with standards. DMH has developed a monitoring framework that is in the early stages of implementation; DMH has begun using information from some monitoring efforts to assess provider compliance and continues to adjust other efforts. The following are examples of DMH monitoring efforts:

- Safety inspections, which are surveys of the sites where licensed providers offer services, are used to ensure that health and safety standards are met. In the first 11 months of 2003, DMH conducted at least 150 inspections of 148 eligible facilities. When DMH conducts site inspections, it can issue notices of infraction for violations of the standards. According to DMH, from April 2002 through January 2004, it issued 46 notices to 22 providers and more than $29,000 in fines for identified deficiencies, such as insufficient staff on duty, failure to report unusual incidents, inaccurate personnel records, and exceeding maximum capacity. Increasing the number of site inspections of facilities that serve DMH consumers is one of the goals included in the DMH annual "scorecard" submitted to the District Mayor's office, which tracks commitments and deadlines set for DMH.

- Provider audits, which are record reviews of certified rehabilitative services providers, are used to analyze trends across providers and to ensure that providers are meeting documentation and service standards. In January 2003, DMH completed its first round of audits for the 12 providers certified at that time. As DMH expected for the first year of applying standards, the audits found that providers were not in compliance with certain documentation requirements, such as having the approving practitioner sign the authorized treatment plan; as a result, all 12 providers were to implement corrective action plans. While these initial audits focused solely on provider documentation compliance, the second round of audits of all certified providers, which DMH expects to complete in early 2004, will examine how well specific services (such as medication treatment) are being provided.

- Routine, biennial recertification reviews for rehabilitative services providers, which include evaluations of recorded complaints, audits, and public comment, are used to ensure that individual providers are complying with certification standards. With the first round of recertification applications, begun in December 2003, DMH will be able to use data from these reviews to make decisions regarding providers' recertifications.

- Investigations of unusual incidents, which are conducted by the Office of Accountability and providers, are used to ensure consumer safety and reduce the occurrence of future incidents. DMH is expected to investigate any major unusual incident, such as a consumer death, an adverse drug reaction, or an allegation of abuse or neglect. Providers are expected to investigate other, less serious incidents, defined as any events that occur outside the normal routine of care, and they are required to report to DMH all unusual incidents and the actions taken to respond to them. Unusual incidents, which vary widely in severity, were reported 1,259 times in calendar year 2003, including 336 reports of major unusual incidents. Of the 1,259 unusual incidents reported for 2003, DMH resolved 528 cases, including 161 major unusual incident cases. The remaining 731 cases usually required additional information from providers or other District agency investigators before DMH could take action. According to a DMH official, a case typically remains pending for 30 to 90 days before a disposition is reached.

Through DMH, the District remains a direct provider of a significant portion of mental health services. DMH's own community services agency is the largest provider of community-based services in the District, acting as the primary provider for 55 percent of all consumers enrolled in the District mental health system as of October 2003. In addition, it is the sole provider of a number of services, including crisis response services for adult consumers through its Comprehensive Psychiatric Emergency Program and free pharmacy services for uninsured consumers. The number of consumers receiving community-based services directly from DMH grew from 4,191 in October 2002 to 6,971 in October 2003. Meanwhile, the total number of consumers served by the 13 other community-based providers increased from 2,612 in October 2002 to 5,631 in October 2003.
As envisioned by the transitional receiver's final plan, DMH has also taken steps to reduce the number of beds at St. Elizabeths Hospital, but reductions have been limited by the lack of community-based services and agreements with community hospitals for acute care. The intent of the plan was for St. Elizabeths Hospital to be primarily a forensic hospital and a safety net facility for the community-based system of services and for community hospitals. While neither the final plan nor the exit criteria for the Dixon Decree specify goals for the reduction in the bed census at St. Elizabeths Hospital as a condition of ending the Dixon case, the exit criteria specify that 60 percent of DMH's annual expenditures must be directed to community-based services. In DMH's 2004 proposed budget, 41 percent of funds, approximately $80 million, are allocated for community-based providers and 42 percent, approximately $81 million, are allocated for St. Elizabeths Hospital. The remaining 17 percent, approximately $34 million, are budgeted for administration, oversight, delivery systems management, and other direct service costs, some of which represent fixed costs for community-based services. DMH has decreased the number of occupied beds at St. Elizabeths Hospital—from 628 beds in October 2000 to 513 beds in October 2003. In July 2003, the court monitor reported that the current model of continued reliance on St. Elizabeths Hospital was not financially viable, did not promote the concept of community-integrated care, and was not in compliance with the court-ordered plan. However, DMH stated that the hospital's budget cannot be reduced without an additional decrease in the number of occupied beds. The chief executive officer of St. Elizabeths Hospital said that the census would not decrease until the community can support patients upon discharge, including providing access to affordable housing.
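A quick arithmetic check, using the rounded dollar figures above, shows how the fiscal year 2004 allocation compares with the 60 percent exit criterion. The sketch below is illustrative only; it uses the approximate amounts reported in the proposed budget.

```python
# Illustrative check of DMH's proposed FY2004 budget split against the
# Dixon exit criterion that 60 percent of annual expenditures go to
# community-based services. Amounts are the approximate figures
# reported above, in millions of dollars.
community = 80   # community-based providers (about 41 percent)
hospital = 81    # St. Elizabeths Hospital (about 42 percent)
other = 34       # administration, oversight, and other costs (about 17 percent)

total = community + hospital + other   # about 195
community_share = community / total    # about 0.41

# Shortfall relative to the 60 percent exit criterion, in millions:
required = 0.60 * total                # about 117
shortfall = required - community       # about 37
print(round(community_share, 2), round(shortfall))
```

On these rounded figures, community-based spending would need to grow by roughly $37 million, at a constant total budget, to satisfy the criterion.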
The court monitor estimates that for the community-based system to adequately meet the needs of District residents, DMH would have to double the current capacity. In its first 2 years, DMH developed and implemented a comprehensive enrollment and billing system that coordinates clinical, administrative, and financial processes. Two key attributes of this system that were described in the final plan are that it (1) links payment with planning for individual treatment and the provision of services and (2) increases access to federal funds through the development of mental health rehabilitative services, which are community-based mental health services that a state's Medicaid program can choose to provide. DMH has developed and implemented a system to link payment to authorized treatment plans, enroll consumers, reimburse providers, and bill Medicaid for rehabilitative services provided. However, moving to a fee-for-service (FFS) billing system for services has resulted in difficult adjustments, including managing cash flow, for some DMH providers. DMH's enrollment and billing system that links payment to treatment, as envisioned by the final plan, is in place and operating. Consumers can enter into the mental health system through a variety of points in the community, including calling DMH's Access Helpline, visiting a DMH-certified community-based service provider, receiving treatment in hospitals or emergency rooms, and receiving mental health assistance through other DMH outreach efforts. All District residents needing mental health services are eligible to receive them regardless of insurance coverage. The Access Helpline—which is a telephone hotline that provides crisis emergency services, enrollment assistance, and information and referral 24 hours a day, 7 days a week—or a certified community services agency (CSA)—which is responsible for acting as a clinical home and therefore assessing consumer needs and coordinating care—will enroll eligible consumers within 3 days of initial contact.
When enrolling in the system, the consumer chooses a CSA as a clinical home based on a number of preferences such as location and treatment specialties. (See fig. 1.) After choosing the CSA, a consumer meets with a clinical manager to develop a treatment plan, which includes objectives and a plan of services, called an individualized recovery plan for adults and an individualized plan of care for children and youth. Once a clinical manager and a consumer develop a treatment plan, it is submitted by the CSA to DMH for authorization. Upon authorization of the treatment plan, a consumer can begin accessing the approved services. These services must be provided by a CSA or by another DMH-certified provider; once services are delivered, the providers then bill DMH on an FFS basis for reimbursement. Screening consumers for eligibility to receive mental health services and billing DMH for services rendered are new responsibilities for providers. Providers will be paid only for services delivered that are identified by the treatment plan and authorized by DMH. As of December 2003, DMH had transitioned 12 of its 27 community-based services to the FFS enrollment and billing system, including all nine rehabilitative services, but 15 other services, such as consumer advocacy and peer support, had yet to be added. Services that have not been transitioned to the FFS system do not have to be identified in an authorized treatment plan; however, community-based providers must deliver these services according to their contractual agreements with DMH. In order to develop a system that links payment to services provided, DMH purchased management information systems that coordinate clinical, administrative, and financial processes for mental health services. These systems allow CSAs to enroll consumers in the mental health system, submit claims electronically, and retrieve their consumers’ demographic data. 
These systems also streamline DMH's administrative efforts by allowing DMH to electronically enroll consumers, authorize services, adjudicate claims, and generate payment reports for providers. The systems further help DMH monitor how much individual providers are billing, which helps the department project expenditures. DMH received the first batches of claims in June and July 2002, and as of October 2003 it reported that its mental health system had 12,602 consumers enrolled. However, DMH could not report the number of consumers who received services within a 90-day period, the time frame that is consistent with the court's definition of the provision of services to enrolled consumers. As of January 2004, DMH had paid rehabilitative services providers $30.4 million for claims submitted in fiscal year 2003. DMH projects that it will have paid these providers a total of $35 million to $40 million for claims submitted in fiscal year 2003. In December 2001, the Centers for Medicare & Medicaid Services approved the District's request to add the mental health rehabilitation services option to its Medicaid program. (See table 5.) Approval of the option increased both the number and scope of mental health services reimbursable by Medicaid. Under the option, DMH certifies and contracts with community providers to deliver covered services. DMH pays providers for any DMH-authorized service and, on behalf of contracted providers, files claims with the District Medicaid office for reimbursement of the federal share of the cost of Medicaid-covered services. Thus, there is no relationship between the District Medicaid office and the local providers for these services, nor is payment to providers contingent upon reimbursement by Medicaid. Other District community-based service providers that do not contract with DMH bill the District Medicaid office directly for their services.
DMH built mechanisms into the enrollment and billing processes to help providers and DMH work together to obtain Medicaid reimbursement. Access Helpline counselors work with providers to identify consumers who are eligible for and enrolled in the Medicaid program using eligibility data from the District Medicaid office. Before transmitting Medicaid-reimbursable claims to the District's Medicaid office, DMH checks each claim to ensure that the consumer is currently enrolled in Medicaid, that the provider is eligible, and that the covered service has been paid by DMH. Upon submission for reimbursement to the District's Medicaid office, DMH tracks the status of claims, receiving reports that detail the claims paid, waiting to be paid, and denied payment. These reports also provide the reasons that claims were denied. DMH is improving its overall enrollment and billing system to decrease the time providers spend on administration and to increase the time they spend serving consumers. For example, in October 2003, DMH changed a component of the billing system that delayed providers from offering services. The system had required providers to electronically update treatment plans every 90 days. To reinforce this requirement, the information system prevented the provider from entering any other consumer data, such as claims data for a service provided, until the plan was updated. DMH realized that this requirement was burdensome and prevented providers from serving consumers. As a result, DMH removed the requirement to update the treatment plan from the electronic billing system and is monitoring compliance with the 90-day requirement through an alternative mechanism. DMH projects that as the enrollment and billing system improves and the provision of community-based services continues to expand, mental health rehabilitative services will eventually generate approximately $36 million to $38 million annually in federal Medicaid funds.
As of November 2003, the District's Medicaid office had reimbursed DMH $17.5 million for fiscal year 2003—over 50 percent of the amount DMH paid to providers for rehabilitative services. As one condition of ending the Dixon case, federal Medicaid funds must cover at least 49 percent of the cost of all mental health rehabilitative services provided. Although DMH expects future growth in Medicaid revenue, many individuals served by the District's mental health system, especially adults, are not eligible for Medicaid. According to DMH officials, moving to an FFS system represented a major change in business operations for DMH providers and has presented challenges for them; however, DMH has offered assistance to all certified rehabilitative providers. DMH offered training for providers on service and billing requirements and grants for building the infrastructure required to participate in the system. In addition, consultants funded by DMH can work with providers on developing sound business practices, including cash flow analysis, budgeting in an FFS environment, staff assignments and productivity, record keeping, and billing. Even with assistance, providers have experienced challenges since they began billing DMH on an FFS basis. Two providers reported that becoming certified as a CSA requires considerable investments of time and money. According to one provider, the new system requires more "business savvy" and planning by providers for revenue peaks and valleys because providers are no longer guaranteed revenue regardless of the level of services provided. Thus, as the same provider stated, they must plan ahead to ensure they can meet payroll in months like December and February, when fewer consumers seek services because of holidays and winter weather.
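The "over 50 percent" figure can be checked against the 49 percent exit criterion with the amounts cited in this report. Note the two figures were reported as of different dates (reimbursement as of November 2003, payments as of January 2004), so this is only an approximate check.

```python
# Approximate check of the federal Medicaid share against the Dixon
# exit criterion that Medicaid cover at least 49 percent of mental
# health rehabilitative services. Figures in millions of dollars, as
# cited in the report; the two amounts were reported as of different
# dates, so the ratio is only indicative.
medicaid_reimbursed = 17.5   # federal share reimbursed for FY2003
dmh_paid = 30.4              # DMH payments to providers for FY2003 claims

share = medicaid_reimbursed / dmh_paid
print(round(share, 3), share >= 0.49)
```

Against DMH's projected $35 million to $40 million in total FY2003 claims, the same $17.5 million would fall between roughly 44 and 50 percent, which is why continued Medicaid revenue growth matters for meeting the criterion.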
Problems managing cash flow were exacerbated because provider contracts with DMH were tied to billing projections, which meant that DMH could not pay claims for providers who exceeded their projections until their contracts were changed. The Mental Health Coalition, whose members are primarily DMH-certified providers, wrote to DMH several times in fiscal year 2003 listing a number of concerns with the billing process; its primary concern was the lack of consistently timely payment. By August 2003, DMH made the necessary contract changes to allow providers to be paid for the remainder of the fiscal year and, according to senior officials, had a plan under way for fiscal year 2004 to prevent this problem from recurring. DMH provided data showing that in fiscal year 2003 it adjudicated—that is, made a decision to pay or deny—79 percent of submitted claims within 30 days; however, after adjudication, the District of Columbia Treasury must then pay the approved claims, which, according to DMH, took an average of 15 additional days. The court monitor has identified claims payment as an area of concern that will continue to be monitored. DMH did not provide the court monitor with a measure of timely reimbursement in 2003, but, according to the court monitor, in fiscal year 2004 DMH will be required to report the percentage of claims being paid within 30 days of submission. Also central to DMH's new mental health system is facilitating consumers' participation in their recovery from mental illness, an approach that is consistent with the final plan, as well as national trends. Consistent with this focus, DMH has established requirements in two key areas: consumer choice and consumer protection. With regard to consumer choice, DMH has requirements in place to ensure that consumers participate in the selection and receipt of services.
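One way to see why claims payment remained a concern: adjudication within 30 days plus an average of 15 additional days at the Treasury implies that even a timely-adjudicated claim could take about 45 days from submission to payment, longer than the 30-day window DMH will report against in fiscal year 2004. A rough sketch of that arithmetic, using only the averages reported above (actual claim-level timing would vary):

```python
# Back-of-the-envelope timing for a claim adjudicated at the edge of
# the 30-day window, using the figures reported above.
adjudication_days = 30    # 79 percent of FY2003 claims were adjudicated within 30 days
treasury_days_avg = 15    # average additional days for the D.C. Treasury to pay

total_days = adjudication_days + treasury_days_avg
reporting_window = 30     # FY2004 measure: claims paid within 30 days of submission
print(total_days, total_days <= reporting_window)  # prints: 45 False
```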
However, DMH’s initial review of rehabilitative services provider records showed gaps in documentation of consumer participation, such as a lack of documentation of the consumers’ participation in—and agreement with—their treatment plans for 41 percent of the records reviewed. DMH is addressing these gaps with providers to ensure that their practices comply with these requirements and adequately involve consumers in their treatment. Consumer protection policies are also evolving, with DMH publishing a uniform consumer grievance policy in October 2003. DMH officials emphasized that moving to a consumer-focused model is a long-term change that will take place gradually. Consumers entering the District’s mental health system are faced with important choices that help shape the provision of care they receive, including the choice of a CSA as a clinical home that will provide and coordinate care, choice of other DMH-certified providers, and choice of services through involvement in treatment planning. As part of the enrollment process, both the CSA and the Access Helpline are required to present consumers with the option to select any DMH-certified CSA to serve as the clinical home, a choice typically made based on their preferences, such as location and treatment specialties provided. Every CSA that serves as a consumer’s clinical home is required by DMH’s certification standards to have a policy in place to inform consumers about these and other choices available to them. For example, each CSA’s consumer choice policy must also inform consumers about the availability of peer and family support services—such as transportation, education, nutrition services, and recreation activities—as well as how to access the services. DMH’s certification standards also require CSAs to coordinate the treatment planning process for their consumers and to document consumer participation. 
For example, CSAs are required to develop a diagnostic assessment and treatment plan for each consumer that follows the consumer throughout the service delivery and reimbursement systems. Each CSA acting as a clinical home is required to obtain a consumer's written consent to treatment as well as provide all consumers with a statement outlining their rights and responsibilities during the enrollment and treatment process. To assist consumers in obtaining mental health services, the Director of DMH's Office of Consumer and Family Affairs (OCFA) told us that DMH employs 15 to 20 mental health consumers as enrollment specialists who are available to other consumers as a resource in making these choices. DMH also offers training, some of which is conducted by other mental health consumers, that is available to consumers and their families on selecting providers and planning treatment. In addition, DMH's enrollment handbook for new consumers summarizes aspects of the enrollment process, such as the types of mental health services available, the range of consumer choices, and the activities a consumer can expect during enrollment. DMH is developing a provider report card, intended for use in the second quarter of 2004, that contains specific information about each rehabilitative services provider to better facilitate consumer choice. For example, the provider report card will give providers a numerical score in areas such as consumer access, billing and claims, and consumer complaints, enhancing consumers' basis for selecting a provider. Finally, OCFA is also responsible for overseeing the development and implementation of the consumer satisfaction review required in the Dixon exit criteria, an initiative that DMH envisions as expanding the role of consumers in measuring the quality of services they receive in the District's mental health system.
The court monitor and District mental health advocates have highlighted areas relating to consumer choice that need attention and that are consistent with DMH's plans for additional development. In a January 2003 report to the court, the court monitor recommended that DMH develop a system for tracking consumer choice to help determine whether choices truly are available. The Director of OCFA told us that DMH would begin addressing this issue by identifying concerns relating to choice through consumer focus groups planned for each CSA in 2004. In addition, University Legal Services, the designated protection and advocacy program for the District, told us that consumers do not have enough information about how to access providers in the mental health system, and therefore it has published its own consumer rights manual. For example, an official with this organization told us that District consumers often do not have a choice among the full range of providers because many CSAs have limited capacity and have had to develop waiting lists. University Legal Services also cited delays in the receipt of community-based services by consumers discharged from St. Elizabeths Hospital. While DMH is not required to report current baseline data regarding the receipt of community-based services by consumers following a hospital discharge, one condition for ending the Dixon case will be to demonstrate that 80 percent of known discharged inpatients receive services in a non-emergency, community-based setting within 7 days of a hospital discharge. DMH's initial audits of the documentation practices of each of its certified rehabilitative services providers showed gaps in documentation of consumer participation in the development of their treatment plans.
Of the 740 unique consumer records DMH reviewed in its audit completed in January 2003, 38 percent did not have a consumer's signature on the treatment plan and 41 percent did not document the consumer's participation in and agreement with the treatment plan. Each of the 12 providers reviewed by DMH was asked to develop a self-audit program and implement staff training to address areas of deficiency identified in the audits, which, according to DMH, were to be expected in the first year of applying provider standards. Concerns raised by other stakeholders were consistent with the results of DMH's audits of provider documentation practices. For example, in a July 2003 letter to DMH, University Legal Services noted systemic problems with treatment plans relating to consumer participation and accuracy, such as being unsigned, lacking consumer preferences, and failing to reflect consumer medical needs. DMH's written response to University Legal Services highlighted the provider documentation audits completed by DMH as evidence that the department is identifying treatment plan issues but acknowledged that these problems will take time to resolve. In October 2003, DMH published a consumer grievance policy, required by the legislation creating DMH, that strengthened the basic consumer protection provisions in DMH's provider certification standards. Prior to publication of this policy, CSAs and other mental health providers were required to establish written complaint and grievance policies and procedures but did not have to include specific criteria consistent with an overall and uniform DMH policy. For example, the DMH policy published in October 2003 required providers to review, investigate, and respond within 5 business days to grievances alleging abuse or neglect or the denial of a service.
While consumers can continue to file grievances with CSAs or DMH, the new policy also specifically outlines the conditions under which consumers can request an external review of a grievance that can result in a fact-finding hearing or mediation process. The new policy also requires DMH to facilitate and fund peer advocacy programs that are independent of providers to assist consumers throughout the grievance process. In addition, providers are required to take specific steps to increase consumer awareness about their grievance policies, such as posting the various options and procedures for filing a grievance and documenting that the consumer received a copy of the provider’s policy. DMH’s monitoring of consumer complaints and grievances is also evolving. As of January 2004, DMH had contracted with an organization to create a database that will allow OCFA to track consumer grievances and identify systemic issues. OCFA expects that the database will be developed in the first few months of 2004. The new grievance policy also specifies that DMH will periodically review the implementation of the provider policies and publish a semiannual report on the types and dispositions of all grievances filed as well as highlight noteworthy trends, patterns, and other statistical information. Prior to this policy, DMH could not ensure that grievances were being tracked and did not review the extent to which providers were implementing their grievance procedures. The court monitor worked with DMH and others to develop performance targets to measure compliance with the Dixon exit criteria. On December 11, 2003, the court approved qualitative requirements for two exit criteria measures relating to consumer satisfaction with services and level of functioning. In addition, the court approved 17 performance targets for 17 exit criteria measures relating to system performance. 
Although the court monitor envisioned fiscal years 2004 and 2005 as the appropriate time frame for DMH to both measure and improve its performance, DMH faces major challenges to collecting and verifying the accuracy of the performance data, including developing methods to electronically collect the data, correcting known data deficiencies, and working with providers to submit accurate data. In working to measure the District's compliance with the exit criteria, the court monitor, in conjunction with an outside expert and the legal parties to the Dixon case, developed two qualitative requirements and 17 performance targets, which were approved by the court in December 2003. The qualitative requirements address two of the exit criteria measures—consumer functioning and consumer satisfaction. For these two measures, DMH is required to develop and implement consumer satisfaction and functioning review methods and begin using the data obtained by these methods to make refinements to service delivery. DMH has contracted with a consumer organization to build a consumer satisfaction initiative patterned after model programs around the country. As of December 2003, OCFA had conducted a telephone survey of consumers to help DMH develop this consumer satisfaction review. In addition, DMH officials told us that they are testing the effectiveness of a tool for assessing consumer functioning. According to the court monitor, DMH will provide a progress report in early 2004 on the status of these two reviews but is not likely to submit the review methodologies to the court monitor, as the exit criteria require, for several more months. The court also approved 17 exit criteria measures, each with a specific performance target. (See table 6.) Two of the 17 measures articulate overall system performance targets that DMH must meet in annual reviews of the services provided to adult consumers and to child and youth consumers.
For example, DMH’s system must perform positively for 80 percent of the adults who are sampled and reviewed. The remaining 15 measures define specific system performance targets that DMH must meet in the aggregate for 4 consecutive quarters, such as demonstrating the timely receipt of supported housing services for a specific percentage of persons referred to supported housing. Once DMH meets these targets for the specified time frame, the court monitor ends active monitoring of the measure. However, according to the court order, DMH is required to continue to submit data to the court monitor for all exit criteria measures regardless of their monitoring status, giving the court the ability to require that DMH meet the performance targets for any exit criteria measure showing a substantial drop in performance. The Dixon case can be dismissed when the court monitor submits a report to the court affirming that the District has achieved compliance with all required performance targets and qualitative requirements for all of the exit criteria, and the court accepts that finding. Originally, the court expected the proposed performance targets submitted by the court monitor to be accompanied by baseline measures of performance. The proposal approved by the court in December 2003, however, did not include previous requirements for DMH to submit baseline measurement data along with the performance targets. According to the court monitor and a DMH official, baseline data were omitted because (1) historical data are generally incomplete because of problems with data systems as well as a general lack of reliable and consistent previous data, and (2) many of the performance targets require information that was not collected by DMH and its providers, such as the number of consumers referred to supported housing. In commenting on a draft of this report, DMH noted that it was unable to identify comparable baselines from other jurisdictions. 
Meeting the exit criteria performance targets, and thus ending the Dixon case, is a multiyear effort that requires DMH to develop and carry out a plan that will satisfy the court on three levels: (1) developing policies and practices that address the requirements of the exit criteria and demonstrating that DMH monitors the extent to which these policies are implemented, (2) developing specific methods for DMH’s collection and verification of the accuracy of the data, and (3) meeting the required performance targets for one full year as defined by the court. In November 2003, the court monitor anticipated that reviews relating to the first two requirements—policies and procedures and data collection and verification methods—will start in early 2004, but it may be a year before these two requirements are met for all of the exit criteria measures. The court monitor expects that DMH will concurrently develop and implement a plan to measure performance on all three levels that will allow the department to begin generating valid performance data in 2004. Although DMH began to collect data in July 2003 for some of the exit criteria measures based on the earlier methodologies approved by the court in May 2002, DMH officials told us in November 2003 that this data collection was preliminary and that they would not begin to develop a specific plan for meeting these three requirements until the court approved the final performance targets, which occurred in December 2003. Satisfying the court regarding DMH’s demonstration of specific methods for collecting and verifying the accuracy of the performance data is likely to be challenging because of impediments to data collection as well as the fact that collected data may be incomplete or inaccurate. 
DMH and its providers face three major obstacles in collecting accurate data used to meet the actual performance targets: (1) establishing methods to collect electronic data, (2) correcting known data deficiencies, and (3) ensuring the accuracy of information collected and reported by providers. A description of each of these challenges follows. Although the final exit criteria measures and performance targets were not approved until December 2003, DMH began collecting monthly data nonelectronically for 8 of the 17 exit criteria measures from providers in July 2003. For example, mental health rehabilitative services providers submit nonelectronic monthly reports to DMH on services provided to homeless consumers who are diagnosed with a serious mental illness. However, because the court approved revisions to some of the exit criteria measures in December 2003, providers will have to refine some of the information that they collect and report to DMH. In addition, the performance targets themselves, which did not exist prior to December 2003, will also affect the types of data collected. DMH officials told us that the department may be able to modify its enrollment and billing information system to collect some—but not all—of the data for the exit criteria measures; thus, developing a central repository of information is still under discussion between DMH and the court monitor. Beyond this, a related issue will be developing the capacity to appropriately factor in other data currently collected by DMH in a way that does not duplicate the monthly data submitted by mental health rehabilitative services providers. For example, officials told us that DMH's school-based services program collects information that could be used as part of the calculation to meet the performance target requiring 75 percent of children and youth with serious emotional disturbances to receive services in a natural setting such as the home or school.
However, the information collected through this program is not consumer-specific, nor is it linked to DMH’s enrollment and billing information system, which may, according to DMH officials, eventually be the primary mechanism for collecting data on many of the performance targets. As part of the exit criteria requirements for the Dixon case, DMH conducted an initial consumer services review in the spring of 2003 that identified two major gaps in the provision of services to children and youth that need to be addressed to ensure the accuracy of the performance target data collected by DMH. The court monitor’s semiannual reports to the court have similarly highlighted these findings as areas requiring action. First, the review showed that many of the children and youth placed in residential treatment centers (RTC) do not have a clinical home at a core services agency (CSA) as intended and thus are not receiving DMH services. In addition to raising concerns about the coordination between DMH and RTCs, the lack of services for these individuals could also affect the accuracy of the data collected by DMH to meet a performance target that requires DMH to demonstrate that 85 percent of children and youth with serious emotional disturbances served by the system are living in their own or surrogate homes. Second, according to the court monitor, the consumer services review also revealed a significant gap between the number of children and youth enrolled in DMH’s system and the number who are actually receiving services. The court monitor’s report acknowledged that the source of this gap, while unknown, could reflect flaws in DMH’s data management system, its disenrollment policy, or clinical standards, such as required follow-up with consumers who have missed an appointment. 
Because the four penetration rate performance targets are calculated using the number of enrolled consumers who received at least one service in the past quarter, DMH will need to determine the cause of this gap to ensure that its performance data are accurate. As of March 2004, DMH had not provided us with the number of consumers who were enrolled and receiving services within a 90-day period. In July 2003, DMH began collecting unaudited monthly data from mental health rehabilitative services providers for a range of exit criteria measures, including the provision of supported housing and supported employment. The department has also begun to collect preliminary discharge data from St. Elizabeths Hospital and local hospitals providing acute care to mental health care consumers. However, as of November 2003, neither the mental health rehabilitative services providers nor the local hospitals were required to track this information, and DMH did not have processes in place for verifying the accuracy of these data. DMH’s Director told us in December 2003 that DMH is planning to incorporate reporting requirements as part of the recertification process for mental health rehabilitative services providers. As of January 2004, DMH was planning to collect quarterly discharge data from local hospitals but was still working out the details. Until these reporting processes are put in place, DMH will continue to collect discharge data from a combination of St. Elizabeths Hospital (for adults) and mental health rehabilitative services providers (for children and youth). Even after hospital reporting processes are implemented, the court monitor and DMH expect some difficulties in collecting comprehensive discharge data. For example, a DMH consumer may seek care in a local hospital that does not typically serve DMH consumers and thus does not provide quarterly data to DMH. 
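The effect of the enrollment/service gap on a penetration rate target can be illustrated with a small sketch. All figures and the function name below are hypothetical; the report does not provide actual enrollment or service counts.

```python
# Hypothetical illustration of a penetration rate target: the share of
# enrolled consumers who received at least one service in the past quarter.
def penetration_rate_pct(served_in_quarter: int, enrolled: int) -> float:
    return served_in_quarter / enrolled * 100

# Suppose 6,000 of 10,000 enrolled consumers were served in the quarter.
reported = penetration_rate_pct(6_000, 10_000)
print(f"reported rate: {reported:.0f}%")  # 60%

# If 2,000 enrollment records are stale (e.g., consumers who left the
# system but were never disenrolled), the same service count understates
# actual performance:
adjusted = penetration_rate_pct(6_000, 10_000 - 2_000)
print(f"adjusted rate: {adjusted:.0f}%")  # 75%
```

This is why determining the source of the gap matters: an inflated enrollment denominator would depress the measured rate even if services were being delivered as required.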
Recognizing potential challenges in collecting data from local hospitals, the court monitor proposed—and the court approved—a continuity of care performance target, one of the 17 performance targets, that allows DMH to limit its measurement against this exit criteria measure to “known” inpatient hospital discharges. The court monitor expects DMH to include in its plan specific strategies for obtaining and verifying the accuracy of these data. The potential lack of accurate data—for example, from local hospitals—may mean that some discharged individuals are not factored into the data used to measure performance. In addition, the lack of consumer-specific data collected by DMH to meet the performance targets will also be a challenge. For example, because mental health rehabilitative services providers submit only the total number of homeless consumers served, not the names of the individual consumers, the information is not consumer-specific and may include duplicate counts, compromising the accuracy of the measurement. The court monitor has also told us that DMH will need to verify that the performance target data are unduplicated. DMH provided written comments on a draft of our report. DMH’s comments are included, with our detailed responses, in appendix II. The court monitor provided technical comments, which we incorporated as appropriate. In its comments, DMH stated that the court-ordered plan for the reform of the District’s mental health system envisioned comprehensive and sweeping reforms, noting that accomplishing such reforms would result in over 50 percent of DMH’s budget being redirected in a 5-year period. DMH described six broad changes to the District’s mental health system in the court-ordered implementation plan. 
These changes included (1) implementing a mental health authority, (2) instituting systems of care, (3) developing a new set of accountability functions and changing the oversight and monitoring of mental health services, (4) incorporating consumer protections, (5) shifting the methods and operations for financing the delivery of inpatient and outpatient mental health services, and (6) creating a new Department of Mental Health with new responsibilities for operating within the city government. In addition, DMH stated that in spite of the District’s failure to meaningfully participate in the last 20 years of mental health reform, DMH is moving aggressively to become a positive contributor to the health and well-being of the community and to persons in priority service groups. DMH commented that the draft report addressed several issues in depth while overlooking other reforms prominent in the final plan and the legislation creating DMH and other services such as Assertive Community Treatment, Supported Employment, and Supported Housing. The scope of our work was the status of DMH’s efforts to establish a community-based system of mental health care, focusing on four key areas of reform central to meeting the exit criteria for the Dixon Decree. While the other reform initiatives and services are important, we believe that DMH’s status with regard to meeting the exit criteria is an appropriate gauge of compliance with the Dixon Decree. We believe that making a comprehensive assessment of the system’s performance before DMH begins reporting on the exit criteria is premature. DMH also provided specific comments that clarified, updated, or added information regarding its status in implementing the final plan (see app. II). Where appropriate, we incorporated these changes into the report. As agreed with your office, we plan no further distribution of this report until 30 days from its date of issue, unless you publicly announce its contents. 
At that time, we will send copies of this report to the Director of the District of Columbia Department of Mental Health. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please call me at (202) 512-7118 or Carolyn Yocom at (202) 512-4931 if you have questions about this report. Major contributors to this report are listed in appendix III. The court determined that the District and the federal government had a joint responsibility to provide the plaintiffs with community-based treatment in the least restrictive conditions. This ruling is known as the Dixon Decree. To comply with the court order, the involved parties drafted a final implementation plan that generally required an assessment of the plaintiff class members and periodic reports on progress in establishing a community-based system. Congress passed legislation that required the District to complete implementation of an integrated coordinated mental health system by October 1, 1991. Congress transferred to the District sole responsibility for establishing the required local mental health services for District residents. The transfer was not effective until October 1, 1987. The court determined that no progress had been made to comply with the final implementation plan. The involved parties therefore developed a second plan. This plan is known as the service development plan. The court appointed a special master to oversee implementation of the service development plan. The court determined that the District was still unable to comply with the terms of the service development plan. As a result, the involved parties negotiated a third plan, whose goals the District met. This plan is referred to as the Phase I agreement. The parties negotiated and began to implement a fourth plan, which was significantly broader in scope and required activities such as hiring personnel and developing a homeless service plan. 
This plan is referred to as the Phase II plan. The District admitted noncompliance with the fourth plan, and the plaintiffs requested the appointment of a receiver. On September 10, the court appointed a receiver on the basis that only a receiver provides the court with enough day-to-day authority to force compliance without causing confusion and ambiguity in the administration of the commission. On March 6, with agreement of all parties, a new receiver, referred to by the court as a transitional receiver, was appointed and officially assumed this role on April 1. The transitional receiver was scheduled to return control of the mental health system to the District between January 1 and April 1, 2001. On April 2, the court approved the transitional receiver’s final plan and required the District to implement it. The plan provided a policy framework for meeting the Dixon mandate, including developmental milestones but not specific service targets. On May 22, the court found that the District was capable of implementing and was in fact implementing the final plan and thus terminated the receivership. The order also appointed the former transitional receiver as court monitor of District compliance with the final plan, and it approved exit criteria agreed upon by all parties. The monitor was directed to report to the court and the parties no less frequently than every 6 months. On December 11, the court approved a revised set of exit criteria, which replaced the criteria approved in May 2002, with measurement methodologies, definitions, performance targets, and qualitative requirements. 
In addition, the court ordered that the case would be dismissed when (1) the court monitor affirms that the District has complied with all of the performance targets and qualitative requirements and the court accepts that finding; or (2) the District moves for dismissal and demonstrates “substantial” compliance with the performance targets and qualitative requirements and the court determines the case should be dismissed. The following is our response to DMH’s letter dated March 18, 2004. Our responses below correspond to the comments numbered in the margin of DMH’s letter. 1. DMH commented that the draft report references only a portion of the requirements of the Mental Health Reform Act and the court-ordered plan and does not discuss the development of “systems of care,” which it characterizes as cornerstones of the court-ordered plan and the legislation creating DMH. We believe that the report adequately characterized the immensity of the tasks faced by DMH. The scope of this report encompassed the actions taken by DMH since its creation to comply with the Dixon Decree. As such, we reported on the status of the District’s effort to establish a community-based system of mental health care, with a focus on four key areas of reform that were confirmed by the court monitor to be central to compliance with the Dixon Decree. Because many of the services and initiatives under way were still evolving and had incomplete data at the time of our work, we did not believe that a comprehensive assessment of DMH’s progress on all activities was appropriate. As a result, we focused on the data collection methods for the 17 performance targets relating to the District’s compliance with the court’s exit criteria. 2. 
We modified the report where appropriate to address information about the additional consumer protections, the number of supportive housing arrangements, the relocation of acute care beds, the hiring of senior managers, the status of leadership positions at DMH, the increases in service provision at the public CSA, the difficulty of adding new psychiatric services in local hospitals, and the results of provider audits. 3. DMH stated that the draft report did not accurately portray payments made to providers in fiscal year 2003 and that our findings on slow payments to providers could not be substantiated. We modified the report to reflect the updated data on billings for fiscal year 2003. However, we disagree that payment problems could not be substantiated. Provider contracts with DMH were tied to the billing projections, which meant that DMH could not pay claims for providers who exceeded their projections until their contracts were changed. The court monitor’s 2003 reports also indicate that claims payment has been an area of concern. Our draft report acknowledged that DMH had made the necessary contract changes to allow providers to be paid for the remainder of fiscal year 2003. Additionally, we cited DMH’s plan for fiscal year 2004, which aimed to prevent similar billing problems from occurring. 4. With regard to our assessment of DMH’s status in meeting court expectations, DMH commented that it believes table 2 reflects our assumption that DMH has not begun work on meeting the 15 system performance targets or begun using consumer functioning and consumer satisfaction data. DMH stated that it has not reported on these steps but has initiatives under way to meet each one and therefore the table should reflect that the “step has been started.” As of March 2004, the court monitor had not received evidence that these steps were in process, but confirmed that DMH had conducted preliminary work that had not been captured in court documents. 
Thus, we modified the report to reflect that these steps were “in planning.” In addition, the report refers to the work under way to meet the exit criteria, such as the consumer telephone survey conducted in 2003 to help DMH develop its consumer satisfaction review and data collection efforts from providers for some of the exit criteria measures. 5. DMH commented that the draft report did not indicate that there were no standards for provider audits, that provider audits had never been conducted, and that DMH expected that providers would not be in compliance. The draft report stated that DMH’s new responsibilities for regulating and monitoring providers, including conducting audits, were a shift away from the structure of its predecessor office and that the monitoring framework was in the early stages of implementation. We revised the report to reflect DMH’s expectation that providers would not be in compliance with the new standards. 6. With regard to the draft report’s discussion of unusual incidents, DMH noted that the District’s mental health system had never experienced a review of unusual incidents and stated that unusual incidents ranged in severity from consumers returning late for dinner to injury and abuse. DMH also stated that it is faced with thousands of unusual incidents and said that it sorts through incidents quickly and is beginning to identify trends. We modified the report to reflect the range of severity of unusual incidents. 7. DMH commented that the report’s subheading, “Enrollment and Billing System Is Designed to Coordinate Clinical, Administrative, and Financial Processes” represents a significant misapprehension. DMH stated that the enrollment and billing systems are not the major functions in the design of the clinical, administrative, and financial processes. 
DMH characterized the billing system as an administrative function that helped with the transition from a grants-based system of delivering services to a performance-focused fee-for-service (FFS) system. We believe that the enrollment and billing system is an important design component. For example, the final court-ordered plan outlines that a comprehensive enrollment and billing system that links payment to treatment is necessary to access federal Medicaid revenue through the mental health rehabilitation services option, which was identified in our October 2000 report and in the final plan as a key component for reforming the District’s mental health system. Further, DMH’s enrollment and billing information system is used to enroll consumers, reimburse providers, and, according to DMH officials, may eventually be the primary mechanism for collecting the performance data required to meet the Dixon exit criteria. 8. Regarding our findings on consumer choice and community follow-up after a consumer’s discharge from the hospital, DMH stated that comments from the court monitor and the local Protection and Advocacy for Individuals with Mental Illness (PAIMI) agency (University Legal Services) were not quantified and that the report provides no other basis upon which to assess their reliability. As of January 2004, DMH was in the process of developing methods to track consumer choice and had not reported data to the court on community follow-up after discharge from the hospital. Absent that data, we relied on the court monitor’s judgments regarding DMH’s progress in implementing the court-approved plan. Additionally, the District mental health advocates with whom we spoke are part of the federally mandated protection and advocacy system. 9. 
DMH commented that our findings on DMH’s capability to measure performance against the exit criteria (1) presented an incomplete account of events leading to the development of the performance targets and (2) missed critical factors for why baseline data were not included in the exit criteria requirements, specifically, that having a baseline would be impossible because services did not exist before DMH became a department and there was no basis for comparison with other jurisdictions. In response to DMH’s first concern, we revised the report to clarify that the court monitor did not act alone to develop the targets for measuring performance against the exit criteria. Regarding the second concern, the draft report stated that baseline data were omitted because historical data are generally incomplete and many of the performance targets require the collection of new information from DMH and its providers. We modified the report to reflect that DMH was unable to identify comparable baselines from other jurisdictions. 10. With regard to our findings on data collection and integrity, DMH commented that the draft report did not take into account the developmental stage of the data collection process. DMH noted that some of the performance criteria do not lend themselves to electronic data collection, gaps in service utilization data for children and youth placed in residential treatment centers must be viewed in the context that five city departments carry out placements, and the draft report’s statement that there is a gap between the number of children and youth enrolled and the number receiving services lacks quantitative support. 
We modified the draft to reflect that the two performance measures related to homeless consumers do not lend themselves to electronic data collection, which was confirmed by the court monitor, and that addressing the gap in service utilization data requires coordination with other District agencies that typically have their own tracking systems. The draft report stated that according to the court monitor, the first consumer services review for children and youth revealed a gap between the number of children and youth enrolled and the number receiving services. DMH did not provide us with the number of children and youth enrolled and receiving services. In the absence of that data, we relied on the court monitor’s report, which cited the gap identified by the consumer services review. Major contributors included Susan Barnidge, Laura Sutton Elsberg, Kevin Milne, and Elizabeth T. Morrison.
Since 1975, the District of Columbia has operated its mental health system under a series of court orders aimed at developing a community-based system of care for District residents with mental illnesses. Placed in receivership from 1997 to 2002, the District regained full control of its mental health system in 2002 but has been ordered to implement a court-approved plan for developing and implementing a community-based mental health system. Additionally, the District must comply with exit criteria that must be met in order to end the lawsuit. The court expects that it will take the District 3 to 5 years to implement the court-ordered plan and begin measuring performance against the exit criteria, with year 1 beginning in July 2001. GAO was asked to report on the current status of the District's efforts to develop and implement (1) a mental health department with the authority to oversee and deliver services, (2) a comprehensive enrollment and billing system that accesses available funds for federal programs such as Medicaid, (3) a consumer-centered approach to services, and (4) methods to measure the District's performance as required by the court's exit criteria. The District created the Department of Mental Health (DMH) in 2001 to oversee the provision of mental health services. DMH's oversight methods have included establishing certification and licensing standards for participating providers and beginning to monitor provider compliance. DMH also continues to deliver direct services, acting as the primary provider for 55 percent of all consumers enrolled in the mental health system as of October 2003, and operating over 500 beds at St. Elizabeths Hospital, the District-run institution specializing in inpatient care for people with acute, intermediate, and long-term mental health needs. DMH has also implemented a comprehensive enrollment and billing system designed to coordinate clinical, administrative, and financial processes. 
The system links payment to consumer treatment and increases access to federal funds by providing mental health rehabilitative services through the District's Medicaid program, which reimbursed DMH $17.5 million in federal Medicaid funds in fiscal year 2003. Providers have faced challenges managing cash flow in a fee-for-service system where service demand varies throughout the year. Also, because provider contracts were tied to the fee-for-service billing projections, DMH could not pay claims for providers who were exceeding their projections until their contracts were changed, and providers did not always receive timely claims payments in fiscal year 2003. DMH senior officials noted that DMH has a plan in process to prevent this problem from recurring. DMH activities to increase the involvement of consumers in their own treatment and recovery process are evolving. While DMH has established a number of requirements in two key areas--consumer choice and consumer protection--its initial review of providers' records showed gaps in documentation of consumer participation in treatment planning for 41 percent of the records reviewed. Consumer protection policies are also evolving, as DMH instituted a consumer grievance policy that provides a uniform process for ensuring that all consumer grievances are tracked. DMH is developing data collection methods for 17 performance targets aimed at determining the system's performance against the court's exit criteria. Although the court monitor expects DMH to both measure and improve its performance in fiscal years 2004 and 2005, DMH faces major challenges in accurately measuring its performance, including establishing methods to collect electronic data, correcting known data deficiencies, and working with providers to submit accurate data. In its comments on a draft of the report, DMH indicated that the report did not reflect the entire spectrum of progress made since the creation of DMH. 
While the progress cited by DMH is important, GAO believes that focusing on DMH's status in meeting the exit criteria is an appropriate gauge of its overall compliance with the Dixon Decree.
IRS relies on data from SSA to determine the accuracy of SSNs and names recorded on tax documents submitted by individual taxpayers. IRS uses this information to establish the identity of each taxpayer and to ensure that each transaction is posted to the correct account on the IMF. When processing paper tax returns with missing or incorrect SSNs, IRS service centers first try to make corrections by researching IRS files or other documents (for example, Form W-2 wage and tax statements) that accompany a tax return. Returns that can be corrected, along with those that match SSA records, are posted to the “valid” segment of the IMF. Returns that cannot be corrected are posted to the “invalid” segment of the IMF, using either the incorrect SSN on the tax return or a temporary number assigned by IRS. As of January 1, 1995, 4.3 million accounts were posted on the invalid segment of the IMF, and 153.3 million accounts were posted on the valid segment. IRS created the invalid segment of the IMF to store the accounts of taxpayers who had changed their names, because of marriage or divorce for example, and had not yet informed SSA of the name change. However, IRS has posted returns to the invalid segment of the IMF to cover other situations, such as when a taxpayer (1) uses the SSN of another individual, (2) uses an SSN that is not issued by SSA, or (3) is assigned a temporary number. IRS tries to resolve invalid accounts and move them to the valid segment of the IMF by corresponding with taxpayers to verify their identities, periodically matching invalid accounts against updated SSA records, and reviewing tax documents subsequently filed by taxpayers. Our objectives were to (1) measure the growth of accounts on the invalid segment of the IMF, (2) assess IRS’ procedures to verify the identities of tax return filers whose returns were posted to the IMF invalid segment, and (3) identify any effects the procedures may have on IRS’ Tax Systems Modernization (TSM) goals and its income-matching program. 
To measure the growth of accounts on the IMF invalid segment, we reviewed IRS management and internal audit reports about the growth and composition of accounts on the IMF. We also interviewed officials at IRS’ National Office on the makeup of the IMF invalid segment and the reasons for the growth in these accounts. To assess IRS’ procedures for verifying taxpayer identities, we reviewed (1) IRS procedures (1995 and pre-1995) for processing returns with missing or incorrect SSNs, (2) the notice IRS uses to verify taxpayer identities, and (3) other pertinent documents. We also interviewed officials at IRS’ National Office and at IRS’ Austin, TX; Cincinnati, OH; Fresno, CA; Ogden, UT; and Philadelphia, PA service centers on the process for posting returns to the IMF invalid segment and changes implemented in 1995 to verify taxpayer identities. We chose Cincinnati because of its proximity to the audit team conducting the work. We chose the other 4 centers because, out of IRS’ 10 service centers, they processed and posted more than 60 percent of the accounts on the IMF invalid segment in 1994. To identify the potential effects of IRS’ posting procedures, we did the following: We selected a random sample of 400 tax year 1993 returns from accounts that were posted to the IMF invalid segment before IRS implemented its new procedures. Our sample results are not projectable to the universe of accounts on the IMF invalid segment. Our objective was to determine whether the filers accurately reported their wages and withheld taxes. The sample consisted of returns with refunds of more than $1,000 that were posted to the IMF invalid segment by the Austin, Fresno, Ogden, and Philadelphia service centers between January 1, 1994, and June 30, 1994. The 400 returns included 50 from each center that had been posted with IRS temporary numbers and 50 from each center that had been posted with incorrect SSNs. 
The Cincinnati service center’s Criminal Investigation Branch contacted employers of the 400 filers to verify employment and wage information. The branch obtained responses on 357 returns. For the 43 returns with no response, we verified the wage information using information return transcripts. We analyzed 100 of the 400 returns to determine why they posted to the IMF invalid segment and to profile some of the filers’ characteristics. The 100 returns included 25 returns (12 that had been posted with temporary numbers and 13 that had been posted with incorrect numbers) randomly selected from each of the 4 service centers. Among the 100 returns were 58 that were posted to accounts containing a computer code that automatically released refunds. We also interviewed cognizant officials from IRS’ National Office and the previously mentioned service centers regarding any effects that returns with missing or incorrect SSNs may have on IRS’ income-matching programs and its TSM plans. We reviewed IRS reports on TSM plans and analyzed documents relating to IRS’ processing costs. We did our audit work from December 1993 through May 1995 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from you or your designee. On June 21, 1995, the Assistant Commissioner for Taxpayer Services, the Staff Chief for the National Director of Submission Processing, and other IRS staff, including representatives from the Office of Chief Counsel, provided us with oral comments. Their comments are summarized and evaluated on pages 13 and 14 and incorporated in this report where appropriate. From 1986 through 1994, according to IRS data, the average annual growth rate of accounts on the invalid segment of the IMF was more than twice the growth rate of accounts on the valid segment—5 percent versus 2 percent, respectively. Figure 1 shows year-to-year growth rates since 1986. 
During this period, the number of accounts on the invalid segment of the IMF grew from 2.8 million on January 1, 1986, to 4.3 million on January 1, 1995, while the number of valid accounts grew from 130.2 million to 153.3 million. From 1990 through 1994, the size of the IMF invalid segment grew by about 821,000 accounts. Most of that growth (52 percent) resulted from IRS’ increased use of temporary numbers to process and post returns. Accounts with incorrect numbers made up the other 48 percent. The IRS National Office official responsible for monitoring accounts on the master file explained that the increase in accounts with temporary numbers stemmed from IRS’ decision in 1990 not to send verification notices to taxpayers whose returns were processed with temporary numbers. Many of these filers, he said, cannot obtain SSNs because they are not legal residents of the United States but are entitled to refunds of withheld taxes or earned income credits. He said that most of these taxpayers were using temporary numbers verified in previous years and that requiring reverification each year would have unduly increased taxpayer burden. He speculated that when IRS’ decision not to require verification became more widely known, more taxpayers who could not obtain SSNs began filing tax returns. Another factor affecting the number of accounts on the invalid segment of the master file was IRS’ willingness to release refunds and allow the accounts to remain on the invalid segment, even though taxpayers’ responses to the verification notice did not resolve the invalid condition. Before 1995, IRS accepted a taxpayer’s response that a return was “correct as filed,” and taxpayers were not required to provide documentation (marriage certificate, birth certificate, etc.) to verify their identities. In 1994, IRS paid out $1.4 billion in refunds on returns posted to the IMF invalid segment. 
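The average annual growth rates reported above are consistent with the endpoint account totals. A minimal sketch (Python is used purely for illustration) compounds each segment’s total over the 9 years between January 1, 1986, and January 1, 1995:

```python
# Illustrative check (not from the report) of the average annual growth
# rates for the IMF invalid and valid segments between Jan. 1, 1986, and
# Jan. 1, 1995 -- a span of 9 annual changes.
def avg_annual_growth_pct(start_millions: float, end_millions: float, years: int) -> float:
    """Compound average annual growth rate, expressed as a percentage."""
    return ((end_millions / start_millions) ** (1 / years) - 1) * 100

invalid_rate = avg_annual_growth_pct(2.8, 4.3, 9)    # invalid segment
valid_rate = avg_annual_growth_pct(130.2, 153.3, 9)  # valid segment

print(f"invalid segment: about {invalid_rate:.0f}% per year")  # about 5%
print(f"valid segment: about {valid_rate:.0f}% per year")      # about 2%
```

The results, roughly 5 percent and 2 percent, match the figures IRS reported.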
As part of its efforts to combat refund fraud, IRS revised its procedures in January 1995 to require that taxpayers provide documentation to verify their identities. In announcing that IRS would delay refund claims for individuals lacking proper identification numbers, you stated that, consistent with the way financial institutions manage withdrawals of funds, IRS should not permit refunds from the federal treasury without a valid taxpayer identification number. Under the revised procedures, when a taxpayer’s return with a refund request is posted to the IMF invalid segment for the first time, IRS is to freeze the refund and correspond with the taxpayer in an attempt to verify the taxpayer’s identity. Filers with missing or incorrect SSNs who request a refund are to be required to provide a reasonable explanation for the discrepancy and proof of their identity (such as a marriage certificate, birth certificate, earnings statement, or passport) before the refund will be released. The requirement applies to filers whose returns are posted with temporary numbers as well as filers whose returns are posted with incorrect numbers. Once a taxpayer responds satisfactorily to IRS’ verification notice, IRS is to release the refund. Previously, IRS automatically issued refunds to filers with temporary numbers and did not require proof of identity from filers with incorrect numbers before releasing their refunds. IRS uses the CP54B notice to verify taxpayers’ identities before issuing a refund. The current version of the CP54B notice does not reflect IRS’ revised procedures. It does not clearly convey that persons who file with missing or incorrect numbers, including filers who were issued temporary numbers, are required to provide documentation verifying their identities. (Appendix I contains a copy of the CP54B notice annotated to show misleading or potentially confusing sections.) 
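The revised verification flow described above amounts to a simple decision rule: refunds on valid-segment accounts proceed normally, while first-time postings to the invalid segment are frozen until the filer documents his or her identity. The following is a minimal illustrative sketch of that policy; the function and parameter names are hypothetical and do not correspond to any actual IRS system.

```python
# Illustrative model of IRS' January 1995 revised refund-verification
# procedures as described in this report. All names are hypothetical;
# this sketches the described policy, not any real IRS software.

def refund_action(posted_to_invalid_segment: bool,
                  first_posting: bool,
                  identity_documented: bool) -> str:
    """Return the action taken on a refund request under the revised
    procedures: release the refund, or freeze it pending documentation."""
    if not posted_to_invalid_segment:
        # Valid-segment accounts are unaffected by the new procedures.
        return "release"
    if identity_documented:
        # A satisfactory response to the verification notice (e.g., a
        # birth certificate or passport) releases the frozen refund.
        return "release"
    # First posting to the invalid segment, or an unresolved discrepancy:
    # freeze the refund and correspond (CP54B notice) for proof of identity.
    return "freeze and request documentation"
```

The sketch applies equally to filers posted with temporary numbers and with incorrect numbers, mirroring the report's statement that the documentation requirement covers both groups.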
A revised version of the CP54B notice has been developed that reflects IRS’ revised procedures but, as of July 1995, had not been finalized. Until the revised notice is available, IRS National Office officials told us that they plan to use the current version of the notice, followed by additional correspondence if the taxpayer does not respond in accordance with the revised procedures. This practice will increase IRS’ processing costs, create additional taxpayer burden, and delay the issuance of some refunds. IRS expects to send out about 616,000 CP54B notices in 1995. IRS officials said that review and approval of the revised notice was taking longer than expected. As of June 21, 1995, the revision had been approved by the National Office Notice Clarity Unit and was being reviewed by the National Automation Advisory Group. That group is to assign a priority for making the computer programming changes necessary to finalize the notice. If the notice is not assigned the highest priority, we are concerned, on the basis of past work, that it will not be revised in time for use during the 1996 tax-filing season, beginning in January 1996. In December 1994, we reported on the lengthy notice-review process and noted that many recommended notice revisions were delayed or never made because of IRS’ limited computer-programming resources. As one way of avoiding computer-programming delays, we recommended that IRS test the feasibility of transferring notices to its Correspondex System—a more modern computer system that produces other types of IRS correspondence. IRS National Office officials told us that they do not plan to apply the revised procedures to filers with prior accounts on the IMF invalid segment who file again using the same name and number combination. Thus, these filers would not need to verify their identities before receiving future refunds, although the mismatch with SSA records may continue to exist. 
According to IRS data, at least 3.2 million of the 4.3 million accounts on the IMF invalid segment, as of January 1, 1995, will not be subject to the new procedures. Instead, IRS placed a permanent computer code on the accounts so that the system will automatically release future refunds. IRS’ rationale for exempting these accounts from the revised verification procedures is that most of these filers had already responded to a previous CP54B and requiring them to respond again would increase taxpayer burden. But responses to the previous CP54B were done under IRS’ old verification procedures, which, as we noted previously, did not require proof of identity. Thus, IRS has no assurance that the earlier responses were satisfactory. Our analysis of the reasons 58 tax year 1993 returns were posted to the IMF invalid segment with automatic refund release codes raised questions about IRS’ plans. We noted, for example, that 27 of the returns were filed by persons who either used SSNs not issued by SSA or used another individual’s SSN, including 11 filers who used SSNs belonging to children and 5 filers who used SSNs belonging to deceased taxpayers. Under these circumstances, IRS was less certain of filers’ identities than if taxpayers had filed using names and numbers that matched SSA files. Table 1 shows the circumstances under which those 58 returns were posted to the invalid segment of the IMF. Another reason for IRS to reconsider its decision to exclude some filers from the revised procedures is the fraud risk associated with accounts on the IMF invalid segment. Our analysis of 400 refunds of $1,000 or more that were issued to taxpayers whose returns were posted on the IMF invalid segment surfaced only one instance in which a taxpayer appeared to have misstated his wages and withheld taxes. In that instance, a return was filed with a wage and tax statement that had been issued to another person. 
However, there are other ways to get fraudulent refunds besides claiming improper wages and/or withholdings. IRS has developed a profile of high-risk filers that it uses to help identify potentially fraudulent returns. According to that profile, many filers whose returns are posted to the invalid segment of the IMF pose a higher risk of fraud than filers whose returns are posted to the valid segment. For example, IRS has determined that filers claiming the Earned Income Credit (EIC) are more likely to claim fraudulent refunds than those who do not. In April 1995, IRS’ Internal Audit Office reported that returns on the IMF invalid segment are four times more likely than returns on the valid segment (54 percent versus 12 percent, respectively) to include an EIC claim. Internal Audit also noted that 41 percent of the cases identified through September 1994 by IRS’ EIC Unallowable Program were filed with invalid SSNs. In contrast, according to Internal Audit, returns with invalid SSNs represented only 1 percent of the total individual Form 1040 population. Of the unallowable cases closed by IRS, 84 percent with invalid SSNs had EIC amounts reversed, compared with 69 percent with valid SSNs. Of the 100 returns posted to the IMF invalid segment in our sample, 90 claimed the EIC. Also, the filing status claimed on 40 of the returns in our sample matched another characteristic in IRS’ profile of high-risk filers. IRS’ new verification procedures, if applied to filers with pre-1995 accounts on the IMF invalid segment, could help to limit these risks because they would enable IRS to more easily identify filers who attempt to claim duplicate refunds. Under TSM, IRS plans to access account information on taxpayers, using either the primary or secondary SSN. IRS also plans to consolidate existing, separate taxpayer databases into a single database. 
With a single database and the ability to access account information on every taxpayer, IRS would be in a much better position to maintain accurate, up-to-date accounts and respond to taxpayer inquiries. Before IRS can effectively implement its plans, it will have to identify and merge multiple taxpayer accounts on its current files. For example, the current master file structure with its valid and invalid segments allows two or more taxpayers to have accounts under the same SSN, or one taxpayer to have several accounts under different numbers. To begin the clean-up process, IRS mailed out 189,000 letters in December 1994 to taxpayers whose returns were posted to the IMF invalid segment because they used an SSN that had not been issued by SSA. The letter instructed taxpayers to contact SSA to obtain a correct SSN. This effort is only a first step, however, and IRS will need to do much more to clean up the rest of its IMF records. IRS’ clean-up task is further complicated because IRS plans to include secondary filers (generally the spouse on a joint return) in its database. According to IRS data, as of February 1995, IRS had at least 47 million IMF accounts with secondary filers. Presently, IRS does not require that secondary IMF filers verify their identities. One particular complication, according to an IRS official, will involve merging the accounts of taxpayers who are secondary filers on the IMF valid segment and primary filers on the invalid segment. Currently, IRS does not try to merge these accounts. Each year, IRS matches the income claimed by taxpayers with the income reported by third parties on information returns. IRS relies on a taxpayer’s name and SSN, as reported on a tax return and associated information returns, to perform the matches. Discrepancies in reported income are used by IRS to detect underreported income or nonfiling of tax returns. 
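The income-matching idea described above can be sketched in a few lines: third-party income reports are keyed by SSN, returns are compared against them, and returns posted with IRS-only temporary numbers can never match. This is a hedged illustration with hypothetical data fields; the actual matching program is far more involved.

```python
# Minimal sketch of name/SSN income matching as described in this
# report. Field names ("ssn", "income", etc.) are hypothetical.

def match_income(returns, info_returns):
    """Compare income claimed on tax returns with income reported by
    third parties on information returns, keyed on SSN. Temporary
    IRS-assigned numbers never appear on third-party documents, so
    those returns cannot be matched and are set aside."""
    # Total third-party reported income per SSN (e.g., from W-2s, 1099s).
    reported = {}
    for doc in info_returns:
        reported[doc["ssn"]] = reported.get(doc["ssn"], 0) + doc["income"]

    discrepancies, unmatchable = [], []
    for ret in returns:
        if ret.get("temporary_number"):
            # Unique to IRS; cannot be matched against information returns.
            unmatchable.append(ret["ssn"])
            continue
        third_party = reported.get(ret["ssn"], 0)
        if third_party > ret["claimed_income"]:
            # Possible underreported income; flagged for further research.
            discrepancies.append((ret["ssn"], third_party - ret["claimed_income"]))
    return discrepancies, unmatchable
```

As the next paragraphs note, the `unmatchable` set corresponds to returns posted with temporary numbers, and a mismatched (incorrect) SSN can generate false leads when the information returns belong to a different person.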
In most cases, returns posted to the IMF invalid segment with temporary numbers are not available for use in IRS’ matching program. This occurs because temporary numbers are unique to IRS and cannot be matched against taxpayer identifiers on information documents. Omitting these taxpayers from IRS’ matching program hampers efforts to detect underreported income and nonfiling. In addition, posting returns with incorrect SSNs may complicate IRS’ matching program if information returns report income for a different name and/or SSN. Unless IRS is able to make corrections through the additional research it does to check for erroneous mismatches, false leads could be generated that siphon IRS resources away from more productive cases. IRS has developed a proposal that could alleviate some of the problems associated with matching returns posted with temporary numbers. IRS officials told us that many of the returns assigned temporary numbers involved nonresident or illegal aliens who are not eligible to obtain SSNs. Under the proposal, IRS would assign permanent Individual Taxpayer Identification Numbers (ITIN) to these taxpayers, following a process similar to that used by SSA to verify identities and assign SSNs. Taxpayers with ITINs would then be required to use their ITINs when filing tax returns, and their returns could be posted to the valid segment of the IMF. Persons with ITINs would also be encouraged to use their ITINs when engaging in financial transactions that are subject to information reporting. Those who did so would be included in IRS’ matching program. IRS is currently obtaining public comments on a regulation, signed by the Department of the Treasury on March 9, 1995, to implement the ITIN proposal. Since 1986, the number of accounts on the IMF invalid segment has grown faster than the number of accounts on the valid segment. 
IRS risks errors when issuing refunds to filers on the IMF invalid segment because it cannot verify a filer’s identity against SSA records. Moreover, some accounts on the IMF invalid segment cannot be included in IRS’ income-matching program. IRS took steps in 1995 that, when fully implemented, could help reduce the number of accounts on the IMF invalid segment. For example, IRS is doing more to verify the identities of taxpayers who file returns with missing or incorrect SSNs, and it plans to issue permanent identification numbers to taxpayers that could be used in IRS’ matching program. We identified several areas where IRS could make additional improvements. IRS has not finished revising the notice used to verify taxpayer identities, and our past work indicates that the revision process has been lengthy. The current version of the notice does not adequately explain IRS’ revised documentation requirements and is causing additional taxpayer contacts. To reduce taxpayer burden and IRS costs, it is important that the revised notice be available for the 1996 filing season. IRS is not applying its revised documentation requirements to taxpayers whose returns were posted to the IMF invalid segment prior to 1995 and who have a permanent refund release code on their accounts. Our review of accounts posted on the IMF invalid segment that would be exempted under IRS’ plan and IRS’ profile of high-risk filers raises questions about whether IRS should exclude such filers from its revised documentation requirements. Verification of these filers’ accounts should also help complete the cleanup of taxpayer accounts that will be necessary as part of IRS’ modernization. 
To improve the processing of returns with missing or incorrect SSNs and help clean up accounts currently posted on the IMF invalid segment, we recommend that you finalize the CP54B notice in time for use during the 1996 tax-filing season, and apply the revised documentation requirements to taxpayers who filed tax returns that were posted to the IMF invalid segment before 1995 and whose accounts now have a permanent refund release code. We requested comments on a draft of this report from you or your designee. The draft included three proposed recommendations. IRS officials, including the Assistant Commissioner for Taxpayer Services and the Staff Chief for the Director of Submission Processing, provided oral comments in a meeting on June 21, 1995. On the basis of their comments, which are summarized in this section, we modified one of our proposed recommendations and withdrew another. IRS agreed with the other recommendation. Because of the delays inherent in IRS’ current notice-revision process, our draft report included a recommendation that IRS assess the feasibility of producing the CP54B verification notice on the Correspondex System, as discussed in our December 1994 report. The Assistant Commissioner for Taxpayer Services agreed that a revised notice was needed, but she said that the best way to accomplish this is to proceed with the revision process currently under way. She assured us that the revised notice would be available for use during the 1996 filing season. Given the Assistant Commissioner’s assurances, we have revised our recommendation to delete any reference to the use of the Correspondex System. IRS agreed with our recommendation that it apply the revised documentation requirements to the IMF invalid segment accounts with permanent refund release codes. The Staff Chief said that a task force, working in cooperation with internal auditors, is determining the best way to verify accounts placed on the IMF invalid segment before 1995. 
IRS plans to focus on verifying active accounts, which they estimate make up 38 percent of the accounts on the IMF invalid segment. (An account containing a recent tax return, for example, would be considered active.) IRS also plans to remove IMF invalid segment accounts that have been inactive for a certain period, similar to the treatment of accounts on the valid segment. The task force is also working to reverse the permanent refund release code on the IMF invalid segment accounts that were established before 1995. IRS’ actions, if properly implemented, would respond to our recommendation. We also included a proposed recommendation in our draft report that IRS send back to taxpayers returns that are filed with missing SSNs or SSNs that were not issued by SSA. IRS data indicated that it was less costly to send these returns back to taxpayers than it was to post the returns to the master file, send taxpayers a CP54B notice, and process their responses. IRS disagreed with our proposal on the basis that an individual income tax return with a missing SSN or an SSN that was not issued by SSA is considered a valid return under the Internal Revenue Code. Because the return is valid, they asserted that a court would hold that the statute of limitations on assessment and collection would begin when the return was first filed, even though it was returned to the taxpayer because of the invalid condition. Thus, IRS might limit its ability to recover the return from the taxpayer and take any necessary enforcement actions if the process of resolving the invalid condition became lengthy. We considered IRS’ argument persuasive and have withdrawn our proposed recommendation. This report contains recommendations to you. The head of a federal agency is required by 31 U.S.C. 
720 to submit a written statement on actions taken on these recommendations to the Senate Committee on Governmental Affairs and the House Committee on Government Reform and Oversight not later than 60 days after the date of this letter. A written statement also must be sent to the House and Senate Committees on Appropriations with the agency’s first request for appropriations made more than 60 days after the date of this letter. We are sending copies of this report to various congressional committees, the Secretary of the Treasury, the Director of the Office of Management and Budget, and other interested parties. We will also make copies available to others on request. The major contributors to this report are listed in appendix II. If you or your staff have any questions about this report, you can reach me at (202) 512-9110. The following are GAO’s comments on IRS’ Notice CP54B (1994 Version). 1. The wording “REFUND DELAYED” is the only indication at the beginning of the notice that the taxpayer will not be receiving his/her refund and that the refund will be delayed until the taxpayer resolves the discrepancy to IRS’ satisfaction. 2. The notice does not accommodate filers who were issued temporary numbers. It gives instructions on what to do when there are differences in the last name or SSN, but it does not explain what filers with temporary numbers must do to have their refunds released. 3. A taxpayer might presume from the wording in this section that providing the information IRS requests will release the refund, when in fact, the refund would be released only if the new information matches SSA’s records. 4. This section of the notice does not require that a taxpayer send anything back to IRS and, again, does not make it clear that the taxpayer’s refund will not be released until the discrepancy is cleared up. All it says is “If you wish, you may provide IRS with . . . 
.” Service center staff told us that taxpayers are expected to provide this kind of information, and if it is not provided, IRS will correspond again with taxpayers to obtain it. 5. This section has problems similar to those described in comment 4. It does not require that taxpayers send anything to IRS and thus is not clear about how or on what basis IRS will decide to release the refund. Rachel DeMarcus, Assistant General Counsel Shirley A. Jones, Attorney Advisor
GAO reviewed the Internal Revenue Service's (IRS) procedures for processing and posting tax returns with missing or incorrect social security numbers (SSN), focusing on: (1) the growth in IRS individual master file (IMF) accounts with missing or incorrect SSN; (2) IRS procedures for verifying the identities of tax return filers; and (3) the potential effect of these procedures on IRS plans to modernize the tax system and on the income-matching program. GAO found that: (1) the average annual growth rate for invalid IMF accounts was significant from 1986 through 1994; (2) IRS has revised its procedures to require taxpayers with missing or incorrect SSN or temporary numbers to provide documentation that verifies their identity; (3) these revised procedures could help reduce the number of invalid IMF accounts when fully implemented; (4) IRS Tax Systems Modernization plans are complicated because the master file structure allows two or more taxpayers to have accounts under the same number, or one taxpayer to have several accounts under different numbers; (5) the IRS income-matching program is hampered by posting returns to IMF invalid accounts; and (6) IRS plans to assign permanent taxpayer identification numbers to filers who are ineligible to obtain SSN and encourage the use of these numbers on information returns.
DOE has hundreds of nuclear facilities that are managed and operated for its program offices by contractors. DOE nuclear safety requirements define four categories of nuclear facilities based on the significance of their radiological consequences in the event of a nuclear accident. Hazard category 1 nuclear facilities, such as the Advanced Test Reactor at Idaho National Laboratory, have the potential for significant off-site radiological consequences. Hazard category 2 nuclear facilities, such as the Tank Farms at the Hanford Site, have the potential for significant on-site radiological consequences beyond the facility but would be contained within the DOE site. Hazard category 3 nuclear facilities, such as the U-Plant at the Hanford Site, have the potential for radiological consequences at only the immediate area of the facility. The final category is below hazard category 3 nuclear facilities, which are not considered to be high-hazard. The following figures show photographs of each type of high-hazard nuclear facility. DOE nuclear safety requirements stipulate that high-hazard nuclear facilities require special attention by the program offices and their contractors. There are at least 29 DOE rules and directives related to and specifically developed for nuclear safety (see app. II). DOE’s contractors must perform work in accordance with the department’s nuclear safety requirements to ensure adequate protection of workers, the public, and the environment. DOE program offices are responsible for reviewing and approving the safety basis for the design, construction, and operation of high-hazard nuclear facilities and any changes to the safety basis proposed by a contractor. The documentation of the safety basis (1) describes the work to be performed; (2) evaluates all potential hazards and accident conditions; (3) contains appropriate controls, including technical safety requirements; and (4) delineates procedures and practices for operating the facility safely.
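The four hazard categories described earlier in this section are distinguished by how far the radiological consequences of an accident could reach. That classification can be summarized as a small lookup; this is an illustrative sketch only, and the keys and labels are paraphrases of the report's descriptions rather than DOE's formal criteria.

```python
# Hedged summary of the four DOE hazard categories as characterized
# in this report. Keys describe the reach of potential radiological
# consequences; names are illustrative, not official DOE terminology.

HAZARD_CATEGORIES = {
    "off-site": "Hazard Category 1",       # significant off-site consequences
    "on-site": "Hazard Category 2",        # significant consequences beyond the
                                           # facility, contained within the site
    "facility-only": "Hazard Category 3",  # consequences limited to the
                                           # immediate area of the facility
    "none": "Below Hazard Category 3",     # not considered high-hazard
}

def classify(consequence_reach: str) -> str:
    """Map the reach of potential consequences to a hazard category."""
    return HAZARD_CATEGORIES[consequence_reach]
```

Only the first three categories are "high-hazard" and so receive the special program-office and contractor attention the report describes.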
When a contractor discovers an unexpected situation that is not covered by the approved safety basis, DOE policy allows the program offices to grant the contractor the ability to temporarily depart from safety basis requirements to avoid shutting down a facility. In such cases, contractors may submit to DOE a Justification for Continued Operation (JCO) to amend the safety basis and address the unexpected situation. JCOs may include compensatory measures that must be employed until the situation is fully analyzed and addressed. DOE guidance suggests that JCOs should have a predetermined, limited duration, as may be necessary to perform the safety analysis of the unexpected situation, to identify and implement corrective actions, and to update the safety basis documentation on a permanent basis. For example, a contractor recently discovered that a fire door leading to a room that stored nuclear material at Los Alamos National Laboratory was not safe. A JCO was employed, and all material was removed from the room until a new fire door was installed. HSS falls short of fully meeting our five key elements of effective oversight of nuclear safety: independence, technical expertise, ability to perform reviews and require that its findings are addressed, enforcement authority, and public access. First, we found that HSS has no role in reviewing the safety basis for new high-hazard nuclear facilities, has no routine site presence, and its head is not comparable in rank to the program office heads. Second, HSS lacks some technical expertise in nuclear safety review and has vacancies in critical nuclear safety positions. Third, HSS lacks basic information about nuclear facilities, has gaps in its site inspection schedule, and does not routinely ensure that its findings are effectively addressed. Fourth, HSS enforcement actions have not prevented some recurring nuclear safety violations. Finally, HSS restricts public access to nuclear safety information.
To be independent, an oversight organization should be structurally distinct and separate from the DOE program offices to avoid management interference or conflict between program office mission objectives and safety. While HSS is structurally distinct from the program offices, there are other components of independence that this office should possess—identified in past GAO reports—which are essential for HSS to function independently with respect to nuclear safety. These include (1) an independent role in reviewing the safety basis for new nuclear facilities or major modifications of existing facilities that may raise new safety concerns, (2) opportunities for independent observation of site operations on a routine basis, and (3) a head at the same rank as the program heads to independently advocate for nuclear safety. We found limitations in the structure and functions of HSS in each of these areas. HSS has no responsibility for routine review of the safety basis for new high-hazard nuclear facilities or for significant modifications of existing facilities that may raise new safety concerns. Such review is necessary to provide reasonable assurance—independent of the program offices—that the facility can be operated safely in a manner that adequately protects workers, the public, and the environment. As far back as 1981, we reported that the most practical reorganization option for nuclear safety oversight, in lieu of the preferred option of external regulation, was for DOE to establish a strong independent oversight office to mandate adherence to nuclear safety policies and standards. Such an office would guarantee program independence, uniformity, and public confidence in DOE self-regulation. In our 1986 report, we noted that the safety basis approval process was conducted by the program offices at the sites and that this did not represent an independent review process.
In our 1988 report, we not only recommended that the Congress establish an independent oversight organization for DOE’s nuclear defense facilities (that became the Safety Board) but also that the safety and health functions of HSS’s predecessor office, the Office of Environment, Safety and Health, be set in law to firmly establish its nuclear safety oversight responsibilities. In 1995, when DOE was assessing a shift away from self- regulation of nuclear safety, an advisory committee report recommended that in the preferred transition to external regulation, the Office of Environment, Safety and Health should, among other things, have this approval authority and exercise full authority and responsibility to inspect these facilities. Instead, HSS relies on periodic site inspections that assess a sample of the environment, safety, and health programs of a site, including a sample of the documentation supporting the safe operation of any high-hazard nuclear facilities. The Safety Board also performs reviews on defense nuclear facilities, including the design of new facilities, but it does not have a regulatory function. HSS has no staff permanently assigned to DOE sites and thus cannot make routine, independent observations of nuclear safety at them. We found in our 1981 report that having field safety and health personnel solely within the program offices at DOE nuclear facilities did not allow for independent oversight, particularly with respect to overseeing the implementation of nuclear safety policies by the program offices. We recommended that these staff report to an independent oversight office to ensure the proper emphasis on safety and to increase public confidence in the credibility of the department’s oversight. We noted that an on-site presence would permit frequent inspections and offer greater opportunities for day-to-day oversight, advice, and detailed knowledge of facility operations than would periodic site reviews by an independent oversight office. 
HSS primarily relies on periodic site inspections and the monitoring of information provided by program office facility representatives and enforcement coordinators, among other sources of information, to carry out its oversight responsibilities. The head of HSS, as a career professional, does not have the same position or rank as the program office heads from which to independently advocate for nuclear safety. In reporting in 1977 on options to restructure federal nuclear oversight responsibilities, prior to the formation of the DOE, we stressed the need to insulate an independent oversight office from developmental functions of the organization to ensure an independent voice for nuclear safety. Such action would include giving the head of the independent oversight office—appointed by the President and confirmed by the Senate—a specified term in office that would exceed the typical tenure of the head of the organization. In addition, this head should not be removed from office unless incapacitated or guilty of neglect of duty or malfeasance in office. Moreover, this head should have a professional background appropriate for the position, particularly with respect to nuclear safety. We continued to report on the need for such a position description in the 1980s. We found that absent a law establishing the position to head the independent oversight office, in the past, DOE was able to move this position to a lower level within the department—an action that could be considered a reduction in the visibility and attention given to environment, safety, and health issues by senior management, especially when compared with nuclear facility operations. 
In the 1988 report, we recommended that the Department of Energy Organization Act be amended to specifically establish the position of Assistant Secretary for the Office of Environment, Safety and Health in order to institutionalize this key component of DOE self-regulation of nuclear safety; however, this recommendation was never acted upon. Notwithstanding our past recommendations regarding this position, DOE officials have emphasized that the head of HSS has excellent access to the Secretary of Energy and other DOE decision makers and that the authorities of this position are at least equivalent to, and sometimes greater than, those of the head of HSS’s predecessor offices. While this may be the situation in the current Administration, we point out that a future head of HSS may not retain the same level of access to the Secretary of Energy in another Administration. HSS lacks some of the technical expertise, which existed in a predecessor office, needed to help the program offices review the safety basis for high-hazard nuclear facilities. The predecessor Office of Environment, Safety and Health had more than 20 technical experts in nuclear safety fields who provided this service, but they were not transferred to HSS at its formation. Beyond this lost expertise in nuclear safety review, HSS still needs some expertise to fulfill its oversight and enforcement responsibilities. HSS currently has 4 vacancies for nuclear safety specialists to aid in making sound safety assessments. The Office of Independent Oversight is short 2 nuclear safety specialists to fulfill its staffing level of 14 technical experts, and the Office of Enforcement is short 2 such specialists to fulfill its staffing level of 5 technical experts after one vacancy was recently filled.
However, HSS officials told us that these two offices can and do rely on other internal HSS resources, well-qualified and experienced contractors, and program office personnel to help fulfill their responsibilities. HSS has been challenged to fill these vacancies in technical expertise and may be further challenged to address future vacancies with pending retirements from the workforce. HSS officials informed us of some difficulty in filling positions in nuclear safety-related fields, in part because of competition for these specialists from other organizations, such as NRC. In addition, a senior HSS official informed us that about 56 percent of the HSS workforce will be eligible for the early retirement program by the end of fiscal year 2009, but she anticipates that only 5 to 6 percent of the workforce will leave each year for the next several years. HSS plans to use recruitment, realignment, and training mechanisms to fill skills gaps within its approved budget and staffing authorization, and officials from this office told us they are confident they can address their technical staffing needs. Moreover, DOE officials explained that the department has supported HSS's efforts to designate certain nuclear safety specialist positions as critical hires and to maintain an adequate technical resource base, including a judicious balance of federal personnel and contractor support. Nevertheless, concerns about technical capabilities within DOE are long-standing. For example, the Safety Board identified deficiencies in technical expertise as an issue facing all of DOE in its first report to the Congress in 1991, and it remains concerned today, despite the efforts the department has made over the years in this area. Moreover, the DOE Inspector General recently escalated DOE human capital management from its "watch list" to its "challenge list," given the department's aging and smaller workforce.
In commenting on a draft of this report, NRC also noted the well-established human capital challenges associated with constructing, operating, and regulating nuclear facilities. HSS has the authority to conduct, and does conduct, periodic environment, safety, and health program inspections of DOE sites with high-hazard nuclear facilities, but there are several limitations in its review functions. Our survey found that HSS lacks a comprehensive accounting of high-hazard nuclear facilities and the status of the safety bases for these facilities, which could provide additional information with which to direct its oversight activities. Moreover, we found that there have been extended periods of time between inspections of some sites with high-hazard nuclear facilities. Finally, while the program offices must address HSS site appraisal findings and respond to its comments on proposed corrective actions, HSS primarily determines the effectiveness of the actions taken at the time of the next site inspection, which can take years. HSS lacks a comprehensive accounting of nuclear facilities and the status of their safety bases. DOE has extensive safety basis requirements for designing, constructing, and operating high-hazard nuclear facilities, including requirements for how contractors should create and update safety documentation and procedures, and for program office reviews and approvals of the safety bases for the nuclear facilities. While HSS maintains a database that tracks some information on the safety basis status of high-hazard nuclear facilities—the Safety Basis Information System—it relies on the program offices to update facility information. In addition, HSS officials told us that their office is developing procedures for updating the system but has decided not to expend resources on validating information in the database.
We raised concerns in our 1987 report, however, that the independent oversight organization should not be too dependent on program office information for developing its own findings and recommendations. In conducting our own survey of high-hazard nuclear facilities across the DOE complex, we found that the HSS database was out of date, listing more of these facilities than the program offices at the sites indicated to us. We determined that DOE had 205 high-hazard nuclear facilities—2 category 1 facilities, 152 category 2 facilities, 45 category 3 facilities, and 6 that do not fit into one of the hazard categories. We also found that, as of December 2007, 31 of the 205 high-hazard nuclear facilities (about 15 percent) did not have an approved safety basis that meets current nuclear safety requirements. These requirements have been in place since 2001, when DOE required that contractors submit a safety basis for operating each high-hazard nuclear facility to the program offices for approval by April 10, 2003. We found that for 21 high-hazard nuclear facilities, old safety basis documentation had not been updated to current requirements, and for the 10 other facilities, initial safety basis documentation was still under development. HSS is currently not responsible for routinely monitoring the safety basis status of high-hazard nuclear facilities or for ensuring that contractors update safety bases to current requirements in a timely fashion. The Idaho National Laboratory has about half of the high-hazard nuclear facilities that lack an approved safety basis that meets current requirements, and Los Alamos and Argonne national laboratories have several more.
The safety bases for the Idaho National Laboratory nuclear facilities were approved under the previous program office and contractor in 2001, but the new program office and new contractor—which replaced previous management in 2004 and 2005, respectively—found inadequacies in the analyses supporting the previously approved safety basis documentation. The current program office, the Office of Nuclear Energy, is working with the contractor at this laboratory to upgrade the safety bases for these facilities but does not anticipate finishing all upgrades until 2012. In responding to a draft of this report, DOE explained that 2 of the 14 nuclear facilities at this site now have approved, upgraded safety bases, and that the Office of Nuclear Energy has put in place JCOs, along with additional oversight, to address weaknesses in the previous safety bases of the other nuclear facilities until they can be upgraded. Three high-hazard nuclear facilities at the Los Alamos National Laboratory are in a similar condition. For example, the Chemistry and Metallurgy Research Facility at this laboratory is operating under a safety basis established in 1998, although according to DOE, this facility has been subject to almost continuous safety review by both the contractor and the department. According to an October 2007 letter from the Safety Board, operating this facility in its current condition poses significant risk to workers and the public due to a number of serious vulnerabilities, such as the lack of a robust building confinement to prevent the release of radioactivity during an accident. Moreover, an August 31, 2007, staff report to the Safety Board on the design, functionality, and maintenance of safety systems at Los Alamos National Laboratory stated that many of the deficiencies at the Chemistry and Metallurgy Research Facility and other nuclear facilities at this laboratory resulted in part from the lack of modern and compliant safety bases.
Likewise, we found that seven nuclear facilities at Argonne National Laboratory lacked approved safety bases meeting current requirements. According to an official from this laboratory, while there are no obvious risks at these nuclear facilities, several have uncharacterized nuclear waste that has been in storage containers for many years and may pose a risk of explosion or fire. HSS also does not routinely monitor changes to the safety bases of high-hazard nuclear facilities, such as the use of JCOs, which allow facilities to temporarily depart from their safety basis to avoid shutting down operations. The Safety Board and DOE recently raised concerns about JCO usage at defense nuclear facilities. For example, the Safety Board noted in its April 19, 2007, recommendation to DOE that there were a number of weaknesses and deficiencies in the current JCO process, including JCOs that appear to have excessive durations. Moreover, the Safety Board found that the JCO approval process is site-specific and that none of the processes reviewed required the degree of analysis or rigor that would be expected for an important change or revision to the approved safety basis. Our survey found that, as of December 2007, nearly one-third (67 of 205) of the high-hazard nuclear facilities had at least one JCO in place, with an average age of 340 days and an average total expected duration of 930 days. Our survey also found that one JCO has been in effect since March 2003, that the expected end dates for many other JCOs extended out several years into the future, and that DOE officials did not report an expected end date for 27 other JCOs. This does not fully conform to DOE guidance that calls for JCOs to be temporary amendments to the safety basis with a predefined, limited life. In response to the Safety Board's concerns about JCOs, NNSA and the Office of Environmental Management issued informal guidance to the site offices to emphasize that JCOs are not to be used for planned activities.
HSS’s Office of Nuclear Safety, Quality Assurance and Environment has been working with the program offices to review the current guidance on JCOs. DOE officials explained that its internal review found that some aspects of the guidance were sufficient but new guidance on the content and approval of JCOs was warranted. DOE further explained that it is pursuing these improvements. Nevertheless, HSS officials told us that the office is not responsible for routine monitoring of JCO use and instead, reviews the use of JCOs only during periodic inspections of DOE sites. HSS conducts inspections of DOE sites, but there are extended periods of time between inspections at some sites with high-hazard nuclear facilities. The Office of Independent Oversight and its predecessors have conducted periodic inspections at DOE sites that resulted in appraisal reports containing deficiencies requiring program office corrective actions, but there have been lengthy periods of time between inspections of some sites with high-hazard nuclear facilities. We found that the Office of Environment, Safety and Health Evaluations within the Office of Independent Oversight largely met its own internal guidelines to periodically visit sites every 2 to 4 years that are judged to pose relatively high risk of exposure to radiation. However, we found that of the 22 sites that had at least one high-hazard nuclear facility over the last 5 years, 8 were not inspected during this time period. We observed that one of these sites, the Office of River Protection, would be expected to have a site inspection at least every 2 to 2.5 years, according to HSS guidelines. However, in commenting on a draft of this report, DOE indicated that while HSS has not conducted a site inspection at the Office of River Protection since 2001, it did conduct a Type B accident investigation at this site after a 2007 tank farm accident. 
The other four sites are generally supposed to be inspected at least every 3 to 4 years, which was not the case. We suggested in our 1987 report on key elements of effective independent oversight of nuclear facilities that, in the absence of day-to-day oversight, such reviews should be done annually. We found that these periodic reviews are important to maintain a working knowledge of DOE safety issues and to assess program office response to review findings and recommendations. Moreover, we stated that more frequent reporting would allow review staff to develop a better understanding of program operations than reviews conducted on a one-time or sporadic basis. The following table shows the number of environment, safety, and health program inspections from 1995 to 2007 at each DOE site with high-hazard nuclear facilities, although such inspections include just a sample of the nuclear facilities at a site. The number of nuclear facilities listed for each DOE site is the number of hazard category 1, 2, and 3 nuclear facilities at each site, as of December 2007. The number of these facilities is dynamic, as new facilities are constructed or existing nuclear facilities are downgraded to below hazard category 3. The remaining sites in the table did not have any high-hazard nuclear facilities as of December 2007, with the exception of Brookhaven, which officially downgraded its hazard category 3 nuclear facility to below category 3 in April 2008. HSS does not routinely determine the effectiveness of corrective actions until it performs another site inspection, which can take years. The Office of Independent Oversight has the authority to conduct follow-up reviews to determine the status and progress of the corrective actions to address deficiencies identified in its appraisal reports, but in practice, HSS officials informed us that they generally rely on the next site visit to check on the effectiveness of these corrective actions.
We identified five such site-specific follow-up reviews listed in the Office of Environment, Safety and Health Evaluations' database of all appraisal reports since 1995. The time period between inspections of DOE sites, which in practice indicates when the effectiveness of corrective actions can be independently assessed, is shown in table 1. We determined that, since 2000, the Office of Independent Oversight returned on average about every 3 years to the 7 sites with 13 to 38 high-hazard nuclear facilities. For sites with 2 to 7 high-hazard nuclear facilities, the office returned for another site inspection on average about every 6 years. For example, there was a 3-year period between a 2005 site inspection of Los Alamos National Laboratory and the 2008 site inspection before the Office of Independent Oversight reported that corrective actions taken to address some of its findings were not fully effective, as many of the same findings were cited again in the latest report. The Office of Independent Oversight's appraisal program leaves DOE with no routine independent assessment of corrective actions to determine if they are effective and timely in addressing identified deficiencies. The use of HSS enforcement authority has not prevented some recurring nuclear safety violations, despite DOE requirements and Office of Enforcement guidelines to address this problem. The enforcement process under DOE procedural rules for nuclear activities dictates the consideration of factors that can increase the severity of the penalty, such as the duration of the violation, past contractor performance, and multiple examples of similar violations during the same time frame. The Office of Enforcement has put the contractor community on notice that enforcement actions involving recurring issues will generally result in significantly greater civil penalties than would otherwise be the case.
This office has indicated that recurring violations are not acceptable and reflect insufficient management commitment to safety. Based on our analysis, we found that even though HSS has the authority to enforce compliance with nuclear safety requirements, over one-third of the most frequently reported violations of these requirements continue to recur without abatement. We analyzed the number of specific conditions of noncompliance with the nuclear safety requirements that were contained in entries to the Noncompliance Tracking System from 2005 to 2007. Our analysis found that there were 178 different noncompliance conditions reported, or separate violations of the nuclear safety requirements, and that the 25 most frequently cited conditions represented about 67 percent of this total. We determined that 9 of these 25 conditions of noncompliance occurred at the same or higher average frequency in 2007 as they did in 2005, despite an overall decrease in the number of nuclear safety violations over that time period. For example, contractors at some DOE sites repeatedly reported violating the same nuclear safety requirement for "performing the work consistent with technical standards," the most frequently recurring violation across the complex from 2005 to 2007. According to HSS officials, because this is a broad category that encompasses all instances of procedural violations and inadequate procedures, it is not surprising that this violation is cited in the overwhelming majority of the reported violations. Yet, it is a violation that meets DOE's reporting thresholds for safety significance and does in part reflect on the safety culture at these sites. Table 2 shows the number of times this violation was self-reported by contractors at the DOE sites listed from 2005 to 2007. The Office of Enforcement has frequently taken actions at those sites in table 2 that continue to violate this nuclear safety requirement and some others.
As shown in table 3, this office has been active at those sites with the most high-hazard nuclear facilities through the use of notices of violation, enforcement letters, and program reviews. For the sites listed in table 2, the Office of Enforcement has had some type of contact in at least 2 out of the 3 years since 2005. The actual number of notices of violation and enforcement letters levied against contractors for violating DOE's nuclear safety requirements has been relatively small compared to the number of self-reported conditions of noncompliance that are entered into the Noncompliance Tracking System. Our analysis shows that voluntary entries into the tracking system have averaged around 220 per year since 1999, while the combined number of notices of violation and enforcement letters averaged about 12 per year during this time period. The number of entries for nuclear safety violations fell from 235 in 2006 to 164 in 2007, approximately a 30 percent decrease in comparison to the previous 8-year average for nuclear safety violations. Figure 6 shows trends in the combination of notices of violation and enforcement letters with entries into the Noncompliance Tracking System from 1999 to 2007. Events at the Office of River Protection site provide one example of HSS enforcement actions. Several events at this site in 2003 and 2004 led to a March 2005 civil penalty from the Office of Price-Anderson Enforcement of $316,250. In July 2007, the same contractor caused another event, a spill of about 85 gallons of highly radioactive material at a different location at this site. This event resulted in a stop-work order at the tank farms, over $5 million in remediation and corrective action costs, $500,000 in fines from the Washington State Department of Ecology, a $30,800 fine from the U.S. Environmental Protection Agency, and a $500,000 contract fee reduction from DOE.
A subsequent HSS accident investigation identified five issues related to the 2007 accident that were essentially the same as deficiencies the Office of Price-Anderson Enforcement identified in the 2005 notice of violation to the contractor. In June 2008, the Office of Enforcement fined the contractor $302,500 for the July 2007 accident. HSS officials told us that the safety performance of this contractor was a factor in DOE recently selecting a different contractor to manage and operate this site. In a recent NRC report on DOE regulatory processes at the Hanford Waste Treatment Plant, NRC also pointed out recurring problems at the Office of River Protection site but with a different contractor. NRC found that recurring issues led to two enforcement actions and a 2008 notice of investigation. NRC stated that this could be indicative of program implementation issues in 2003 or 2004 that were not fully addressed and resolved as of 2008. NRC concluded that actions by the Office of Price-Anderson Enforcement and other underlying issues indicate that significant safety program and quality assurance functions, such as controls on noncompliance conditions and corrective actions, were not effective over an extended period of time. HSS currently restricts public access to some nuclear safety information that might be important to surrounding communities and other interested parties. We found that there were public access restrictions on reviewing the Office of Independent Oversight appraisal reports. Officials from this office informed us that access is generally restricted to DOE, contractor, and federal officials who can show a need to see this information. While the public can access information on the activities of the Office of Enforcement, the public does not have ready access to certain databases, such as the Noncompliance Tracking System.
HSS officials informed us that interested members of the public can review pertinent entries in this database through the congressionally mandated public reading room, but only after an investigation is closed. In addition to these restrictions, neither office has a fully transparent decision-making process for selecting sites to inspect, although both publish procedures for undertaking their investigations. In contrast, the public has access to Safety Board technical reports, letters, recommendations, and DOE's actions in response to the board's findings. Moreover, the weekly reports of the Safety Board site representatives, covering their day-to-day observations of nuclear operations at selected DOE sites, are also made available to the public. In addition, the Safety Board publishes an annual performance plan that explains how it chooses what to review and provides a detailed listing of planned reviews. The shortcomings we identified in HSS with respect to the elements of effective independent oversight of nuclear safety are largely attributable to reductions in its responsibilities and resources and in those of its predecessors. DOE took these actions to support the program offices, where it deemed these responsibilities and resources more appropriately reside. More specifically, DOE reduced the role of these offices in nuclear safety oversight largely to avoid redundancy and to improve relations with the program offices. Similarly, technical expertise has been transferred to the program offices to strengthen their oversight capabilities. Moreover, limitations in HSS review functions substantially stem from the program offices taking primary responsibility for most aspects of the nuclear safety review process. In addition, HSS has not taken primary responsibility for preventing recurring nuclear safety violations because DOE views its role as secondary to the program offices.
Finally, HSS limits public access to nuclear safety information because it is concerned about security and possible counterproductive contractor and program office behavior. DOE has reduced the role of HSS and its predecessors in providing independent nuclear safety oversight, largely to avoid redundancy and to improve relations with the program offices. DOE began reducing the role of its independent oversight office with respect to nuclear safety after giving it significant responsibilities in the mid-1980s. In 1985, DOE restructured the Office of Environment, Safety and Health to give it more oversight tools and to integrate it into the operations of the department at all levels. For example, the Secretary of Energy at the time gave this office the authority to shut down any nuclear facility that presented a clear and present danger, as well as the authority to approve, concurrently with the program offices, the safety bases for new nuclear facilities and modifications at existing nuclear facilities. However, in the late 1980s, DOE created a separate office reporting to the Secretary of Energy, the Office of Nuclear Safety, and gave it the authority for routine review of the safety bases for defense nuclear facilities. The Office of Environment, Safety and Health was assigned the role of assisting the program offices in their reviews but had only three staff members assigned to this task. When the Office of Nuclear Safety was shifted into the Office of Environment, Safety and Health in 1993, its responsibilities for routine review of the safety bases for defense nuclear facilities did not transfer. The transferred technical personnel, now in the Office of Environment, Safety and Health, were given the responsibility for providing assistance to the program offices if requested or as directed by the Secretary of Energy.
DOE has also eliminated the on-site presence for its independent oversight offices, in part to reduce redundancies with program office personnel at the sites. The site representative program for DOE's independent oversight office began in 1988, when the Office of Environment, Safety and Health decided to place its own representatives at four DOE sites. According to the then Deputy Assistant Secretary for this office, the site representatives provided valuable day-to-day observations of nuclear operations at these sites. For example, he told us that within months of their placement at the sites, site representatives located at the Savannah River and Rocky Flats sites documented safety problems that this official used to convince the Secretary and Under Secretary of Energy, as well as the pertinent program office, that a temporary shutdown of some nuclear production facilities at these sites was warranted. These facilities were shut down, and he informed us that, at the time, his office had the authority to review and approve restarting them. In 1990, the next Secretary of Energy moved the four site representative positions into the newly created Office of Nuclear Safety, which was given authority to routinely review the safety bases of defense nuclear facilities. The first head of the Office of Nuclear Safety immediately doubled the number of site representatives at the four sites. He informed us that these representatives were very effective and well trained and that the program offices and contractors did not like having them around. In 1993, the next Secretary of Energy merged the Office of Nuclear Safety into the existing Office of Environment, Safety and Health. In 1994, the site representative program peaked at 32 representatives at nine sites, although not all of them focused on nuclear safety. However, by 1999, DOE had reduced the program to 19 site representatives at seven sites.
DOE shifted its position on the need for a site presence for its independent oversight office in 1999. At this time, a senior DOE official told us that the department began to view the independent site representatives as redundant and less effective in their oversight than the program office facility representatives, positions created in the early 1990s to provide independent assessments of safety to the site office managers. Moreover, HSS officials informed us that the unstated reasons behind the decision to eliminate a site presence for the Office of Environment, Safety and Health were that the site representatives no longer provided substantial value, there were significant difficulties in managing them from headquarters, the program offices began to complain about variability in their technical qualifications, and the contractors complained about getting conflicting directions. Following a 1999 comprehensive organizational review of the authorities and responsibilities of the Office of Environment, Safety and Health, DOE determined that its dual role as regulator and a resource for technical assistance was problematic. This finding led to the elimination of a site presence for the Office of Environment, Safety and Health. DOE decided instead to build up its facility representative and safety system oversight programs within the program offices. For example, at the Savannah River Site, DOE explained that there are now 30 facility representatives and 15 safety system oversight engineers. In addition, to compensate for the loss of this site presence, DOE decided that the Office of Environment, Safety and Health should increase the frequency of its periodic site inspections. Finally, DOE put a career professional in charge of HSS, instead of a Senate-confirmed appointee, for several reasons, including a desire to improve relations with the program offices. 
In forming HSS, DOE determined that the head of HSS needed to ensure that the office had a clear mission and priorities, worked constructively with program offices, was accountable for performance, and provided value to the department. Moreover, HSS officials told us that this decision was based on the belief that a career professional would be more effective in maintaining corporate memory through the changes in administration, particularly with respect to the time necessary to sustain nuclear safety improvements. In addition, they told us that a career professional is less beholden to a political appointee and less apt to shade the oversight results to reflect well on the current administration. We observe that some of this justification for a career professional is in line with the position description we previously suggested to head the independent oversight office, except that the current position is not Senate-confirmed. Technical expertise has been transferred to the program offices to strengthen their oversight capabilities. In forming HSS, DOE decided in large part to transfer more than 20 technical nuclear safety-related positions from the Office of Environment, Safety and Health—which had supported the safety bases reviews of the program offices—to these program offices to strengthen their review capabilities. DOE determined that while the program offices had gradually acquired more responsibilities and accountability for the review of the safety bases for high-hazard nuclear facilities, most of this resided at the site offices and not headquarters. Responding to the 2004-1 Recommendation of the Safety Board, DOE decided to establish the Central Technical Authority within the program offices at headquarters in order to provide additional awareness and assessment capabilities for monitoring site operations with potential for high-consequence events, such as nuclear facilities and operations. 
The Safety Board letter noted, among other things, that there had been a reduction in central oversight of safety. DOE officials explained that the positions that were established to provide the review capabilities of the Office of Environment, Safety and Health were transferred to support the technical expertise needed by the Chief, Defense Nuclear Safety for NNSA and Chief, Nuclear Safety for the program offices at headquarters. These chiefs head small groups of technical experts that provide the operational awareness needed by the Central Technical Authority—the three Under Secretaries of Energy—to oversee implementation of nuclear safety by the program offices at the sites. This operational awareness is gained by having these technical staff monitor reports and performance metrics, review site-specific and DOE complex-wide technical and safety documents, and conduct site visits. The Safety Board has accepted DOE’s approach to increasing central oversight of nuclear safety through this authority. Limitations in HSS review functions substantially stem from the program offices taking primary responsibility for most aspects of the nuclear safety review process. HSS officials acknowledge some limitations in their review functions against our elements of independent oversight but generally point to them as being program office responsibilities. For example, they acknowledge that the information in the Safety Basis Information System is not current and may have some inaccuracies, but they do not take responsibility for monitoring this system or validating the information on the safety basis status of nuclear facilities entered by the program offices. The number of high-hazard nuclear facilities without a safety basis meeting requirements set forth in 2001, which our survey found, is similar to a situation we identified in the early 1980s. 
We reported in 1981 and 1983 that some nuclear facilities were operating without approved safety basis documentation, despite a 1976 agencywide requirement. Moreover, we found that although the contractors had completed draft safety basis documentation for their high-hazard nuclear facilities 4 to 5 years earlier, DOE had yet to approve them because it did not give this effort enough priority. In 1985, the Office of Environment, Safety and Health was given the responsibility for updating the status of major nuclear facilities across the DOE complex. HSS officials explained that currently neither they nor the program offices use the Safety Basis Information System, as it was only put in place to allow the public to monitor DOE progress in upgrading high-hazard nuclear facilities to meet current safety basis requirements. Instead, they use other mechanisms, including accident reports, noncompliance tracking, Safety Board reports, program office reviews, and periodic site inspections. In addition, HSS has not been given responsibility for ensuring the program offices bring the safety basis for high-hazard nuclear facilities into compliance with current requirements. Moreover, in commenting on a draft of our report, DOE stated that the new safety basis requirements envisioned a transition period for upgrading high-hazard nuclear facilities, so some delay is acceptable. Further, DOE stated that for some facilities that are scheduled for decommissioning, upgrading the safety basis may be an unwarranted expenditure of resources that provides little additional safety. However, updating the safety bases of these nuclear facilities is now 5 years past the 2003 deadline, and the process of decommissioning facilities can heighten safety risks. HSS officials acknowledge that while there are gaps in meeting inspection frequency goals as defined in the appraisal process protocols, many of these gaps reflect justifiable delays or are otherwise allowed under the protocols.
Office of Independent Oversight officials told us that staff have sometimes been shifted away from scheduled inspections when higher-priority, unanticipated concerns arise, such as an accident investigation. In other situations, they told us that some sites were not inspected on schedule because these sites were in shutdown condition and a visit at the scheduled time interval would not have been useful. In addition, the site inspection protocols allow for less frequent visits to those sites that are determined to have effective self-assessment programs and acceptable ratings from past inspections. Finally, these officials told us that the Office of Independent Oversight does not want to return to a site too frequently because the program offices and contractors have complained about being overburdened with inspections, primarily their own. In addition, DOE officials told us that the technical staff supporting each Central Technical Authority are also expected to conduct comprehensive reviews of each site on a nominal 2-year cycle. HSS officials also acknowledge that they are not routinely involved in assessing the effectiveness of the corrective actions taken by the program offices and their contractors in response to the appraisal findings because this is considered primarily a program office responsibility. According to an Office of Independent Oversight official, staff resources are better used to conduct new site inspections than to conduct separate follow-up reviews to determine if the corrective actions effectively addressed findings from prior inspections. Nevertheless, we observe that in this area and other aspects of safety basis reviews, primary reliance on the program offices to conduct these activities can raise questions of conflict of interest. NRC raised some concerns about reliance on program office oversight in its recent report on DOE regulatory processes at the Hanford Waste Treatment Plant.
NRC found that DOE focuses its oversight program on ownership responsibilities rather than on nuclear safety requirements. Moreover, NRC found that because of dual roles and responsibilities and lack of independence of the oversight organization and staff—that is, in the Office of River Protection—oversight by this program office would not be considered equivalent to NRC's inspection program. For example, NRC stated that DOE's audit and assessment program was not effective in identifying issues with the safety program and quality assurance functions, determining the extent of conditions, and resolving issues. In addition, NRC determined that because the program office staff had both regulator and owner responsibilities, the effective staff time devoted to reviewing nuclear safety was less than NRC would apply in regulating a similar facility. Despite the issues identified by NRC with DOE's regulatory processes at this high-hazard nuclear facility, NRC concluded that the DOE program, if properly implemented, is adequate to ensure protection of public health and safety at this DOE site. Nevertheless, NRC followed this conclusion with suggestions that DOE evaluate how to improve implementation of its requirements and the transparency of its decisions, and also explore ways to gain and maintain more independence between its regulatory oversight and project management functions. HSS has not taken primary responsibility for preventing recurring nuclear safety violations because DOE views its role as secondary to that of the program offices. HSS officials acknowledge that there is clearly room for improvement across the DOE complex with respect to recurring safety events and nuclear safety deficiencies. Officials from the Office of Enforcement told us that while addressing recurring violations is an office priority, the responsibility for preventing the recurrence of nuclear safety events extends to a number of organizations within the contractor and program offices.
According to these officials, the inability to eliminate recurring violations is not solely attributable to the enforcement program, as this is primarily a program office responsibility. The program offices can and do use contractual mechanisms to penalize contractors for poor nuclear safety performance, as well as to encourage improved performance. These mechanisms include assessment reports that dictate that a problem needs correction, show cause letters, stop work directions, conditional payment of fee actions, and contract termination. For example, HSS officials told us that since 2005, the Office of Environmental Management has exercised conditional payment of fee actions 10 times to penalize contractors for poor safety performance. While an evaluation of these mechanisms is outside the scope of this review, we pointed out in a 1999 report that shortcomings in the implementation of performance-based contracting by the program offices—as an important mechanism to encourage compliance with nuclear safety requirements—have limited the department's ability to hold contractors accountable for safe nuclear practices. We therefore recommended approaches to strengthen the enforcement program at that time. More recently, officials from the Office of Enforcement told us the office has escalated enforcement actions, where appropriate, including the penalty level, and has strongly encouraged contractors to perform more thorough root cause analyses of recurring violations. These officials also informed us that HSS plans to continue to help the program offices identify causes of recurrent violations through various means, both on specific enforcement actions, such as through corrective actions, and on a program-wide basis, such as by sharing lessons learned with enforcement coordinators at conferences and other venues.
While there are few enforcement actions taken against DOE contractors each year compared to the number of reported nuclear safety violations, Office of Enforcement officials told us that they take every action required against contractors that have significant nuclear safety violations and that they have the technical resources to do so. Significant violations would include those with potential nuclear safety impact, a history of similar violations by the contractor, or the presence of negligent or malevolent intent, among other factors. In addition, these officials told us that the number of notices of violation and enforcement letters issued over the last 2 years is not unusually low and that variation from year to year is normal. They attributed the recent decline in the number of entries into the Noncompliance Tracking System to the hesitancy of some contractors to report violations and also to new responsibilities for reporting worker safety and health noncompliance conditions. These officials indicated to us that they have notified the contractors and program offices of this trend and that they plan to initiate two program reviews in 2008 of contractors that could be underreporting violations. NRC found in its review of DOE regulatory processes at the Hanford Waste Treatment Plant that there were some similarities and differences between the two enforcement programs. NRC reported that DOE's enforcement requirements, guidance, and procedures contain many features that appear similar to the NRC enforcement process. For example, NRC also emphasizes the importance of its licensees identifying issues and implementing effective and complete corrective actions. However, NRC's enforcement process is usually initiated by its inspectors during routine inspections, when potential violations are normally noted and discussed with the licensee at the time or shortly thereafter.
In contrast, HSS’s Office of Enforcement has no presence at DOE sites to conduct independent routine inspections of specific facilities or programs for violations of the nuclear safety requirements, and its enforcement process takes a long time in comparison to NRC. NRC also noted differences in the threshold for taking an enforcement action—NRC has a low threshold for the significance of an event warranting an enforcement action compared to the consistently high threshold used by HSS. HSS limits public access to nuclear safety information because it is concerned about security and possible counterproductive contractor and program office behavior. HSS officials acknowledge that they have restricted public access to Office of Independent Oversight appraisal reports but that this was done for national security reasons after the terrorist attacks on September 11, 2001. However, HSS officials told us in May 2008 that the office is considering allowing public access to the Office of Independent Oversight’s Web site for unclassified appraisal reports. HSS has also restricted access to the data and processes it uses for various reasons. For example, Office of Enforcement officials informed us that information contained in the Noncompliance Tracking System is considered predecisional information that has the potential to lead to a federal investigation, and on that basis, it is inappropriate to make it publicly available. In addition, they informed us that the forms and specific written description of the Office of Enforcement’s screening process have not been made publicly available but that they have discussed this process with the program offices and contractor community. They have not disclosed more because they are concerned that this might limit enforcement flexibility and provide an opportunity for contractors to slant reported noncompliance conditions in a way that affects the outcome of the screen, without providing a commensurate benefit. 
We were also told that this screening process is not shared with the program offices, including the program office enforcement coordinators at the sites. DOE's ability to effectively self-regulate its high-hazard nuclear facilities depends not only on vigorous oversight of contractors by the program offices, but also on active oversight of the contractors and program offices by an internal independent oversight office with no program responsibilities. Nearly all of the shortcomings in HSS with respect to our elements of effective independent oversight of nuclear safety are primarily attributable to DOE's desire to strengthen the oversight of the program offices by concentrating the necessary responsibilities and technical resources within them. In part, this has been accomplished by removing some important nuclear safety oversight responsibilities and technical resources from HSS and its predecessors. Essentially, DOE's approach to self-regulation rests on the assumption that personnel within the program offices can overcome any conflicts of interest in achieving program objectives while ensuring safety and that the current level of independent oversight and enforcement of nuclear safety by HSS is appropriate. In forming HSS, DOE decided to focus this office on providing the program offices with the assistance and tools necessary to solve problems and to improve performance, so that DOE sites can better accomplish the department's missions and strategic goals. This is not the first time that DOE has altered the role of its independent oversight office with respect to nuclear safety. Over the years, DOE has been able to change this role because the responsibilities and authorities of this office with respect to nuclear safety are not set in law. In our view, DOE needs to strengthen HSS as an independent regulator of nuclear safety within its self-regulation approach.
Using our elements of effective independent oversight, along with supporting criteria from our past work and current HSS guidance, we have concluded that HSS needs more direct awareness of site operations, greater involvement in facility safety basis reviews and monitoring, and stronger enforcement actions to address recurring violations of nuclear safety requirements. We believe that increasing HSS's involvement in nuclear safety could increase public confidence that DOE can continue to self-regulate its high-hazard nuclear facilities and decrease the likelihood of a low-probability but high-consequence nuclear accident. In the August 2008 NRC report on DOE's regulatory processes for the Hanford Waste Treatment Plant, NRC concluded that DOE's program, if properly implemented, is adequate to ensure protection of public health and safety at this DOE site. However, NRC also suggested that DOE evaluate how to improve implementation of its requirements and the transparency of its decisions and explore ways to gain and maintain more independence between its regulatory oversight and project management functions. We believe that strengthening HSS's role in overseeing nuclear facilities and operations, and establishing HSS responsibilities in law if necessary, would do more to gain and maintain independence between these functions than would any procedural changes within the program offices. We recommend that the Secretary of Energy take actions to strengthen HSS's independent oversight of nuclear safety. Such actions would include giving HSS the responsibilities, technical resources, and policy guidance necessary to

1. review the safety basis for new nuclear facilities and significant modifications to existing facilities to ensure there are no safety concerns;
2. monitor the safety basis status of high-hazard nuclear facilities and ensure that all such facilities operate under current nuclear safety requirements, including the appropriate use of Justifications for Continued Operations;
3. increase a presence at DOE sites with nuclear facilities to provide more frequent observations of nuclear safety, provide more independent information to facilitate any necessary enforcement actions, and provide more routine monitoring of the effectiveness of corrective actions taken in response to HSS findings of deficiency;
4. ensure that enforcement actions are strengthened to prevent recurring violations of the nuclear safety requirements; and
5. establish public access to unclassified appraisal reports.

If the Secretary of Energy does not take appropriate actions on our recommendations, the Congress should consider permanently establishing in law the responsibilities of HSS as noted above with respect to nuclear safety or shifting DOE to external regulation by

1. providing the resources and authority to the Safety Board to oversee all DOE nuclear facilities and to enforce DOE nuclear safety rules and directives, or
2. providing the resources and authority to NRC to externally regulate all or just the newly constructed DOE nuclear facilities.

DOE, the Safety Board, and NRC provided written comments on a draft of this report, which are reprinted in appendixes VI, VII, and VIII, respectively. Each agency also provided detailed comments that we incorporated, as appropriate. More detailed comments on DOE's letter appear in appendix VI. DOE stated that the draft report was fundamentally flawed and disagreed with many of the report's conclusions, while in its detailed comments DOE generally agreed with three of our five recommendations. According to DOE, the report was flawed because it evaluated HSS against GAO's preconceived opinion of functions that should be assigned to HSS.
As the report noted, the objectives of our review were focused on whether the structure and functions of HSS allow it to provide effective independent oversight of nuclear safety with respect to our elements of effectiveness. Our review was not intended to be a comprehensive assessment of safety management across the entire department. DOE rejected two of our recommendations. Specifically, DOE disagreed with our recommendations to strengthen independent oversight by giving HSS responsibilities and sufficient technical resources to (1) review and concur on the safety basis for new nuclear facilities and significant modifications to existing facilities that might raise new safety concerns and (2) maintain a presence at DOE sites with nuclear facilities to provide day-to-day observations on nuclear safety, provide information to facilitate any necessary enforcement actions, and monitor the effectiveness of corrective actions taken in response to HSS findings of deficiency. Regarding the first recommendation concerning review and concurrence by HSS on the safety basis for high-hazard nuclear facilities, we believe that this is an appropriate function for an independent oversight office within DOE's approach to self-regulation. Even DOE's advisory committee on external regulation reported in 1995 that the independent oversight office should be granted this responsibility and authority in the transition to external regulation by NRC. The Safety Board also has independent review responsibilities for the safety bases for nuclear facilities and authority to force DOE to respond to its assessments. An HSS predecessor office had the technical expertise to perform these reviews—now transferred to the program offices at headquarters—and, as DOE explains, HSS still retains significant expertise to conduct such reviews, which it currently uses on a periodic basis through its site inspection program.
We did, however, alter this recommendation to remove the need for HSS to concur with the safety basis in order to provide DOE with increased flexibility in using HSS in this review process. Regarding the second recommendation that HSS maintain a presence at DOE sites with high-hazard nuclear facilities, we believe that this is consistent with our previous recommendations and that it is an essential component of a nuclear safety oversight organization that is supposed to function independently from the program offices, which have both safety and mission responsibilities. We did, however, alter this recommendation to state that HSS should increase its presence at DOE sites, rather than stipulate that it maintain a day-to-day presence. DOE stated that implementing these two recommendations would be expensive, redundant, and counterproductive to continuous improvement in nuclear safety, citing past experiences but offering no supporting analysis of impacts. DOE could implement these two recommendations in a variety of ways that could be economical and efficient. For example, regarding review of nuclear facility safety bases, DOE could rely on the existing expertise within HSS to conduct these reviews, or it could shift technical staff from the nuclear safety oversight units within the program offices at headquarters (Central Technical Authority) into HSS. As for an HSS site presence, DOE could have this office perform more frequent and efficient site inspections or assign a minimal number of staff to sites with higher numbers of high-hazard nuclear facilities in order to promote greater awareness of site operations and to follow up on oversight findings and enforcement actions. In addition, DOE raised questions about the credibility of our evaluation; these questions centered on three primary areas.
First, DOE commented that by focusing on HSS's responsibilities in isolation rather than as one element of DOE's approach to nuclear safety, the draft report appeared to be based on the incorrect premise that DOE program and site offices are inherently ineffective and that all DOE oversight must be performed by HSS. Second, DOE stated that the draft report lacked balance and selectively quoted information out of context. Third, DOE stated that the draft report drew erroneous conclusions based on an incomplete understanding of HSS's mission and was oversimplified because it was developed by individuals with limited expertise in nuclear safety and limited familiarity with DOE's approach to nuclear safety. We disagree with these contentions. First, the objectives of our review were focused on whether the structure and functions of HSS allow it to provide effective independent oversight of nuclear safety. Our review was not intended to be a comprehensive assessment of safety management across the entire department. HSS is a critical component of DOE's self-regulation approach because it is the only DOE safety office intended to be independent of the program offices, which carry out the department's mission responsibilities. Contrary to DOE's assertion, we do not believe, nor did our draft report state, that DOE program offices are inherently ineffective or that all DOE oversight must be performed by HSS. Our draft report clearly noted that DOE's ability to effectively self-regulate its high-hazard nuclear facilities depends on vigorous oversight of contractors by the program offices. However, we do believe that the program offices inherently lack independence and require oversight by an independent office with no program responsibilities. The concept of independent oversight is at the heart of our report. In any program subject to safety regulation, the regulated entity is ultimately responsible for ensuring safety. This fact does not diminish the need for independent oversight.
DOE program offices face competing and often conflicting goals of maximizing project performance and minimizing cost. The steps necessary to ensure safety and to independently validate these steps can run counter to achieving mission objectives. For example, in its comments, DOE cites the Facility Representative Program, which is managed by the program offices and provides an on-site presence at DOE nuclear facilities, as a more extensive and more effective program than existed with HSS predecessor offices. However, the facility representatives have other responsibilities beyond safety, namely helping to ensure that program goals are achieved in a cost-effective manner. While the program offices will always have a critical role in ensuring safety and the usefulness of the Facility Representative Program is not in dispute, these activities are not a substitute for oversight by an office that is focused solely on safety and is independent from other mission responsibilities. Second, we also disagree with DOE's comment that the draft report lacked balance and selectively quoted information out of context. For example, contrary to DOE's claim, we detailed why DOE eliminated the independent site representative program, both in the Results in Brief section and in the body of the report. Moreover, in our discussion of NRC's review of DOE regulatory processes at its Hanford Waste Treatment Plant, which DOE cites as an example of selective quotation, we provided examples of both positive and negative findings by NRC. Specifically, we noted that NRC reported that DOE's enforcement requirements, guidance, and procedures contain many features that appear similar to the NRC enforcement process. To address DOE's concerns, we have added NRC's conclusion that, if properly implemented, DOE's program is adequate to ensure protection of public health and safety.
However, this does not negate NRC’s suggestion following its conclusions that DOE should explore ways to ensure its regulatory oversight is independent from its project management functions. Third, we disagree with DOE’s comments that the draft report draws erroneous conclusions based upon an incomplete understanding of HSS’s mission and that the report was oversimplified because of limited expertise with DOE’s approach to nuclear safety. Our draft report discussed HSS’s different functions and had extensive detail on the nuclear safety related functions of HSS’s Office of Enforcement; Office of Independent Oversight; Office of Environment, Safety, and Health Evaluations; and Office of Corporate Safety Analysis. DOE illustrates what it calls our lack of complete understanding of HSS’s mission by stating that we did not address the attention HSS has given to problems at the Office of River Protection. We specifically discussed the number of inspections at this site relative to other sites. We also discussed the number of enforcement actions and gave several examples. The point of our assessment was that this site has not received the inspections it should have based on HSS guidance and that the enforcement actions by HSS have not reduced the incidence of certain recurring violations of the nuclear safety requirements by contractors at this site. DOE also asserts that the draft report fails to acknowledge the wide variation in the type and status of DOE’s nuclear facilities and therefore incorrectly reports that there are significant gaps in HSS inspections of DOE nuclear sites. DOE further states that nuclear safety professionals would recognize that there are valid reasons why little value would be gained from inspecting certain sites, including sites where cleanup is essentially complete. 
Our draft report clearly noted in several places that there are a number of sites, including DOE's Fernald, Miamisburg/Mound, and Rocky Flats sites, that have largely completed cleanup activities and have no remaining high-hazard nuclear facilities. Our discussion of inspection gaps was focused on those sites that have or had high-hazard nuclear facilities. While we agree that there may be valid reasons for concluding that inspecting certain sites would result in little value, it is important to note that HSS's own policy requires inspections every 2 to 4 years at high-hazard facilities. Of the 22 sites that had at least one high-hazard nuclear facility over the last 5 years, 8 were not inspected in the required time frame. One site, Hanford's Office of River Protection, has received a site inspection only once since 1995, despite having four operating nuclear facilities. Even DOE's Rocky Flats site—which was undergoing cleanup activities at the time of the inspections—received three times as many reviews. If little value is gained from inspecting sites where cleanup is under way, we question why HSS reviewed that site three times as often as a site with operational nuclear facilities. Finally, we disagree with DOE's comment that the draft report was developed by individuals with limited expertise in nuclear safety and limited familiarity with DOE's approach to nuclear safety. As our draft report noted, GAO began reporting on independent oversight within DOE in 1977. Over the ensuing years, we have produced dozens of reports examining nuclear safety and security issues at both DOE and NRC. Collectively, the GAO staff responsible for the draft report possess decades of experience examining DOE and NRC management of their programs, nuclear safety and security, and regulatory issues. The criteria we used to evaluate HSS are based on a long history of reviewing nuclear safety at DOE and supporting independent oversight and on discussions with outside nuclear safety experts.
The Safety Board did not comment on our recommendations but wrote that the basic structure and authorities of the existing safety oversight organizations, including the board, provide a satisfactory framework for this function at those facilities under the board’s jurisdiction. The Safety Board urged that the draft report be amended to emphasize that its statutory powers constitute action-forcing authority that is, in part, reflected by DOE accepting and acting upon all of the 50 recommendations that it has issued. However, as noted in appendix V, there has been a decline in the number of Safety Board recommendations over the years, some past deficiencies addressed by recommendations still remain unresolved, and the pace of closing out many other recommendations has been slow. This raises questions about DOE’s responsiveness to the board’s recommendations. Nevertheless, we revised the report to address the board’s concerns and made other changes, as appropriate. NRC did not comment on our recommendations but instead provided one general comment and other suggested changes to clarify the text related to our citing information from various reports, particularly the most recent report on its review of DOE regulatory processes at the Hanford Waste Treatment Plant. As a general comment, NRC wrote that the current commission has not expressed a view on expanding its oversight role beyond the DOE facilities already subject to NRC regulation. We incorporated other suggested changes where appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Energy, the Chairman of the Defense Nuclear Facilities Safety Board, and the Chairman of NRC. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. 
If you or your staffs have any questions about this report, please contact me at (202) 512-3841 or aloisee@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made contributions to this report are listed in appendix IX. In our review, we examined (1) the extent to which the Office of Health, Safety and Security (HSS) meets the elements of effective independent nuclear safety oversight and (2) the factors contributing to any identified shortcomings with respect to these five elements. To conduct our review, we examined HSS's structure and functions and those of its predecessor offices—principally the former Office of Environment, Safety and Health and the Office of Safety and Security Performance Assurance—solely with respect to our elements of effective independent oversight of nuclear safety. We included in this review two HSS predecessor offices because HSS began operation in October 2006. We relied on criteria we developed in a 1987 report that reviewed legislation to establish the Defense Nuclear Facilities Safety Board (Safety Board), with the addition of enforcement authority, which was given to the Department of Energy (DOE) around the same time as the formation of the Safety Board. In some cases, we further defined these elements with recommendations from our past reports, with HSS guidance, and through discussions with outside nuclear safety experts. To examine the extent to which HSS, as currently structured, meets the elements of effective independent nuclear safety oversight, we assessed the oversight and enforcement practices of HSS and its predecessor offices against our criteria for (1) independence; (2) technical expertise; (3) ability to perform reviews and have findings effectively addressed; (4) enforcement; and (5) public access to facility information.
To conduct this assessment, we reviewed relevant DOE rules and directives; met with headquarters program office managers and HSS officials to discuss current and past oversight practices; collected and analyzed information obtained from documents and interviews with these officials and at the Oak Ridge National Laboratory and Y-12 National Security Complex, as well as the Office of River Protection and the Richland Office at the Hanford Site; and reviewed the database of HSS environment, safety, and health program inspection reports and enforcement activities. We assessed data on contractor self-reported violations of the nuclear safety requirements entered into the Noncompliance Tracking System, which we determined were sufficiently reliable for the purposes of this report, and safety basis information from a GAO-administered Web-based survey. Although DOE has the Safety Basis Information System (SBIS) database that tracks some information on the safety basis of nuclear facilities, we determined that the information included in this database was not sufficient for our analysis. To obtain reliable data, we developed a Web-based survey instrument to administer to DOE officials who are responsible for overseeing nuclear safety at hazard category 1, 2, and 3 nuclear facilities. The survey instrument included two parts. First, program office officials at the site were asked to provide details on the safety basis status for each nuclear facility for which they had oversight responsibility. Second, these officials were asked to respond to questions regarding guidance provided to them on safety basis information and the line of authority for approving the safety bases and any modifications to them.
To identify the current list of DOE's hazard category 1, 2, and 3 nuclear facilities for survey administration, we reviewed lists of nuclear facilities from each of the program offices and the National Nuclear Security Administration (NNSA) and e-mailed site officials to verify that the lists of nuclear facilities were accurate. Prior to administering the survey, we pretested the content and format of the survey with program officials at four sites to determine whether (1) the survey questions were clear, (2) the terms used were precise, (3) respondents were able to provide the information we were seeking, and (4) the questions were unbiased. We made changes to the content and format of the survey based on pretest results. The survey was designed as a Web-based survey with a unique username and passcode for each survey respondent. The survey was sent to 34 program officials who were collectively responsible for what we identified as the total number (205) of high-hazard nuclear facilities across the DOE complex. The survey field period ran from mid-December 2007 to mid-February 2008, and the survey response rate was 100 percent.

To determine the factors contributing to any identified shortcomings with respect to the five elements of effective independent oversight of nuclear safety, we analyzed documentary and testimonial evidence on current HSS practices and those of the former Office of Environment, Safety and Health. In addition, we reviewed documents and interviewed officials from the Safety Board and the Nuclear Regulatory Commission (NRC) regarding past and current experiences in overseeing or planning to oversee DOE nuclear facilities. We also discussed with them their capability to accept an expanded role in overseeing DOE nuclear facilities.
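The two survey tallies used in this methodology — the response rate across the 34 surveyed officials, and the rule (described later in the survey appendix) that a site office is coded "yes" if at least one of its respondents answered yes — can be sketched as follows. The function names and data layout are our own illustration, not part of GAO's actual survey system.

```python
# Illustrative sketch (our construction, not GAO tooling) of the two survey
# tallies described in the methodology and survey appendix.

def response_rate(completed, sent):
    """Response rate as a percentage of surveys sent."""
    return 100.0 * completed / sent

def code_site_office(responses):
    """Code a site office 'yes' if at least one respondent answered yes."""
    return "yes" if any(r == "yes" for r in responses) else "no"

# All 34 surveyed officials responded, so the rate is 100 percent.
print(response_rate(34, 34))            # -> 100.0
# A hypothetical site office with mixed answers is still coded "yes".
print(code_site_office(["no", "yes"]))  # -> yes
```

The "any respondent" rule matters because, as the appendix notes, respondents from the same site office did not always give the same answer to a question.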
Furthermore, we asked for perspectives on DOE oversight of nuclear facilities from former DOE senior officials, academics, and representatives from organizations knowledgeable about nuclear safety and DOE operations, including the Health Physics Society, a nonprofit professional organization whose mission is to promote the practice of radiation safety; the Conference on Radiation Control Program Directors, a nonprofit organization of individuals who regulate and control the use of radioactive material and radiation sources; and the Government Accountability Project, a government watchdog organization. We also spoke with a representative from the Institute of Nuclear Power Operations about the functions of corporate safety offices in nuclear utility companies.

We conducted this performance audit from April 2007 through September 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

The following table presents the DOE nuclear safety directives, which include rules, guidance, and orders. We obtained this list from HSS, which cautioned that it is not all inclusive. The directives listed are those most closely related to and developed specifically for DOE nuclear safety. Other directives, such as those specifically related to worker radiation protection, public and environmental radiation protection, and DOE general management, are important but are not listed in the table. This list also does not include technical standards that DOE may recommend or require for complying with nuclear safety requirements. These and other DOE directives can be obtained from http://www.directives.doe.gov.
This appendix provides the aggregate results from our survey of DOE's high-hazard nuclear facilities. The Web-based survey comprised two parts. The first part asked questions about the safety basis for each of the high-hazard nuclear facilities. Thirty-four respondents were asked to provide responses to these questions concerning DOE's 205 high-hazard nuclear facilities. The second part asked questions about the general review process undertaken by the program offices. Because some questions were not answered by all respondents, the totals for each question do not necessarily add to the total number of survey respondents.

Welcome to the Survey on Safety Basis Information at Nuclear Facilities. At the request of the Congress, the U.S. Government Accountability Office (GAO) is examining the effectiveness of the Department of Energy's (DOE) Office of Health, Safety and Security (HSS) in its independent oversight of nuclear safety at DOE facilities. As part of this review, we have prepared two surveys for DOE officials who oversee nuclear safety at sites that contain these facilities. Questions on this survey include information on the safety basis status for hazard category 1, 2, or 3 nuclear facilities overseen by your site office.

Q2. What is the hazard category of [facility name]? Hazard category 1 / Hazard category 2 / Hazard category 3 / Below hazard category 3 / Other / Don't know

Q3. What is the operational status of [facility name]?

Q4. What is the current safety basis approval status of [facility name]? Safety basis is under development

Q5. If the safety basis is under development, does [facility name] have an approved preliminary safety basis under 10 CFR 830?

Q7. Since January 2007, were there any Potential Inadequacies in the Safety Analysis (PISAs) identified for [facility name]?

Q8. If yes, how many PISAs were identified?

Q9. How many of these PISAs resulted in a positive USQ?

Q10. Of the positive USQs that resulted from PISAs, how many resulted in Justifications for Continuing Operation (JCOs)? 28

Q11 and Q12. Of the positive USQs that resulted from PISAs, how many are:

Q11a. Number that are currently unresolved
Q12a. Number resolved through revisions to the safety basis
Q12b. Number resolved through amendments to the safety basis
Q12c. Number resolved through permanent exemptions
Q12d. Number resolved through temporary exemptions
Q12e. Number resolved through other actions
Q12g. Number resolved through JCO

Q13. Is [facility name] currently operating under a JCO?

Q14. If yes, how many JCOs are currently in place? 67

Q15a. Difference between JCO approval date and JCO expected end date (in months)

Q15b. Length of time JCO has been in place from end of survey field period (in months)

Q16. Does [facility name] currently have any approved exemptions under 10 CFR 830?

Q17a. If yes, how many of these exemptions are temporary exemptions?

For the general survey, more than one respondent from a site office responded to our survey. In some cases, not all respondents from the same site office necessarily provided the same response to the questions. As a result, if at least one site office respondent responded yes to a question, we coded the response from that site office as yes. Aggregate results from the 16 site offices are presented below.

At the request of the Congress, the U.S. Government Accountability Office (GAO) is examining the effectiveness of the Department of Energy's (DOE) Office of Health, Safety and Security (HSS) in its independent oversight of nuclear safety at DOE facilities. As part of this review, we have prepared two surveys for DOE officials who oversee nuclear safety at sites that contain these facilities. This survey includes a short set of general questions regarding safety basis guidance and approval authority.

Q1. Has your headquarters line management issued any guidance on safety basis requirements that is supplemental to the guidance issued by HSS?

Q2. Has your site office issued any guidance on safety basis requirements that is supplemental to the guidance issued by HSS?

Q3.
Does your site office have the authority to approve initial safety basis requirements at hazard category 2 and 3 facilities? Yes, for both category 2 and 3 facilities

Q4. Does your site office have the authority to approve changes to the safety basis (such as amendments, revisions, and JCOs) at hazard category 2 and 3 facilities?

Q5. Does your site office have the authority to downgrade facilities or activities to lower hazard categories?

Two prominent options for external regulation of DOE nuclear facilities have been put forward to improve the effective independent oversight of nuclear safety. Most DOE high-hazard nuclear facilities are already subject to external scrutiny by the Safety Board, and a few are currently externally regulated by NRC. One option would be to restructure and expand the role of the Safety Board. This option appears practical, but the Safety Board has not advocated for it. The second option is to shift all or some additional DOE nuclear facilities to external regulation by NRC. This option also appears practical, and NRC has found it acceptable in the past, provided it received the necessary authority and resources; however, the current commission has not expressed a view on expanding its oversight role beyond the DOE facilities already subject to NRC regulation. DOE and the Safety Board have taken issue with this option because of concerns about the transition costs versus the likely safety benefits of doing so.

Most DOE high-hazard nuclear facilities are already externally reviewed, but not regulated for nuclear safety, by the Safety Board, and a few are already externally regulated by NRC. The Safety Board was established in 1988 to provide independent safety oversight of DOE defense nuclear facilities.
The Safety Board was given responsibilities to (1) review and evaluate the content and implementation of the standards relating to the design, construction, operation, and decommissioning of defense nuclear facilities; (2) investigate any event or practice at these facilities that it determines has adversely affected or may adversely affect public health and safety; (3) analyze design and operational data, including safety analysis reports; (4) review new facility design and monitor construction, recommending any changes within a reasonable time period; and (5) make such recommendations to the Secretary of Energy, considering the technical and economic feasibility of implementing them. By statute, the Secretary must respond in writing to the Safety Board to accept or reject each recommendation and make this response public. If the Safety Board transmits a recommendation relating to an imminent or severe threat, the board shall also transmit it to the President and, for information, to the Secretary of Defense. The President shall review DOE's response and accept or reject the Safety Board's recommendation. The Safety Board does not have the authority of a regulator but rather uses both informal interactions and formal communications with DOE to implement its statutory "action forcing" authorities. The defense nuclear facilities overseen by the Safety Board constitute 74 of 76 high-hazard nuclear facilities within NNSA and 80 of 90 high-hazard nuclear facilities within the Office of Environmental Management. The Safety Board does not have a role in overseeing nondefense nuclear facilities, comprising 2 NNSA, 10 Office of Environmental Management, and 39 Office of Science and Office of Nuclear Energy high-hazard nuclear facilities. The 51 nondefense high-hazard nuclear facilities represent about 25 percent of the 205 such facilities across the DOE complex as of December 2007.
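The facility counts quoted above are internally consistent, as this quick arithmetic check (our own calculation, using only the figures cited in the text) shows:

```python
# Consistency check of the facility counts cited above; the figures come from
# the text, the arithmetic is ours.
nnsa_nondefense = 76 - 74        # 2 NNSA nondefense facilities
em_nondefense = 90 - 80          # 10 Environmental Management nondefense facilities
science_ne_nondefense = 39       # Office of Science and Office of Nuclear Energy
nondefense = nnsa_nondefense + em_nondefense + science_ne_nondefense
total = 205                      # all high-hazard facilities as of December 2007

print(nondefense)                   # -> 51
print(f"{nondefense / total:.0%}")  # -> 25%
```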
The Safety Board, its technical staff, and its site representatives informally interact with DOE officials at the sites and headquarters and with the contractors during this process. The 10 site representatives at five DOE sites provide day-to-day observations of nuclear operations at the sites and, among other responsibilities, record these observations in weekly reports to the Safety Board. The site representatives have no role in enforcing DOE's nuclear requirements, as this authority was never given to the Safety Board. Outside of informal interactions, the Safety Board uses its authority to issue letters and recommendations to DOE, impose reporting requirements on the department, publish technical reports, and hold public hearings on safety issues. The Safety Board noted in its 2007 annual report to the Congress that since 1989 it has issued 48 formal recommendations (comprising 221 individual subrecommendations) and 184 reporting requirement letters and has held 94 public hearings. The current number of recommendations is now 50. Starting around 1995, however, the number of Safety Board recommendations declined from a range of five to seven per year, dating from 1990, to a range of zero to three per year through 2007. In September 2006, the Congress urged the Safety Board to evaluate whether more frequent use of recommendation letters would speed up resolution of issues with DOE. The Congress was concerned about delays primarily resulting from the untimely resolution by DOE of technical issues raised by the Safety Board during the design of the waste treatment plant at the Hanford Site. The Safety Board subsequently responded that it could provide timely resolution of most health and safety concerns regarding the design and construction of new DOE nuclear facilities without the need to resort to formal recommendations.
While DOE has been responsive to the Safety Board’s recommendations, a number of past deficiencies remain unresolved, and the pace of closure for many other recommendations has been slow. According to the Safety Board, DOE has accepted all of its recommendations. However, some concerns raised by the Safety Board in its first annual report to the Congress, in February 1991, have not been fully resolved. These include shortcomings in nuclear safety analysis; lack of valid justifications for continued operations, possibly causing temporary or permanent curtailment of operations; and deficiencies in technical capabilities to effectively manage, direct, and guide nuclear operations. While this report pointed out the formidable problem of ensuring that DOE effectively applies its own rules at the time, the Safety Board noted the intentions of the Secretary of Energy to establish within DOE a new safety culture for nuclear activities. The pace of closure for many recommendations has also been slow. It has taken DOE up to 11 years to obtain closure from the Safety Board for some of its recommendations. Some systemwide recommendations, such as the one addressing safety management, have remained open for a decade or more. Of the 19 recommendations since 1995, 10 remain open, along with 1 more from previous years going back to 1992. DOE has sometimes struggled with the action-forcing nature of the recommendations from the Safety Board. Concerns about the authority of the Safety Board surfaced in a 1995 DOE Advisory Committee report, which found that the board was not subject to the same checks and balances as NRC is with respect to regulating NRC’s licensees. More recently, the chief of the technical staff to one of DOE’s Central Technical Authorities told us that in addressing seismic safety issues, the Safety Board has essentially tried to regulate from what he characterized as its advisory role. 
In May 2006, the Secretary of Energy sent a memorandum to the department heads to clarify the distinction between program office responsibilities and the role of the Safety Board. The Secretary wrote that DOE views the Safety Board as a "valuable asset" in meeting its obligation to ensure the highest standard of nuclear safety through its advice and observations but that the program offices have the authority and accountability for nuclear safety. This memorandum did not mention the role of the independent oversight office, now HSS. NRC is also involved in regulating some DOE nuclear facilities and has examined the possibility of regulating other facilities that had commercial application: In 1978, the Congress enacted the Uranium Mill Tailings Radiation Control Act, which established two programs to protect the public and the environment from uranium processing waste. This legislation required DOE's cleanup and remediation of these abandoned sites to be performed with the concurrence of NRC. NRC granted DOE's Idaho Operations Office a license in 1999 for the operation of an Independent Spent Fuel Storage Installation to store the spent fuel from Three Mile Island Unit 2 at the Idaho National Engineering and Environmental Laboratory. In 2003, NRC approved a license amendment to allow Nuclear Fuel Services, Inc., to possess and use Special Nuclear Material at its newly constructed uranyl nitrate building at its Tennessee complex. This facility and another one in Virginia, operated by another contractor, are not owned by DOE but work almost exclusively for DOE and the Department of Defense. These facilities are part of DOE's program to reduce stockpiles of surplus highly enriched uranium through reuse or disposal as radioactive waste. The contractor has agreed to implement enhanced security measures recommended by NRC.
The Congress gave NRC an important role in licensing the construction and overseeing the eventual operation of two new DOE nuclear facilities: the geologic repository for high-level waste at the Yucca Mountain Site in Nevada, for which DOE is the licensee, and the Mixed Oxide Fuel Fabrication Facility at DOE's Savannah River Site in South Carolina, for which the contractor would be the licensee if the application is approved. NRC has also been involved in reviewing the development of some DOE nuclear facilities that had potential commercial application. In the late 1970s, NRC got involved in reviewing DOE's Fast Flux Test Reactor at the Hanford Site, which was to test advanced nuclear fuels, materials, components, systems, nuclear power plant operating and maintenance procedures, and active and passive reactor safety technologies that could have commercial application. Later, NRC evaluated more advanced design concepts, conducted preliminary licensing reviews, and prepared safety evaluation reports. However, DOE decided to deactivate this reactor in 2001 without going to commercialization. Starting in 1997, NRC also worked with DOE on the planned Hanford Waste Treatment Plant, then known as the Tank Waste Remediation System-Privatization Program. NRC provided assistance to DOE for over 3.5 years under a Memorandum of Understanding. The memorandum gave NRC the opportunity to acquire an understanding of the wastes and potential treatment processes, and allowed DOE to see how NRC would perform reviews and develop an effective regulatory program for the potential transition to its regulatory oversight. In the course of its work with DOE, NRC staff reported that they gained an understanding of the waste and treatment issues and found that, for the most part, standard nuclear industry methods could be used for risk reduction.
However, NRC reported that it had identified over two dozen significant issues and over 50 specific topic areas in the design and approval approach DOE was considering that would require further efforts and analysis under the NRC approach. For example, NRC identified the influence that cost, schedule, and capacity were having on the review activities, as well as inconsistencies between the design and updates to the authorization basis in which DOE grants the contractor permission to perform certain operations. A senior DOE official who had been with a regulatory unit that was reviewing the design for the Waste Treatment Plant told us that this unit had also identified similar issues with the process. DOE eventually decided in May 2000 to abandon the privatization of this facility, citing, among other reasons, the high cost of privatization, and declared its intent to pursue a more conventional DOE self-regulatory approach without any schedule for transitioning to NRC regulatory oversight. Most recently, NRC issued a report on its review of DOE regulatory processes for this plant.

While restructuring and expanding the responsibilities of the Safety Board appears practical, the Safety Board has not advocated for this change in the past. The board could be given authority to oversee all DOE high-hazard nuclear facilities, approve the safety basis for designing and constructing any new facility, approve significant modifications to the safety basis of existing facilities, and enforce DOE nuclear safety requirements. The Safety Board already has on-site representatives at many DOE sites, and it is familiar with DOE's nuclear safety requirements and oversight approach. Its safety reviews of the design and construction of new nuclear facilities are extensive, and it is equally accustomed to considering the requirements of nuclear safety and national security, as well as the safety risks, mission priorities, and costs in its recommendations.
In addition, the Safety Board has experienced scientific and technical personnel and the power to hire more such personnel without having to go through the civil service system. Moreover, the Safety Board's legislation authorizes a staff of up to 150, but, according to the board, the Congress has limited the amount of authorized and appropriated funds such that the board has about 100 full-time employees, fewer than 60 of whom are technical staff.

The Safety Board, however, has not advocated for changing its authorities and responsibilities. For example, in a July 2007 report to the Congress, the Safety Board and DOE concluded that rigorous adherence to the existing responsibilities and powers set forth in present law would foster the early identification and resolution of safety issues without the need for legislative changes. Their report pointed out that during the past 2 years, the Safety Board and DOE had established several new expectations and requirements and were committed to continuous improvement of DOE's project management directives. More recently, the Safety Board told us that it currently lacks the resources to take on more responsibilities, particularly for enforcement activities. In commenting on a draft of this report, the Safety Board stated that even if it were directed to conduct a full suite of compliance activities comparable to NRC's licensing activities, significantly more resources than the sum of its current staff plus HSS enforcement staff would be required. With regard to increasing site representation, we were informed that if the current DOE facility representatives were transferred to the Safety Board as independent inspectors, this would take away resources that the program offices would need to replenish to continue their current level of contractor oversight.
The Safety Board also raised concerns in its fiscal year 2008 budget request about its own ability to recruit qualified engineers, in part because a renewed interest in commercial nuclear power has created competition for these specialists. Nevertheless, officials stated that the Safety Board would of course accept more responsibilities for regulating DOE nuclear facilities, as long as it has adequate funding, staffing, and legislative authority. However, in responding to a draft of this report, the Safety Board stated that the basic structure and authorities of the existing safety oversight organizations, including the board, provide a satisfactory framework for this function at those facilities under its jurisdiction. NRC’s experiences regulating and examining how it would regulate many DOE nuclear facilities indicate that shifting DOE nuclear facilities to its regulatory oversight appears practical, even though the costs and benefits have been questioned. As previously stated, NRC is currently involved in regulating a number of DOE nuclear facilities in construction or operation, as well as many uranium mill sites. NRC has also evaluated its capabilities and the potential costs of regulating additional DOE nuclear facilities. Beginning in October 1997, NRC tested regulatory concepts through simulated regulation of three DOE sites with nuclear facilities by evaluating each pilot facility against the standards that NRC believed would be appropriate for this type of facility. In a July 1999 report, NRC found that most of the technical, policy, and regulatory issues involving NRC oversight of these sites could be handled adequately within the existing NRC regulatory structure. 
In February 2003, the conference report accompanying the Consolidated Appropriations Resolution, 2003 directed that NRC carry out compliance audits of 10 DOE Office of Science sites in order for DOE to develop estimates of the costs necessary to bring the sites into compliance with NRC safety standards should the Congress direct NRC to assume regulatory responsibilities over these sites. In an April 2004 report, NRC again concluded that activities involving radiation-producing materials and machines at these DOE sites could be effectively regulated within the existing NRC regulatory structure. While NRC has not advocated for taking on regulation of DOE nuclear sites, it has identified some benefits in doing so. For example, in its 1999 report on external regulation of DOE nuclear facilities, NRC stated that its regulation would eliminate the inherent conflicts of interest arising in DOE self-regulation, lead to a safety culture comparable to that in the commercial industry, and allow the department to focus on its primary missions. However, in this report, NRC also stated that it would need adequate funding, staffing, and legislative authorization, as well as the opportunity to update its regulations as necessary.

Other prominent stakeholder organizations have recently come forward with recommendations that the Congress consider shifting DOE to external regulation by NRC. These groups include the Health Physics Society, a nonprofit professional organization representing about 6,000 members whose mission is to promote the practice of radiation safety; the Conference on Radiation Control Program Directors, a nonprofit organization of individuals who regulate and control the use of radioactive material and radiation sources; the Government Accountability Project, a government watchdog organization; and the American Federation of Labor and Congress of Industrial Organizations.
For example, the Health Physics Society informed us in an August 21, 2007, correspondence that self-regulation of nuclear safety by DOE is contrary to the fundamental principle that a single, independent agency should have the authority to establish and enforce national standards for radiation safety. Moreover, the letter pointed out that reliance on national security concerns to justify continued self-regulation by DOE may no longer be compelling in light of the increased security environment under which NRC now operates. The Conference on Radiation Control Program Directors also provided us with a Board of Directors Resolution, dated August 7, 2007, recommending that the Atomic Energy Act be amended to provide for the regulation of DOE by NRC for materials authorized under the Act.

The principal concerns with shifting DOE to external regulation of nuclear safety by NRC have been the transition costs versus the potential safety benefits that would emanate from eliminating self-regulation. DOE and NRC have differed on the cost and potential benefits of shifting to external regulation. DOE expressed concerns that transition costs would exceed any value in shifting to external regulation because of facility-specific issues, potential uncertainties and implications of NRC regulatory requirements, and the regulatory difficulty of licensing a single facility on a large and complex nuclear site. For example, DOE estimated the transition cost for NRC regulation of the Receiving Basin for Offsite Fuels Facility at the Savannah River Site to be between $6 million and $13.5 million, with annual costs thereafter estimated at $1.5 million to $3.2 million (in 1999 dollars). However, NRC countered that because few changes to DOE facilities or procedures would be needed under NRC regulation, the transition costs would be far less than estimated by DOE.
NRC noted that DOE costs could be minimized and that the change might provide a net savings if DOE reduced the level of its oversight to one commensurate with a corporate oversight model. Nevertheless, NRC would have to increase its staffing levels to regulate DOE nuclear facilities, although by an uncertain number. A DOE working group on external regulation estimated in 1996 that NRC would need 1,000 to 1,600 new employees at a cost of $15 million to $200 million. The Safety Board has sided with DOE in questioning the costs and benefits of external regulation by NRC, having early on raised national security concerns about external regulation. The National Defense Authorization Act for Fiscal Year 1998 required the Safety Board to make recommendations to the Congress on what role it should take in the event that the Congress should consider legislation for externally regulating DOE defense nuclear facilities. In its November 1998 report, the Safety Board rejected a shift to external regulation of DOE defense nuclear facilities for several reasons, including the potential adverse effects on national security and the likelihood that costs would outweigh any benefits that might accrue. Based on its review of factors that would attend external regulation of these nuclear facilities, the Safety Board stated that it does not believe that additional external regulation of them is in the best interest of our nation. The board further stated that the Congress made the right decision in setting it up as an independent advisory agency, not a regulator, and that the contributions of the Safety Board since its inception attest to the efficiency of its structure. More recently, HSS officials told us that NRC's regulatory structure and approach may not fit DOE's operational model because of important differences from the commercial nuclear industry, such as having one-of-a-kind facilities.
HSS contends that it has coordinated with and evaluated DOE's initiative to strengthen program office oversight and that integrating these procedures into the fabric of the department's way of doing business offers a viable alternative model to external regulation by an agency that is not familiar with the intricacies of the unique operations found at DOE facilities. In addition, HSS points out that external regulation is not a panacea and that oversight failures occur even under it, such as NRC's experience with the Davis-Besse nuclear power plant. HSS also points out the steady improvement in measurable safety areas across the DOE complex and contends that an objective assessment of DOE's safety performance contradicts the assertion that the department's safety is lax or that it has pervasive problems and needs to be externally regulated.

The following are GAO's comments on the Department of Energy's letter dated September 10, 2008. Our response to DOE's letter is on pages 45 to 49. The following responses are to the detailed comments provided by DOE that were attached to the letter.

1. DOE is incorrect in stating that we did not recognize the primary role of the program offices in nuclear safety. We addressed DOE's self-regulation approach on page 2 of the report and also on pages 13 to 14, as well as through a general discussion of responsibilities on page 36. For example, we provided a figure on page 16, obtained from DOE, of the roles, responsibilities, and authorities within DOE for nuclear safety. We clearly stated our research questions, criteria for evaluation, and the focus on nuclear safety on page 6. In addition, since we did not review the effectiveness of the program offices' nuclear safety oversight programs, there is no basis for DOE to claim that we found this oversight to be ineffective or that we contend that all oversight must be performed by HSS.
Moreover, DOE is incorrect in stating that we did not address the functions of the Central Technical Authority. We discussed these functions on page 39. While an evaluation of the role of the Central Technical Authority was not the subject of this review, we added more detail about it on pages 15 and 39. 2. We disagree with DOE’s comment that we discounted DOE and HSS perspectives that the former site representative program under a predecessor office did not work very well and resulted in giving conflicting directions to DOE contractors, which degraded the principle of line management responsibilities. We considered these perspectives, which we discussed on pages 37 to 38. We still believe that HSS needs to increase its site presence, but we did not prescribe how this should be accomplished. For example, HSS might increase the frequency of its site inspections or establish a minimal presence at sites with the most high-hazard nuclear facilities. We provided additional detail on page 37 regarding the role of the site representatives and DOE’s statement that site representatives from the independent oversight office were providing conflicting directions to the contractors. 3. We agree with DOE that the lack of HSS involvement in approving the safety basis is intentional, but we continue to believe that this is a valid example of a shortcoming in HSS’s functioning as an effective independent oversight office with respect to nuclear safety. DOE further stated that our conclusion is based on an incorrect premise that the program offices cannot perform an adequate review of the safety basis documentation. In addition, DOE stated that the unique nature of the facilities requires that the program office officials at the sites perform the reviews, not headquarters. First, our assessment of HSS’s current mission is based on GAO’s elements of effective independent oversight, along with supplemental criteria from our past work and HSS guidance. 
Second, we did not state that the program offices could not adequately review the safety basis documentation on high-hazard nuclear facilities. Third, we disagree that the site offices are the only ones that know enough about the facilities to conduct a safety basis review. For example, DOE acknowledged in its comments that technical staff for the Central Technical Authorities at headquarters, as well as HSS, also get involved in safety basis reviews. According to DOE, the headquarters-based technical staff for the three Central Technical Authorities provide nuclear safety oversight and advice to DOE sites and these authorities. They maintain awareness of complex high-hazard nuclear operations at the sites, including safety basis implementation, nuclear facility startup, and personnel training and qualifications, among other things. In addition, DOE stated that HSS performs periodic site inspections that include nuclear safety basis elements, such as engineering design, configuration management, and safety basis. 4. DOE is incorrect in stating that we found the program office oversight to be ineffective and that all oversight should be performed by HSS. Our point is that HSS—as the only independent oversight office—needs to also participate in the safety basis review process as an important component of DOE's self-regulation approach. 5. We disagree with DOE that potential conflicts of interest between mission objectives and safety will always exist in DOE and other industries that deal with hazardous materials. Our focus in this review was nuclear safety oversight and, as we stated on page 1, virtually all other federal nuclear facilities and all commercial, industrial, academic, and medical users of nuclear materials are regulated by NRC.
Because these other entities are regulated by NRC, we also disagree with DOE that its system of checks and balances—with HSS providing an independent check of the program offices and the contractors—is similar to these other industries. The shortcomings we found in HSS as an effective independent overseer of nuclear safety indicate to us that this system of checks and balances is not in proper balance as it relates to nuclear safety. 6. We disagree with DOE that we misrepresented its position in forming HSS and that we inferred that these actions reduced the effectiveness of HSS's oversight and enforcement functions. The statement about the mission of HSS came directly from a 2006 DOE report that set forth the rationale for establishing this office. According to this report, HSS was established as a corporate safety office similar to corporate safety offices in the commercial nuclear utility industry. However, unlike DOE, corporate safety offices of nuclear utilities operate under NRC regulation. In addition, DOE stated that it made these changes to strengthen HSS's independent oversight and enforcement responsibilities by, for example, removing some management responsibilities. This may have been one objective in forming HSS, but we found that reducing some nuclear safety responsibilities and technical resources in HSS that once resided in its predecessor offices contributed to our findings that it does not fully meet our elements of effective independent oversight of nuclear safety. 7. We disagree with DOE that we do not understand its governance model for nuclear safety; as discussed above, we have described this approach in our report. We agree with DOE that we did not attempt to evaluate the effectiveness of DOE's governance model and instead evaluated HSS against our elements for effective independent oversight of nuclear safety to develop our findings, conclusions, and recommendations. 8.
We disagree with DOE that our evaluation methods were too narrow in scope to provide a valid assessment of HSS's performance with respect to oversight, enforcement, and technical expertise. Our evaluation methods were appropriate to assess HSS against our elements of effective independent oversight of nuclear safety. An assessment of HSS against these elements and their criteria did not require us to review the quality of the appraisal reports, enforcement actions, or technical staff. Instead, our assessment of HSS's ability to perform reviews and have its findings addressed relied on criteria concerning the independence of the information available for these reviews, the frequency of the reviews, and the opportunities to independently determine the effectiveness of the actions taken to correct the identified deficiencies. In regard to enforcement, we evaluated the level of recurring violations rather than the quality of the paperwork used to document enforcement actions. Finally, in terms of technical expertise, our criteria required a review of the sufficiency of the staff rather than their technical qualifications. We found shortcomings in each of these areas, which led to our conclusions and recommendations. 9. We disagree with DOE's statement that we selectively cited an NRC report only to support our findings of HSS shortcomings. We quoted directly from the NRC report, and in several places, we discussed similarities between DOE's and NRC's approach. For example, we discussed how DOE's enforcement program is similar to NRC's program on page 43. However, we have added on page 41 of this report, and in our Conclusions on page 44, that NRC stated that it believes the DOE program, if properly implemented, is adequate to ensure protection of public health and safety.
Nevertheless, we also pointed out in our report on page 41 that NRC suggested that DOE evaluate how to improve implementation of its requirements and the transparency of its decisions, and also explore ways to gain and maintain more independence between its regulatory oversight and project management functions. 10. We disagree with DOE that we mischaracterized information contained in the Safety Board’s Recommendation 2004-1; we quoted directly from the Safety Board’s recommendation. However, we revised the report on page 1 to add the Safety Board’s statement that DOE has a long and successful history of nuclear safety during which DOE developed a structure and requirements to achieve safety. Nevertheless, we noted on page 2 that our 2007 report found a record of recurring accidents and violations of the nuclear safety requirements at three DOE weapons laboratories. DOE also stated that we did not mention that its implementation plan to create the Central Technical Authority to fulfill one aspect of Recommendation 2004-1 was accepted by the Safety Board. We added this text to the report on page 39. 11. We disagree with DOE that NRC’s recent report, which concluded that DOE needs to increase the independence between its regulatory oversight and project management functions, only relates to the program offices and has no bearing on HSS. As our report states on page 41, NRC found that DOE focuses its oversight program on owner responsibilities rather than on nuclear safety requirements and suggested that DOE explore ways to increase independence between regulatory oversight and project management functions. We believe that it is reasonable to conclude from NRC’s report that DOE should consider opportunities to strengthen independent oversight both within the program offices and HSS. 12. 
We disagree with DOE that our identified shortcomings with the structure and functions of HSS are not supportable because we looked at HSS in a vacuum rather than in the context of DOE's governance model. We evaluated HSS against our elements of effective independent oversight of nuclear safety, supplemented with recommendations from past GAO reports and HSS guidance. In our opinion, this is the role that HSS needs to play in DOE's self-regulation approach. 13. We disagree with DOE's claim that our independence criteria are not essential components for an independent oversight office. We added on page 21 that while HSS is structurally distinct from the program offices, there are other components of independence, identified in past GAO reports, that this office should possess and that are essential for HSS to function in this independent role with respect to nuclear safety. DOE also stated that HSS is similar to the Occupational Safety and Health Administration and the Environmental Protection Agency as independent oversight agencies without a site presence. However, nuclear safety has always been a special case for intense oversight. The NRC and the Safety Board are very involved in reviewing the safety basis for nuclear facilities, and these two organizations rely heavily on having a site presence at high-hazard nuclear facilities. DOE also said that we did not present any safety performance criteria. While this was not the subject of our review, we did note on page 2 that our 2007 report found a record of recurring accidents and violations of the nuclear safety requirements at three DOE weapons laboratories. 14. We question DOE's justification for shifting the 20 nuclear safety review positions to the program offices from the former Office of Environment, Safety and Health to support oversight by the Central Technical Authority.
For example, DOE stated that it placed these technical experts in the authority to help the program offices review and approve their nuclear facility safety basis, in part because of the challenge of getting some sites to upgrade the safety basis of these nuclear facilities. DOE fails to acknowledge that it has increased the potential for conflict of interest in the review and approval of the safety basis for nuclear facilities by removing any semblance of remaining independent input to this process that once resided in an HSS predecessor office. 15. We agree with DOE that our assessment of its staffing situation in the Results in Brief section of our report did not provide a complete and accurate picture because, for example, it omitted the use of contractors. We have added this information to our Results in Brief section and also changed the number of current vacancies from three to two in the Office of Enforcement. We did address the use of contractors and other federal resources in the body of the report. 16. We disagree with DOE that the head of HSS has the same rank as a Senate-confirmed head of the program offices, even though they both may have direct access to the Secretary of Energy at this time. At the suggestion of DOE, we have added to the text on page 23 that DOE officials have emphasized that the head of HSS has excellent access to the Secretary of Energy and other DOE decision makers and that the authorities of this position are at least equivalent to, and sometimes greater than, those of the head of HSS's predecessor offices. Importantly, we note that while the current head of HSS contends that he has access to the Secretary of Energy, there is no guarantee that a future head of HSS will enjoy the same level of access. 17. We clarified in our report on page 23 that our recommendation that the head of the independent oversight office be a Senate-confirmed individual at the same rank as the program office heads was not acted upon. 18.
We disagree with DOE that the sites that were not visited by HSS in the last 5 years did not warrant a visit because they no longer have nuclear facilities. The sites with high-hazard nuclear facilities, by DOE's definition, can pose serious consequences from an accident, and all sites that we included in our analysis had nuclear facilities operating within the last 5 years. DOE is also incorrect in stating that we chose not to include a 2007 site investigation of Los Alamos National Laboratory and a 2004 review of the Office of River Protection. We did not include the site investigation of Los Alamos National Laboratory because it was issued outside of the time frame of our analysis. Finally, we noted that the Office of River Protection was included in a lessons learned report but that it was not subject to a separate environment, safety, and health site inspection, and thus is not reflected in table 1 on page 29 of this report. We added the 2007 accident investigation to the report on page 28, but not in table 1. 19. DOE is incorrect in stating that we did not provide a complete and accurate picture of HSS's role in corrective actions. We stated on pages 19 and 30 that the program offices are responsible for preparing corrective action plans and that HSS has a role in reviewing these plans. While HSS inspection protocols indicate that most sites with high-hazard nuclear facilities should receive a site inspection every 2 to 4 years, we found that HSS had not inspected 8 of the 22 sites that had these nuclear facilities in the last 5 years. We also provided information on page 40 regarding additional reasons HSS provided for not inspecting some sites on schedule. 20. DOE is incorrect in stating that we assumed that the scheduled oversight inspections are the only mechanism for reviewing corrective actions and that HSS should routinely review these corrective action plans.
DOE is also incorrect in stating that we did not mention HSS's option to perform reinspections or more frequent inspections if warranted and that we did not mention the frequency of other reviews. First, on pages 19 and 30, we addressed HSS's involvement in reviewing the corrective action plans formulated by the program offices. Second, on page 30, we discussed the option to conduct follow-up reviews and found that they were done only five times since 1995. Third, on page 30, we accurately recorded how often HSS returns to sites for subsequent inspections. For example, we found that sites with two and seven high-hazard nuclear facilities, excluding those that no longer have such facilities, were only inspected on average once every 6 years. Finally, we did mention the other site reviews by the program offices, contractors, and now the Central Technical Authority on page 40. 21. We disagree with DOE that HSS is not the organization responsible for maintaining information on the status of nuclear facilities, that upgrading the safety basis of nuclear facilities is not and should not be a primary concern of HSS, and that HSS only needs to be concerned with whether the safety basis accurately reflects facility conditions and whether appropriate controls have been implemented. First, HSS is responsible for maintaining the Safety Basis Information System (SBIS), which includes information on the safety basis status of high-hazard nuclear facilities, and thus should be more accountable for the reliability of the information in this database because, according to DOE, the database is intended to allow the public to track upgrades of the facility safety basis. Second, we believe that HSS is the most appropriate office to hold the program offices accountable for upgrading the safety bases of their nuclear facilities to meet current requirements because, as our report noted, the program offices have been slow to accomplish this task.
Third, as our report states, we believe that HSS needs greater responsibilities in the up-front review of the safety basis of new nuclear facilities, as well as major modifications of existing facilities, because such an independent review reduces potential conflicts of interest inherent in reviews conducted by the program offices. 22. We disagree with DOE that we drew invalid conclusions from the SBIS database regarding the information available to HSS or the state of HSS knowledge. We do not dispute that the SBIS database is not used by HSS or the program offices; however, more effort needs to be made to ensure that the information in this database is updated because it is supposed to be available to the public to check progress made in upgrading facility safety bases. More importantly, this is the only database that attempts to provide information on the number and status of high-hazard nuclear facilities: information that we found was not fully known by the program offices at headquarters, as well as HSS. It seems reasonable to us that HSS should independently assess the accuracy of the information in the database and use it to monitor the safety basis status of nuclear facilities, particularly the use of JCOs. 23. DOE is incorrect in stating that we did not discuss the time frame for the involvement of HSS's predecessor offices in the review of the safety basis. We clearly do this on page 36. An evaluation of why the safety basis approval process that existed in HSS's predecessor offices may have been ineffective was not the subject of our review. 24. We disagree with DOE that our conclusions about HSS's knowledge of the status of the nuclear safety bases are not valid because they are based on an inadequate assessment of HSS's roles and responsibilities. We based our assessment on the structure and functions of HSS with respect to our elements of effective independent oversight of nuclear safety. We addressed HSS's review process on pages 21 and 25.
Starting on page 39 and continuing through page 41, we discuss the factors contributing to the three shortcomings that we believe affect HSS's ability to perform reviews and have its findings addressed. 25. We disagree with DOE that we incorrectly used terminology and, thus, presented a misleading, inaccurate, and inflammatory perspective. DOE said that while it agreed that some facilities do not have an updated safety basis, we characterized this situation as noncompliant, inadequate, or not proper. That claim is incorrect. We clearly stated on page 26 that 31 nuclear facilities do not have safety bases that meet current requirements. We obtained this information directly from site office officials whom we surveyed and who are the most knowledgeable about current conditions. DOE also stated that 10 CFR 830 envisioned a transition period to upgrade the facility safety bases. However, DOE did not mention that this transition period ended 5 years ago. We added language on page 40, as DOE suggested, that some DOE sites have yet to upgrade their safety basis to new standards and that some sites have a limited lifetime because they are scheduled for decommissioning; therefore, upgrading the facility safety basis for these sites may be an unwarranted expenditure of resources that would provide little additional safety. 26. DOE agrees that we are justified in pointing out that some nuclear facilities do not have approved safety bases. However, DOE suggested that we failed to mention the interim measures that are being taken by the Office of Nuclear Energy at the Idaho National Laboratory to ensure adequate safety while additional upgrades are made. We have added on page 26 that 2 of the 14 facilities now have approved, upgraded safety bases, and that the Office of Nuclear Energy has put in place JCOs, as well as additional oversight, to address weaknesses in the previous safety bases for the other facilities until they can be upgraded. 27.
DOE generally agreed with our analysis of the JCO issue. However, DOE provided additional information on other actions it has taken since the end of our audit time frame, namely determining that further guidance regarding the content and approval of JCOs is warranted. We added this text to the report on page 28. 28. We disagree with DOE that HSS should not have a role in monitoring JCO use outside of periodic site inspections because, as our report notes, there have been inappropriate and excessive uses of JCOs that went undetected, in part because there was no central monitoring of their use. 29. DOE is incorrect in stating that we implied that it does not monitor changes to the safety bases of high-hazard nuclear facilities. We only stated on page 27 that HSS does not routinely review changes in the safety bases, such as use of JCOs. However, we did add on page 28 that HSS reviews the use of JCOs during its periodic site inspections. 30. We disagree with DOE that the problems identified by the Safety Board were primarily due to insufficient guidance that existed prior to issuance of DOE Guide 424.1 in July 2006 and that this situation has been corrected with new guidance. Our survey found additional use of JCOs 16 months after the issuance of this guidance. While our survey found that the average duration of the JCOs was less than that found by the Safety Board in its sample of defense nuclear facilities, we noted on page 27 that the expected duration of these JCOs was almost twice what the Safety Board reported. DOE incorrectly claimed that we said the Safety Board attributed the prevalent use of JCOs to the structure of DOE oversight. 31. We disagree with DOE that we mischaracterized the role of HSS as secondary to the program offices in addressing nuclear safety violations. We took this characterization directly from information provided to us by HSS. In addition, DOE incorrectly claimed that we said HSS should take over program office responsibilities.
DOE also suggested that we implied that HSS has made some conscious decisions not to act to prevent recurring nuclear safety violations. On the contrary, we stated that HSS has made this a key issue to address with increasing enforcement actions. We only indicated that these actions alone have not impeded the recurrence of 9 of the top 25 violations of the nuclear safety requirements. 32. We disagree with DOE that our use of data in its Noncompliance Tracking System, from which we drew conclusions, is too narrow and meaningless. DOE also stated that we should be cautious in drawing conclusions from this database. This is the only database that DOE has to track violations, despite the limitation DOE mentions for using it in our analysis. We determined that this database was sufficiently reliable for the purposes of our report. Moreover, an HSS official in the Office of Enforcement told us that this database was the main source of information used by this office, even though other databases are also reviewed, and that this office conducts program reviews to ensure that the contractors are entering data correctly. Another Office of Enforcement official told us that this database, the program reviews, and an occurrence reporting database are used to assess recurring and long-standing problems, but that this assessment is informal and that, with the current staffing level, there are limited resources to conduct the program reviews. In addition, as a check on the reliability of the data, this office also relies on enforcement coordinators at the sites, but this official told us that they work for the program offices and thus have some conflict of interest. In regard to recurring violations, we looked at these violations over a 3-year period across all sites, thereby ruling out outliers that DOE has offered as reasons for ups and downs in the number of reported violations. We also noted on page 32 that entries into this system have averaged around 220 per year since 1999.
This suggests to us that our findings would not change if we added more years of violations to our analysis. Finally, we disagree with DOE that our conclusions are simply not supportable. DOE provided no evidence to show that what we found is inaccurate and also agreed with our recommendation that enforcement actions need strengthening. However, we added language on page 31, as DOE suggested, to explain that the category of violations for "performing work consistent with technical standards" is broad in scope and includes all instances of procedural violations and inadequate procedures. Nonetheless, our report notes that these violations meet DOE's reporting thresholds for safety significance and reflect on the safety culture at the sites. 33. To eliminate any confusion between the recurring violations at the Hanford Tank Farm and those at the Waste Treatment Plant, we modified the text on page 35 to clarify this distinction. 34. DOE stated that the 1-year time frame to take action for some recommendations may not be reasonable for a variety of reasons. The intent of this 1-year deadline was to encourage DOE to take quick action on what we believe is a critical issue: independent oversight at DOE nuclear facilities. While we do not believe that DOE has convincingly argued that our recommendations are necessarily expensive, redundant, and counterproductive, we agree that careful planning is necessary. We have therefore modified the recommendation to remove the 1-year deadline to address DOE's concerns. However, we note that 31 U.S.C.
720 requires the head of a federal agency to submit a written statement of the actions taken on our recommendations to the Senate Committee on Homeland Security and Governmental Affairs and to the House Committee on Oversight and Government Reform not later than 60 days from the date of our report and to the House and Senate Committees on Appropriations with the agency's first request for appropriations made more than 60 days after the date of this report. In this written statement, we believe DOE should take the opportunity to detail not only the actions, if any, it intends to take but also the time required to take these actions in as economical and efficient a way as possible. DOE's statement should also specify what recommendations, or parts of recommendations, the department does not intend to implement and the reasons why. This information could serve as a basis for any additional congressional action, if appropriate, as envisioned by our Matters for Congressional Consideration. 35. We stand by our recommendation that HSS needs to be involved in the review of the safety basis for new nuclear facilities and significant modifications of existing facilities that may raise new safety concerns. We believe that this is a fundamental responsibility of an independent oversight office with respect to nuclear safety. 36. DOE generally accepted our recommendation on the need to increase its involvement in monitoring the safety basis status of nuclear facilities. 37. DOE is incorrect in stating that our recommendation to maintain a site presence for HSS includes an implicit recommendation to eliminate the existing oversight programs of the program offices. We also did not prescribe how HSS would maintain a site presence. However, we have modified this recommendation to replace "maintain" with "increase" a site presence in order to give DOE more flexibility in deciding how to obtain more routine awareness of site operations. 38.
DOE agreed with our recommendation to strengthen the enforcement program but did not agree with the need for measurable goals. We modified our recommendation to exclude the requirement for measurable goals for enforcement because it now appears to us that it would be difficult to attribute any decline in recurring violations solely to the enforcement actions by HSS, since other factors, such as actions taken by the program offices, could also contribute to such a change. 39. DOE agreed with our recommendation that public access to HSS reports is desirable, as long as security requirements are met. 40. We revised the text on page 4 to more accurately reflect the DOE review of the external regulation option starting in the mid-1990s. 41. We revised the text on page 8 to replace the term "policy" with "internal guidelines." 42. We revised the text on page 8 to state "do not fully conform to DOE guidelines." 43. We made the suggested changes in figure 4 on page 16 to place "authorization agreements" within site office responsibilities. 44. We revised the text on pages 41 to 42 to state that the program offices can and do use contractual mechanisms to penalize contractors for poor nuclear safety performance, as well as to encourage improved performance. These mechanisms include assessment reports directing that a problem be corrected, show-cause letters, stop-work directions, conditional payment of fee actions, and contract termination. For example, HSS officials informed us that since 2005, the Office of Environmental Management has exercised conditional payment of fee action 10 times over concerns about contractor safety performance. 45. We changed the text to clarify that the Chemistry and Metallurgy Research facility is operating under the safety basis established in 1998, although according to DOE this facility has been subject to almost continuous safety review by both the contractor and the department. 46.
We are not making this recommended change because we believe that the cognizant program office official at the site has the most accurate information on the facility. 47. We changed the year to 2012 on page 26. 48. We added a note about Brookhaven to table 1 and table 3 on pages 29 and 33, respectively. 49. We added a note about New Brunswick to table 3 on page 33. 50. We changed the data for 2007 to adjust the line in figure 5 on page 34. 51. We revised the text on page 34. 52. We revised the text to include “notice of investigation” on page 35. 53. We revised the text regarding HSS plans to help the program offices identify causes of recurring violations on page 42. 54. We revised the text to add “other nuclear safety guidance” to table 4 on page 54, and changed the number from 26 to 29 rules and directives on page 14. 55. We cannot change the language of the survey instrument because we have already conducted the survey of DOE’s high-hazard nuclear facilities. 56. We revised the text to replace 1998 with 1999 on page 67. In addition to the individuals named above, Daniel Feehan (Assistant Director), Jeffrey Barron, Thomas Laetz, Omari Norman, Lesley Rinner, Benjamin Shouse, and Elizabeth Wood made key contributions to this report.
The Department of Energy (DOE) oversees contractors that operate more than 200 "high-hazard" nuclear facilities, where an accident could have serious consequences for workers and the public. DOE is charged with regulating the safety of these facilities. A key part of DOE's self-regulation is the Office of Health, Safety and Security (HSS), which develops, oversees, and helps enforce nuclear safety policies. This is the only DOE safety office intended to be independent of the program offices, which carry out mission responsibilities. This report examines (1) the extent to which HSS meets GAO's elements of effective independent nuclear safety oversight and (2) the factors contributing to any identified shortcomings with respect to these elements. GAO reviewed relevant DOE policies, interviewed officials and outside safety experts, and surveyed DOE sites to determine the number and status of nuclear facilities. GAO also assessed oversight practices against the criteria for independent oversight GAO developed based on a series of reports on DOE nuclear safety and discussions with nuclear safety experts. HSS falls short of fully meeting GAO's elements of effective independent oversight of nuclear safety: independence, technical expertise, ability to perform reviews and have findings effectively addressed, enforcement, and public access to facility information. For example, HSS's ability to function independently is limited because it has no role in reviewing the "safety basis"--a technical analysis that helps ensure safe design and operation of these facilities--for new high-hazard nuclear facilities and because it has no personnel at DOE sites to provide independent safety observations. In addition, although HSS conducts periodic site inspections and identifies deficiencies that must be addressed, there are gaps in its inspection schedule and it lacks useful information on the status of the safety basis of all nuclear facilities. 
For example, HSS was not aware that 31 of the 205 facilities did not have a safety basis that meets requirements established in 2001. Finally, while HSS uses its authority to enforce nuclear safety requirements, its actions have not reduced the occurrence of over one-third of the most commonly reported violations in the last 3 years, although this is a priority for HSS. These shortcomings are largely attributable to DOE's decision that some responsibilities and resources of HSS and prior oversight offices more appropriately reside in the program offices. For example, DOE decided in 1999 to eliminate independent oversight personnel at its sites because they were deemed redundant and less effective than oversight by the program offices. DOE also decided in forming HSS in 2006 that its involvement in reviewing facility safety basis documents was not necessary because this is done by the program offices and adequately assessed by HSS during periodic site inspections. Moreover, DOE views HSS's role as secondary to the program offices in addressing recurring nuclear safety violations. Nearly all these shortcomings are in part caused by DOE's desire to strengthen oversight by the program offices, with HSS providing assistance to them in accomplishing their responsibilities. In the absence of external regulation, DOE needs HSS to be more involved in nuclear safety oversight because a key objective of independent oversight is to avoid the potential conflicts of interest that are inherent in program office oversight.
Bankruptcy is a federal court procedure designed to help both individuals and businesses eliminate debts they cannot fully repay as well as help creditors receive some payment in an equitable manner. Individuals usually file for bankruptcy under one of two chapters of the Bankruptcy Code. Under Chapter 7, the filer’s eligible nonexempt assets are reduced to cash and distributed to creditors in accordance with distribution priorities and procedures set out in the Bankruptcy Code. Under Chapter 13, filers submit a repayment plan to the court agreeing to pay part or all of their debts over time, usually 3 to 5 years. Upon the successful completion of either a Chapter 7 or a Chapter 13 case, the filer’s personal liability for eligible debts is discharged, which means that creditors may take no further action against the individual to collect any unpaid portion of the debt. Most debtors who file for bankruptcy use an attorney, but some debtors represent themselves without the aid of an attorney and are referred to as pro se debtors. The bankruptcy system is complex and involves entities in both the judicial and executive branches of government (see fig. 1). Within the judicial branch, 90 federal bankruptcy courts have jurisdiction over bankruptcy cases. The Administrative Office of the United States Courts (AOUSC) serves as the central support entity for federal courts, including bankruptcy courts, providing a wide range of administrative, legal, financial, management, and information technology functions. The Director of AOUSC is supervised by the Judicial Conference of the United States, the judiciary’s principal policy-making body. Within the executive branch, the Trustee Program, a component of the Department of Justice, is responsible for overseeing the administration of most bankruptcy cases. The program consists of the Executive Office for U.S.
Trustees, which provides general policy and legal guidance, oversees operations, and handles administrative functions, as well as 95 field offices and 21 U.S. Trustees—federal officials charged with supervising the administration of federal bankruptcy cases. The Trustee Program appoints and supervises approximately 1,400 private trustees, who are not government employees, to administer bankruptcy estates and distribute payments to creditors. The Bankruptcy Abuse Prevention and Consumer Protection Act of 2005 was signed into law on April 20, 2005, and most of its provisions became effective on October 17, 2005. The following are among the most significant changes the act made with respect to consumer bankruptcies: Means test. The act established a new means test to determine whether a debtor is eligible to file under Chapter 7. If a debtor’s current monthly income minus allowable living expenses exceeds certain thresholds, a Chapter 7 petition is presumed to be abusive and the debtor may have to file under Chapter 11 or under Chapter 13 (which requires repayment of at least a portion of outstanding debt over a period of several years under a court-approved plan) or receive no bankruptcy relief at all. Credit counseling and debtor education. The act created certain counseling and education requirements for filers. To be a “debtor” (that is, eligible to file for bankruptcy), an individual, except in limited circumstances, must receive credit counseling from a provider approved by the Trustee Program (or the bankruptcy administrator, if applicable). In addition, prior to discharge of debts, debtors must complete a personal financial management instructional course—typically referred to as debtor education—from an approved provider. Debtor audits. The act required that procedures be established for independent audit firms to audit bankruptcy petitions, schedules, and other information in consumer bankruptcy cases filed on or after October 20, 2006. 
The act specified that the procedures should include random audits of at least one out of every 250 bankruptcy cases in each judicial district, as well as additional audits of cases with incomes or expenditures above certain statistical norms. New reporting and data collection requirements. The act required that the judiciary collect certain new aggregate statistics and report on them annually beginning no later than July 1, 2008. The act also required that the Attorney General—who delegated the authority to the Trustee Program—draft rules requiring private trustees to submit uniform final reports on individual bankruptcy cases that include certain specified information about the case. The Bankruptcy Reform Act was enacted, in part, to address certain factors viewed as contributing to an escalation in bankruptcy filings. As shown in figure 2, consumer bankruptcy filings in the United States more than doubled between 1990 and 2004, with an average of more than 1.5 million people filing annually between 2001 and 2004. In the months leading up to the effective date of the act (October 17, 2005), bankruptcy filings rose dramatically because many consumers believed it would be more difficult to receive bankruptcy protection once the act went into effect. Immediately after the act went into effect, filings fell substantially. Although filings have been rising since that time, they are still well below historic levels, with about 823,000 Chapter 7 and Chapter 13 consumer bankruptcies reported in calendar year 2007. The Trustee Program estimated its costs related to carrying out responsibilities resulting from the Bankruptcy Reform Act to be approximately $72.4 million in fiscal years 2005-2007, mostly in personnel costs, to implement the means test and credit counseling and debtor education requirements, conduct debtor audits, comply with reporting requirements, establish information technology systems, and expand facilities. 
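Several of the act’s new requirements are mechanical rules rather than case-by-case judgments. As one illustration, the Chapter 7 means-test screen described above compares a debtor’s current monthly income, less allowable living expenses, against a threshold. The sketch below is a simplified illustration only: the threshold value is a hypothetical placeholder, not the statutory figure, and the actual test involves additional steps such as a comparison with the state median income.

```python
# Simplified sketch of the Chapter 7 means-test screen: current monthly
# income minus allowable living expenses is compared against a threshold.
# The threshold below is a HYPOTHETICAL placeholder, not the statutory
# amount, and the real test includes further steps (e.g., state median
# income comparison and secured/priority debt adjustments).

def chapter7_presumed_abusive(monthly_income, allowable_expenses,
                              threshold=182.50):
    """Return True if the petition would be presumed abusive under this
    simplified screen (disposable income exceeds the threshold)."""
    disposable_income = monthly_income - allowable_expenses
    return disposable_income > threshold

# A filer with $100/month of disposable income passes the screen;
# one with $500/month triggers the presumption of abuse.
print(chapter7_presumed_abusive(3500, 3400))  # False
print(chapter7_presumed_abusive(3500, 3000))  # True
```

A petition flagged by this screen is not automatically dismissed; as the report notes, the debtor may instead file under Chapter 11 or Chapter 13, or rebut the presumption.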
The federal judiciary could not isolate costs specifically resulting from the Bankruptcy Reform Act since the act had a broad effect on nearly all bankruptcy court staff and operations, but did estimate that $48.4 million was incurred in one-time costs associated with start-up activities to implement the act’s requirements. The largest of these expenses related to necessary revisions of the Bankruptcy Rules, official forms, and court operating procedures. The cost estimates for the Trustee Program and the judiciary do not incorporate the effect of the decline in bankruptcy filings since the act, which presumably has helped reduce their overall costs to some extent. However, this decline in filings also has resulted in some reduction in fee revenues for the Trustee Program and the judiciary. Based on estimates developed at our request, the Trustee Program allocated approximately $72.4 million in fiscal years 2005 through 2007 to carry out responsibilities resulting from the Bankruptcy Reform Act. The majority of these costs represented staff time dedicated to new tasks required by the act. In some cases, the Trustee Program hired new staff— including 156 bankruptcy analysts, attorneys, paralegals, and other administrative and information technology personnel hired as of October 1, 2007—to fulfill new responsibilities. In other cases, the program reallocated the time and responsibilities of existing staff to meet the requirements of the act. While the scope of this report is largely limited to describing costs incurred through fiscal year 2007, many or most of those costs are for ongoing tasks that will continue in fiscal year 2008 and beyond. These cost estimates are approximate for two major reasons. First, the Bankruptcy Reform Act had a broad impact on the agency’s overall operations, and thus it is difficult to isolate staff time devoted specifically to elements of the act. 
Second, although the cost of overseeing each bankruptcy filing may have increased, to some extent this has been offset by the significant decline in the number of bankruptcy filings following the act, and the net effect on overall costs is difficult to measure. As shown in table 1, the Trustee Program’s most significant costs resulting from the Bankruptcy Reform Act for fiscal years 2005 through 2007 were related to the means test ($42.5 million), credit counseling and debtor education requirements ($6.1 million), debtor audits ($3.0 million), studies and reporting requirements ($5.6 million), information technology ($13.7 million), and facilities expansion ($1.5 million). Means test. As of October 1, 2007, the Trustee Program had hired 127 new staff for duties related to the means test, including attorneys who litigate cases and paralegals, bankruptcy analysts, and legal clerks who review the bankruptcy petition, supporting forms, and financial materials filed by every individual debtor in a Chapter 7 case to identify whether the case is “presumed abusive.” This involves an initial review of each debtor’s income, a more thorough review of debtors with income exceeding the state median, and any related litigation. The program estimated it allocated $15.76 million in fiscal year 2006 and $26.7 million in fiscal year 2007 to implementing the means test. Credit counseling and debtor education. The Trustee Program established a separate unit responsible for developing application forms and procedures, approving and monitoring approved credit counseling and debtor education agencies, and taking steps to help ensure that filers were meeting the new requirements. The program initially used detailees from field offices to staff this unit until permanent staff could be hired. The program estimated its costs related to credit counseling and debtor education to be approximately $6.1 million for fiscal years 2005 through 2007. Debtor audits. 
The Trustee Program had to develop procedures for the audits described in the act. The program contracted with and supervised six third-party auditors, who completed nearly 4,000 debtor audits during fiscal year 2007. The program obligated $2.6 million in fiscal year 2007 for audit contracts. The Trustee Program estimated that staff time allocated to developing audit procedures and overseeing contractors cost $160,000 in fiscal year 2006 and $280,000 in fiscal year 2007. Studies and reporting requirements. The Trustee Program estimated the costs of the act’s various studies and reporting requirements—which include reports on the results of debtor audits and a study of the effectiveness of debtor education—to have been approximately $263,363 in fiscal year 2005, $3.15 million in fiscal year 2006, and $2.21 million in fiscal year 2007. Information technology. The Trustee Program created several new data systems—including the Means Test Review Management System, Credit Counseling/Debtor Education Tracking System, and Debtor Audit Management System—and modified or updated several others. According to Trustee Program officials, these efforts cost $1.9 million in fiscal year 2005, $7.2 million in fiscal year 2006, and $4.6 million in fiscal year 2007. Facilities expansion. To accommodate the additional staff hired as a result of the act, the Trustee Program expanded numerous offices. The expansion involved one-time build-out costs, for which the Trustee Program spent $1.42 million in fiscal year 2006 and $69,863 in fiscal year 2007. The Bankruptcy Reform Act had a significant effect on the operations of AOUSC and the bankruptcy courts. However, unlike the Trustee Program, where the act resulted in several discrete new functions and tasks, the impact on the judiciary has been more diffuse.
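As a quick arithmetic check, the Trustee Program cost components listed in table 1 account for the program’s full $72.4 million estimate. All dollar figures below are taken directly from the report; the dictionary is simply a restatement of table 1.

```python
# Trustee Program cost components from table 1, fiscal years 2005-2007,
# in millions of dollars (figures as reported).
components = {
    "means test": 42.5,
    "credit counseling and debtor education": 6.1,
    "debtor audits": 3.0,
    "studies and reporting requirements": 5.6,
    "information technology": 13.7,
    "facilities expansion": 1.5,
}

# Round to one decimal place to absorb floating-point noise.
total = round(sum(components.values()), 1)
print(total)  # 72.4, matching the program's overall estimate
```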
In congressional testimony, a representative of the Judicial Conference noted that the act created new docketing, noticing, and hearing requirements that make addressing bankruptcy cases more complex and time-consuming. In its fiscal year 2008 congressional budget justification, the judiciary estimated that as a result of the Bankruptcy Reform Act, it takes at least 10 percent more time to process a bankruptcy case. New or expanded tasks relate to additional petition documents, an increased number of motions and hearings, and new procedures associated with such things as rent deposits, tax return filings, and petitions to waive filing fees. Because of the broad impact the Bankruptcy Reform Act has had on bankruptcy court staff and operations—affecting nearly all aspects of court operations and staff responsibilities and tasks—AOUSC could not readily differentiate costs resulting from the act (“new costs”) from those costs incurred in everyday operations. Therefore, it did not provide us with estimates of the costs associated with any additional staff time needed to process a case resulting from the act. Further, as noted earlier, it is difficult to determine the extent to which new costs related to the act may be offset by overall cost savings associated with the decline in bankruptcy filings following the act. However, at our request, AOUSC did estimate that as of December 2007, $48.4 million was incurred for specific start-up activities to implement the act, which included $47.2 million in staff time and $1.2 million for travel, equipment, and contractors. As shown in table 2, these costs were incurred for the following functions: Revision of rules, forms, and procedures. The judiciary estimated that it spent approximately $32.5 million revising the Bankruptcy Rules, official forms, and court operating procedures to reflect provisions of the Bankruptcy Reform Act. 
About 98 percent of this amount was attributed to staff time and the remainder to travel and other expenses related to changes in the courts’ case management system. Training and communication to courts. The judiciary estimated that it spent about $7.3 million to disseminate information on changes made by the act—through training and other means—to judges, clerks, bankruptcy administrators, and other personnel. The judiciary used broadcasts over the Federal Judicial Television Network, conference calls, national workshops and conferences, and the Internet to conduct training and make the information available. About 98 percent of the costs related to training and communication were for staffing. Bankruptcy administrator responsibilities. As noted earlier, in the six judicial districts in North Carolina and Alabama, the bankruptcy administrator program, rather than the Trustee Program, oversees the administration of bankruptcy cases. AOUSC estimated that the bankruptcy administrators’ offices incurred $3.6 million in expenses for activities similar to those described above for the Trustee Program. Statistical and reporting responsibilities. The judiciary spent about $2.8 million—88 percent for staffing costs—on statistical and reporting responsibilities, which required revisions to the courts’ electronic filing, docketing, and case management system. To prepare its annual statistical reports, the judiciary modified its electronic database and statistical infrastructure, reprogrammed software to accept new data elements, and prepared additional tables to conform to the statistical reporting required by the act. The judiciary also prepared several reports required by the act, including a report to Congress outlining the courts’ procedures for safeguarding the confidentiality of filers’ tax information. Other items.
The judiciary spent an estimated $2 million on other activities related to the implementation of the act, of which about 98 percent was for staffing costs. These activities included revisions to studies to determine staffing needs and the revision and updating of publications and manuals for external parties. Revenues to the Trustee Program and federal judiciary from bankruptcy filing fees and other fees have declined since the implementation of the Bankruptcy Reform Act due to the reduction in the number of bankruptcy filings. Since 1997, the Trustee Program has been entirely self-funded from a portion of the filing fees paid by bankruptcy debtors, which are deposited in the U.S. Trustee System Fund. As shown in figure 3, the Trustee Program’s filing fee revenues (excluding Chapter 11 quarterly fees) have declined since the Bankruptcy Reform Act—from $68 million and $74 million in fiscal years 2004 and 2005, respectively, to $58 million and $52 million in fiscal years 2006 and 2007. The Bankruptcy Reform Act and subsequent budget legislation increased bankruptcy filing fees, as discussed later in this report. In addition, the Bankruptcy Reform Act changed the portion of the filing fee allocated to various parties. The net effect was that the amount received by the Trustee Program for each Chapter 7 filing increased from $42.50 to $89 while the amount received by the program for each Chapter 13 filing remained unchanged at $42.50. However, the decline in the number of consumer bankruptcy filings since the implementation of the act offset the increase in revenue per Chapter 7 case. As we discussed previously, the number of filings in 2006 and 2007 was less than half the annual number of filings in the years just prior to the act. To a more limited extent, Trustee Program revenues also have been affected by a provision of the act that allows the court to waive the Chapter 7 filing fee for debtors below certain income thresholds.
Chapter 7 filing fees were waived for 2.1 percent of cases in fiscal year 2007, according to data provided by AOUSC. The Trustee Program may expend the funds in the U.S. Trustee System Fund as appropriated by Congress. In its annual budget request to Congress, the Trustee Program provides an estimate of its filing fee revenues, based on the anticipated number of bankruptcy filings. In years when the actual amount of fee revenues deposited in the U.S. Trustee System Fund is greater than the amount appropriated for that year, the excess fee revenue remains in the fund and is available until expended. Conversely, in years when the actual amount of fee revenues falls short of the amount appropriated for that year, the program may draw down monies from the fund. In fiscal years 2006 and 2007, the program drew down about $44 million and $92 million, respectively, from the U.S. Trustee System Fund, with congressional approval, to allow the program to operate at appropriated levels. In its 2009 budget request, the Trustee Program stated it expected bankruptcy filings to increase in the coming years and estimated its fee revenues would rise to approximately $70 million and $83 million for fiscal years 2008 and 2009, respectively. Funding for the federal judiciary comes from appropriations that are funded from filing and other fees, as well as “carry forward” balances from prior years. The judiciary receives revenues from a portion of the fee charged for filing a bankruptcy petition, as well as from certain administrative fees and fees charged for filing certain motions. The portion of the statutory filing fee received by the judiciary for each Chapter 7 bankruptcy petition increased from $52.50 to $63.51 and the portion received for each Chapter 13 petition remained unchanged at $52.50. In addition, the “miscellaneous administrative fee” paid to the courts by debtors in all bankruptcy cases remained at $39.
However, as with the Trustee Program, the decline in the number of bankruptcy filings (and to a lesser extent the provision allowing fee waivers in a limited number of cases) resulted in a reduction in the judiciary’s overall bankruptcy fee revenues. As shown in figure 4, the judiciary’s bankruptcy-related fee revenues declined from $221 million and $237 million in fiscal years 2004 and 2005, respectively, to $168 million and $135 million in fiscal years 2006 and 2007. According to an AOUSC official, the reduction in bankruptcy fee revenues is offset by increases in appropriated funds. AOUSC officials have estimated that fee revenues will be $158 million in fiscal year 2008 and $172 million in fiscal year 2009. Based on our sample of bankruptcy files, we estimate that the average attorney fee for a Chapter 7 case has increased roughly 50 percent since the Bankruptcy Reform Act. The proportion of Chapter 7 debtors filing without attorney representation (pro se) appears to have declined, but we did not find a change in the proportion of Chapter 7 debtors receiving free legal assistance. For Chapter 13 cases, our analysis found the standard attorney fees that individual courts approve rose in nearly all the districts and divisions with such fees that we reviewed. Due to changes made by the Bankruptcy Reform Act and the Deficit Reduction Act of 2005, bankruptcy filing fees have risen by $90 and $80 for Chapter 7 and Chapter 13 filers, respectively. Fees related to the new credit counseling and debtor education requirements typically total about $100. Most debtors hire an attorney when seeking bankruptcy relief, and bankruptcy attorneys typically charge a fixed fee to handle a consumer bankruptcy case. 
Anecdotal evidence from a variety of stakeholders— including organizations representing bankruptcy attorneys, private trustees, and consumers—indicated that legal fees associated with seeking consumer bankruptcy relief have risen significantly since the effective date of the Bankruptcy Reform Act. According to bankruptcy attorneys and other parties involved in the process, significantly more legal work is required to meet the requirements of the new law. For example, satisfying the new means test for a bankruptcy filing requires completing a lengthy form that includes various calculations of the debtor’s income and expenses. Attorneys also must collect additional documents from the debtor—such as pay stubs and tax returns—to satisfy new documentation requirements, and ensure compliance with new provisions related to credit counseling and domestic support obligations. Bankruptcy cases since the act typically have involved a greater number of motions and hearings, according to AOUSC officials, which further can increase the time an attorney spends on a case. Finally, new provisions in the act require attorneys to attest to the accuracy of information in bankruptcy petitions. Some parties have said that concerns about increased liability may have affected legal costs, but others have said this has not been a significant factor. To estimate how legal fees for Chapter 7 consumer bankruptcy cases may have changed since the implementation of the Bankruptcy Reform Act, we reviewed disclosures of legal fees contained in a nationwide random sample of 468 Chapter 7 consumer bankruptcy filings. Our sample included 176 cases filed in February and March 2005—prior to the act’s enactment—and 292 cases filed in February and March 2007—more than 15 months after the act went into effect. The fee disclosure form that we reviewed does not necessarily constitute a full or final accounting of compensation actually paid, but rather states the amount the attorney agreed to accept. 
However, bankruptcy attorneys, private trustees, and representatives of AOUSC and the National Association of Consumer Bankruptcy Attorneys with whom we spoke told us that the fee amount in these disclosures typically represents the actual amount paid by the debtor. As shown in figure 5, on the basis of our sample we estimate that the average attorney fee in Chapter 7 consumer bankruptcy cases was $712 in February–March 2005 and $1,078 in February–March 2007. The average fee therefore increased by $366—or 51 percent—during this 2-year period. (These averages include only cases in which the debtor paid an attorney; they exclude those cases in which the debtor filed without an attorney or received legal assistance at no charge. We discuss pro se and pro bono cases later in this report.) Within each time period, the attorney fees showed considerable variability, but the increase in fees was evident across all fee ranges. For cases filed in February–March 2005, the fee was less than $750 in 59 percent of cases, from $750 to $999 in 27 percent of cases, and $1,000 or more in 14 percent of cases. For cases filed in February–March 2007, the fee was less than $750 in 20 percent of cases, from $750 to $999 in 28 percent of cases, and $1,000 or more in 52 percent of cases. Further, the fee exceeded $1,499 in 18 percent of cases in the 2007 time frame, as compared with 3 percent of cases in the 2005 time frame. Figure 6 illustrates the estimated frequency of these attorney fees. To determine the impact of the Bankruptcy Reform Act on legal fees paid for Chapter 13 bankruptcy cases, we collected and analyzed information on how standard attorney fees have changed since the effective date of the act. These fees—which often are also referred to as either “presumptively reasonable” or “no-look” fees—are fee amounts that individual courts have predetermined as reasonable compensation to an attorney representing a Chapter 13 debtor. 
An attorney who seeks to collect a fee up to that predetermined amount does not need to apply for court approval of the fee. Such fees are used widely throughout the country for Chapter 13 cases and can be uniform across an entire judicial district or can vary by division or individual judge. According to many of the participants with whom we spoke—including attorneys, private trustees, and court personnel—in locations with an established fee, that amount represents the actual fee attorneys charge Chapter 13 bankruptcy filers in the majority of cases. We collected information on the standard fees in place before and after the Bankruptcy Reform Act in 48 districts or divisions that collectively accounted for 65 percent of Chapter 13 filings in fiscal year 2007. For each of these districts or divisions, we gathered data on the amount of the standard fee, if any, as of (1) October 2005, just prior to the effective date of the Bankruptcy Reform Act; and (2) February 2008, which was more than 2 years after the act had been in effect. Of the 48 districts or divisions we reviewed, 42 had court-set standard fees as of October 2005 and 41 had them as of February 2008. Our analysis found that the Chapter 13 standard fee had increased in nearly all the districts and divisions with such fees. In more than half of those districts and divisions, the increase was 55 percent or more. As shown in figure 7, just prior to implementation of the act, standard fees ranged from $1,500 to $3,000 (with a median of $2,000). As of February 2008, the standard fees ranged from $1,800 to $4,000 (with a median of $3,000). (See app. II for the full list of standard fees in these selected districts and divisions.) Several of the local rules and administrative orders that raised the standard fees specifically cited the Bankruptcy Reform Act as the reason for the change. 
For example, one order noted that the act’s amendments “have had a material effect on the amount of time attorneys must devote to the representation of a Chapter 13 debtor” and that “many tasks which formerly might have been delegated to [nonattorney professionals, such as a paralegal] must now be handled personally by an attorney.” Similarly, several of the Chapter 13 trustees with whom we spoke told us that the standard fees were increased as a direct result of the act, which had increased the average amount of time an attorney spent on each case. Although legal fees associated with seeking consumer bankruptcy relief have risen since the Bankruptcy Reform Act went into effect, in some cases creditors rather than debtors bear the true financial costs of the fee increase. For example, in many Chapter 13 cases, debtors enter a repayment plan in which only part of their total debt is paid to creditors and the rest is discharged. Approved claims for Chapter 13 attorneys’ fees are paid out of the debtor’s estate as an administrative claim—which are to be paid before most unsecured claims. As a result, in a Chapter 13 bankruptcy case with a partial repayment plan, it may be the unsecured creditors rather than the debtor who absorb the cost of higher attorney fees. According to data from AOUSC, 6.3 percent of Chapter 13 cases and 5.9 percent of Chapter 7 cases were filed pro se (without an attorney) in calendar year 2007, which was the first year that the agency collected complete data on pro se filings. The proportion of bankruptcy cases filed pro se varied substantially across judicial districts. For example, fewer than 2 percent of Chapter 7 cases were filed pro se in 25 districts, while more than 10 percent were filed pro se in another 16 districts. 
Some bankruptcy attorneys, consumer advocates, and bankruptcy court staff told us that based on anecdotal evidence, they believed that the overall proportion of bankruptcy petitioners filing pro se had increased since the Bankruptcy Reform Act, in large part because increases in legal fees made hiring an attorney less affordable. However, data from our sample of Chapter 7 consumer case files and from AOUSC suggest that the proportion of Chapter 7 bankruptcy cases filed pro se may actually have declined since the act. We estimate that 11 percent of Chapter 7 consumer cases were filed pro se in February–March 2005, compared with the 5.9 percent of Chapter 7 cases that AOUSC reported were filed pro se in calendar year 2007. Debtors who file for bankruptcy without an attorney sometimes use the services of a nonattorney “bankruptcy petition preparer” to assist them in filing the petition. Of the 19 cases filed pro se in our sample of Chapter 7 filings in February–March 2005, 15 were prepared by a nonattorney petition preparer; fee information was available for 9 of those cases and the average fee was $179. Of the nine cases filed pro se in our sample of Chapter 7 filings in February–March 2007, seven were prepared by a nonattorney petition preparer and the average fee was $302. (Because of the small sample size, these figures cannot be projected beyond the sample to all Chapter 7 petition preparer fees.) Various local legal services providers throughout the country employ staff attorneys who assist clients or match clients with private attorneys who volunteer their time to provide legal services at a discount or at no cost (pro bono). We spoke with providers at five agencies that provide legal services to bankruptcy filers, as well as a representative of the American Bar Association’s Center for Pro Bono, about the effect the Bankruptcy Reform Act has had on the availability of pro bono services.
In general, they said that fewer attorneys have been willing to volunteer their services to assist bankruptcy filers since the act went into effect, largely due to the increased time and responsibilities required to handle a bankruptcy case. As a result, clients must sometimes wait longer for a referral and one agency noted it had reduced the number of clients for whom it provided pro bono assistance. We did not find a statistically significant difference in the proportion of Chapter 7 bankruptcy filers receiving free legal services since implementation of the Bankruptcy Reform Act. We estimate that 2.8 percent of filers received free legal services in February–March 2005, compared with 4.5 percent of cases filed in February–March 2007. (Additional filers may have received legal services at a discounted fee.) These findings do not necessarily contradict the anecdotal evidence that fewer attorneys may be offering pro bono bankruptcy services, because the decline in the number of bankruptcy filings since the act may diminish the effect of the reduced supply of such services. As shown in tables 3 and 4, as a result of changes made in the Bankruptcy Reform Act and the subsequent Deficit Reduction Act of 2005, the total fees paid at the time of filing a bankruptcy petition under Chapter 7 rose from $209 to $299—an increase of $90. The total fees paid for cases under Chapter 13 rose from $194 to $274—an increase of $80. The total fees paid to file for bankruptcy protection include both statutory fees and “miscellaneous” fees, which are set by the Judicial Conference of the United States pursuant to statutory authority. The Bankruptcy Reform Act, as amended, increased the statutory filing fee from $155 to $220 for Chapter 7 cases and decreased the statutory filing fee from $155 to $150 for Chapter 13 cases. 
Subsequently, the Deficit Reduction Act, which was signed into law on February 8, 2006, raised these statutory filing fees from $220 to $245 for Chapter 7 cases and from $150 to $235 for Chapter 13 cases. The “miscellaneous administrative fee” of $39 paid by all filers and the “miscellaneous fee for Chapter 7 trustees” of $15 paid by filers in a Chapter 7 case were not affected by either piece of legislation. However, the Bankruptcy Reform Act also contains a provision that allows the bankruptcy court to waive the filing fee in a Chapter 7 filing if the court determines that (1) the filer has an income of less than 150 percent of the official poverty line (as defined in the Bankruptcy Code) and (2) the filer is unable to pay the fee in installments. Prior to the Bankruptcy Reform Act, bankruptcy courts had no authority to waive filing fees. Courts waived Chapter 7 filing fees in 2.1 percent of cases filed during fiscal year 2007, according to data provided by AOUSC. As noted earlier, the Bankruptcy Reform Act required that individuals receive credit counseling before filing for bankruptcy and take a debtor education course before having debts discharged. Information from a variety of sources indicates that most providers charge around $50 each, or slightly less, for the required credit counseling and debtor education sessions—a total of about $100 to fulfill both requirements. During the summer of 2007, the Trustee Program’s Credit Counseling and Debtor Education Unit collected and analyzed fee information from agencies approved to provide prefiling credit counseling and predischarge debtor education. The unit’s review found that the median fee for credit counseling was $50 for an individual and $50 for a couple among the 156 approved credit counseling providers that charged a fee and for whom data were available. An additional three credit counseling providers charged no fee.
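The fee totals and the waiver test described above reduce to simple arithmetic. The following is a minimal sketch using only the dollar amounts and the two-prong test stated in this report; the poverty-line figure and the installment-payment determination are left as inputs because those judgments are made by the court, and the function names are illustrative only:

```python
# Fee components paid at the time of filing (dollar amounts from this report).
ADMIN_FEE = 39        # miscellaneous administrative fee, paid by all filers
CH7_TRUSTEE_FEE = 15  # miscellaneous fee for Chapter 7 trustees

def total_filing_fee(statutory_fee: int, chapter: int) -> int:
    """Total paid at filing: statutory fee plus applicable miscellaneous fees."""
    return statutory_fee + ADMIN_FEE + (CH7_TRUSTEE_FEE if chapter == 7 else 0)

def chapter7_fee_waiver_possible(income: float, poverty_line: float,
                                 can_pay_in_installments: bool) -> bool:
    """Two-prong waiver test: income under 150 percent of the official
    poverty line AND inability to pay the fee in installments."""
    return income < 1.5 * poverty_line and not can_pay_in_installments

# Totals before the Bankruptcy Reform Act (statutory fee of $155 for both
# chapters) and after the Deficit Reduction Act ($245 / $235):
assert total_filing_fee(155, 7) == 209 and total_filing_fee(155, 13) == 194
assert total_filing_fee(245, 7) == 299 and total_filing_fee(235, 13) == 274
```

The assertions reproduce the $209-to-$299 and $194-to-$274 totals reported above; a filer meeting only one prong of the waiver test remains ineligible under this sketch.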
For debtor education, the reports indicated that the median fee was $50 for an individual and $55 for a couple for the 81 approved debtor education providers that charged a fee and for whom data were available. An additional 20 debtor education providers charged no fee. The National Foundation for Credit Counseling, which periodically collects fee data from its members, reported similar findings. The average prefiling credit counseling fee charged by the 68 member agencies that provided data to the National Foundation for Credit Counseling was $46.05 during the period from July 1 to September 30, 2007. Further, in our April 2007 report on credit counseling and debtor education, we reported that each of the three largest providers of prefiling credit counseling—which together had issued about half of all certificates as of October 2006—charged exactly $50 for an individual credit counseling or debtor education session. In a few cases, we identified smaller counseling and education providers with higher fees, such as $75 per session. The Bankruptcy Reform Act requires that any fee charged by an approved provider of credit counseling or debtor education be reasonable. However, the act did not specify criteria for determining whether a fee amount is “reasonable.” On February 1, 2008, the Trustee Program published its proposed procedures and criteria for approving credit counseling agencies. The proposed rule provides that a fee of $50 or less for credit counseling services would be presumed to be reasonable, and that an agency seeking to be an approved provider must obtain prior approval from the Trustee Program in order to charge a fee of more than $50. Trustee Program officials told us that a separate proposed rulemaking covering debtor education agencies was forthcoming.
The Bankruptcy Reform Act also required that credit counseling and debtor education providers offer their services without regard to the client’s ability to pay. Based on the periodic activity reports submitted by providers to the Trustee Program in 2006 and 2007, approximately 11 percent and 13 percent of clients had their fees waived for credit counseling and debtor education, respectively, and an additional 28 percent and 19 percent of clients received a partial reduction of the fee. Similarly, the National Foundation for Credit Counseling provided us with data showing that among member agencies surveyed, the fee for prefiling credit counseling was waived about 18 percent of the time between July 1, 2007, and September 30, 2007. Our April 2007 report noted that the policies of individual providers for waiving fees varied. Trustee Program data on the three largest providers showed significant variations in the proportions of clients whose fees were waived—from 4 percent to 26 percent for counseling sessions and from 6 percent to 34 percent for debtor education courses. As a result, our report recommended that the Trustee Program issue formal guidance on what constitutes a client’s “ability to pay.” In its proposed rule of February 1, 2008, the Trustee Program stated that the client shall be deemed unable to pay, and thereby entitled to a fee waiver, if the client’s household income is less than 150 percent of the poverty line as defined by the Office of Management and Budget. The Bankruptcy Reform Act has affected the responsibilities of Chapter 7 and Chapter 13 private trustees, largely as a result of new documentation, verification, and reporting requirements. The trustees with whom we spoke said the act significantly increased the amount of staff time needed to administer a bankruptcy case. The caseloads of many Chapter 7 and Chapter 13 trustees have declined since the act in concert with the decline in bankruptcy filings.
However, as yet, the overall compensation to trustees collectively has not declined significantly because disbursements and repayments are still being made from the surge in bankruptcy filings that occurred just prior to the effective date of the act. Further, according to data provided by the Trustee Program, attrition among trustees has not changed significantly since the implementation of the act. The Bankruptcy Reform Act has affected the responsibilities of Chapter 7 and Chapter 13 private trustees, largely as a result of new documentation, verification, and reporting requirements. As noted earlier, private trustees—individuals who are not government employees and are overseen in most districts by the Trustee Program—administer individual Chapter 7 and Chapter 13 bankruptcy cases. Chapter 7 trustees identify the debtor’s available assets, liquidate them (turn them into cash), and distribute the proceeds to creditors. Chapter 13 trustees administer cases according to a court-approved plan for the repayment of debt, collecting payments from the debtor and making distributions to creditors. One of the key responsibilities for both Chapter 7 and Chapter 13 trustees is to preside over the meeting of creditors (commonly known as the “341 meeting”), in which the debtor must appear and answer questions under oath from the trustee and creditors. In addition, trustees collect, review, and verify the information in the bankruptcy petition and the supporting documentation that lists the debtor’s assets, liabilities, income, and expenditures. This ensures that exemptions are accurately claimed and that assets that can be liquidated are distributed to creditors. The provisions of the Bankruptcy Reform Act with the most significant impact on the duties of the private trustees for personal bankruptcy cases are the following: New documentation requirements. 
Trustees must confirm that debtors have submitted documentation required under the act, which includes 2 months of wage statements and the tax return from the year prior to filing. The trustees must safeguard all tax return documents according to procedures set by the Trustee Program—for example, access to tax records must be restricted and sensitive documents must be properly secured, destroyed, or returned to the debtor. Domestic support obligations. In cases where a debtor has a domestic support obligation—alimony or child support—private trustees must notify the claimant (such as the custodial parent) and the relevant state child support enforcement agency of the bankruptcy. The trustee must notify applicable parties twice during the bankruptcy process—once around the time of the meeting of the creditors and once at the time of discharge. Means test. Chapter 7 trustees must review the means test form submitted by debtors and verify the calculation of current monthly income. In those cases where the income is below the state median—and therefore not presumed abusive—the trustees are to verify that the income is truly below the median by examining wage statements and tax documents. Chapter 13 trustees use the means test form—in conjunction with other documents, such as tax returns—to determine what the debtor can afford to pay each month in a repayment plan. Uniform final reports. Once the Trustee Program issues a final rule, private trustees will be required to submit a uniform final report of each bankruptcy case. For Chapter 7 trustees, the proposed reporting forms add responsibilities because they require data not currently collected for no-asset cases, which trustees must enter manually. Chapter 13 trustees already submit final reports, although the proposed new forms require some additional information, such as assets abandoned, that trustees must collect.
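The median-income screen that trustees verify can be illustrated with a short sketch. This encodes only the first step of the means test; the full form involves allowed expense deductions that are not modeled here, and the figures in the example are hypothetical:

```python
def above_median_screen(current_monthly_income: float,
                        state_median_annual_income: float) -> bool:
    """Annualize the debtor's current monthly income and compare it with
    the state median for the debtor's household size. Debtors at or below
    the median are not presumed abusive; debtors above it proceed to the
    full means test calculation (not modeled here)."""
    return current_monthly_income * 12 > state_median_annual_income

# Hypothetical example: $3,200 per month against a $45,000 state median.
# $38,400 annualized is below the median, so no presumption of abuse arises.
assert above_median_screen(3200, 45000) is False
assert above_median_screen(4000, 45000) is True  # $48,000 exceeds the median
```

For below-median filers, this comparison is the figure trustees check against wage statements and tax documents, as described above.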
The Bankruptcy Reform Act has affected the time and resources required by trustees to administer bankruptcy cases, according to private trustees and representatives of the Trustee Program. We spoke with a total of 18 Chapter 7 and Chapter 13 trustees, as well as organizations representing them, about how the act has affected their work. While the experiences of individual trustees varied, all said that the act increased the amount of staff time it took to administer a bankruptcy case, with many reporting that the staff time needed per case roughly doubled. For example, trustees told us they require additional administrative and clerical support to help collect and track newly required documents, such as tax returns and wage statements. There also are costs associated with printing, storing, securing, and shredding these documents. The trustees also told us that the means test significantly increased the time spent reviewing documentation. In addition, while individual experiences varied, Chapter 7 and Chapter 13 trustees typically told us that the 341 meetings were taking longer, in part due to more questions about the documents submitted; additional time also is sometimes required to determine the addresses for notifying child support claimants for domestic support obligations. Furthermore, the 341 meetings have been postponed more frequently because of debtors’ delays in gathering the required documentation. In addition, according to the Trustee Program’s notice of proposed rulemaking, the new uniform final reports will require Chapter 7 trustees to spend an estimated 10 additional minutes per case to collect and input newly required information, potentially adding $2,100 a year in costs.
Finally, a representative of the National Association of Chapter 13 Trustees noted that trustees have been required to make significantly more court appearances as a consequence of the additional hearings and litigation that have resulted from the Bankruptcy Reform Act. The caseload of Chapter 7 trustees has declined significantly since the Bankruptcy Reform Act in concert with the decline in filings—from 1.2 million personal and business Chapter 7 bankruptcy filings in fiscal year 2004 to about 484,000 in fiscal year 2007. Chapter 7 trustees are unsalaried and typically work part time in their trustee duties. They collect a fee of $60 for each case they administer, and this amount remained unchanged with the passage of the Bankruptcy Reform Act. In addition, as noted earlier, a provision of the act allows the court to waive the filing fee for qualified Chapter 7 debtors, and for these cases the trustee receives no compensation at all. Further, for cases where there are assets to be liquidated, the Chapter 7 trustee receives a percentage—as prescribed by statute—of the assets distributed to creditors, and also may be reimbursed for certain direct expenses. Although about 95 percent of Chapter 7 filings have traditionally been “no-asset” cases with $60 as the trustee’s sole compensation, Chapter 7 trustees derive the majority of their overall revenues from those few cases involving disbursement of assets. It can take several years to completely disburse available assets. As a result, the dramatic surge in bankruptcy filings just prior to the Bankruptcy Reform Act’s October 2005 implementation resulted in an increase in Chapter 7 trustees’ overall compensation from 2005 to 2007, despite the decline in their caseload.
According to our analysis of Trustee Program data, in fiscal year 2005, Chapter 7 trustees collectively received $191.7 million in total compensation ($111 million from asset disbursements and an estimated $80.7 million from filing fees), while in fiscal year 2007, they received $212.4 million in total compensation ($183.7 million from asset disbursements and an estimated $28.5 million from filing fees). However, these revenues may decline in future years as assets from cases filed in 2005 are disbursed fully. The caseload for Chapter 13 trustees since the Bankruptcy Reform Act also has declined, although less substantially—from 454,412 personal and business Chapter 13 filings in fiscal year 2005 to 310,802 in fiscal year 2007. In contrast to Chapter 7 trustees, Chapter 13 trustees are full time and typically run offices that employ other full-time staff. Chapter 13 trustees’ compensation is based—up to a preset limit—on a percentage of the total payments made to creditors. The Chapter 13 trustee uses these funds to pay for rent, staff, and certain other office expenses. Most Chapter 13 repayment plans are either 3 years or 5 years in length and, as with Chapter 7 trustees, the surge in filings just prior to the Bankruptcy Reform Act has continued to be a source of revenue for Chapter 13 trustees despite the decline in filings. According to data provided by the Trustee Program, in fiscal year 2005, total compensation to Chapter 13 trustees was $31.02 million, averaging $162,432 per trustee. In fiscal year 2007, total compensation was $31.85 million, averaging $165,870 per trustee. Attrition among Chapter 7 and Chapter 13 trustees has not changed significantly since the implementation of the Bankruptcy Reform Act, according to our analysis of Trustee Program data. This analysis found that the rate of attrition—due to resignations, retirements, or terminations—has stayed consistent at approximately 3 percent to 4 percent over the past several years. 
Almost all of the private trustees with whom we spoke told us that they were not likely to leave their position, despite the challenges resulting from the Bankruptcy Reform Act. However, a Trustee Program official noted that the program has not always sought to fill vacancies that have occurred since the act because of the decline in filings. We provided a draft of this report to AOUSC and the Department of Justice for comment. These agencies provided technical comments that we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies of this report to the Ranking Member of the Committee on the Judiciary, U.S. Senate; the Ranking Member of the Committee on the Judiciary, House of Representatives; the Director of the Administrative Office of the United States Courts; the Attorney General; and other interested committees and parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions concerning this report, please contact me at (202) 512-8678 or jonesy@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Our report objectives were to examine (1) new costs incurred as a result of the Bankruptcy Abuse Prevention and Consumer Protection Act of 2005 (Bankruptcy Reform Act) by the Department of Justice and the federal judiciary, (2) new costs incurred as a result of the act by consumers filing for bankruptcy, and (3) the impact of the act on private trustees. Our review focused on the impact of the act on personal and not business bankruptcies. 
Further, the first two objectives examined only the monetary (dollar) costs incurred by federal agencies and consumers, not other ways that the Bankruptcy Reform Act may have affected them. In addition, the scope of this report is limited to costs directly related to the process of filing for bankruptcy, not the overall financial impact the act may be having on consumers. Finally, this report did not seek to assess the benefits of the Bankruptcy Reform Act and is therefore not an evaluation of the merits of the act. To address all of the objectives, we reviewed the relevant provisions of the Bankruptcy Reform Act. We also obtained documentation from, and interviewed representatives of, the Department of Justice’s U.S. Trustee Program (Trustee Program); the federal judiciary, including the Administrative Office of the United States Courts (AOUSC) and selected individual bankruptcy courts; the Congressional Budget Office; and organizations representing consumers, including the National Consumer Law Center, and the financial services industry, including the Financial Services Roundtable. To address the first objective on new costs to the federal government, we reviewed relevant budget-related documents. For the Department of Justice’s Trustee Program, these included its actual or projected annual budgets for fiscal years 2005 through 2009, as well as annual budget and performance summaries, strategic plans, annual reports, and congressional testimonies by Trustee Program officials. For the federal judiciary, we reviewed congressional budget justifications for fiscal years 2003 through 2008, as well as annual reports and congressional testimonies by officials of the Judicial Conference of the United States and AOUSC. We also reviewed internal documentation from AOUSC on activities and timelines for implementing requirements of the Bankruptcy Reform Act.
Since the budget documentation generally did not identify costs specific to implementation of the Bankruptcy Reform Act, we asked the Trustee Program and the federal judiciary to estimate the costs incurred to date specifically as a result of the act, including the cost of allocated staff time. To develop its estimates, the Trustee Program primarily used information from its fiscal year 2006 budget justification, which specified funds needed to address specific provisions of the act. For the costs of debtor audit contracts, information technology, and facilities expansion—which were largely contract costs—the program provided actual obligations. The cost estimates from the judiciary were specific to a set of one-time activities undertaken to initially implement the Bankruptcy Reform Act and were based on a tracking report developed by AOUSC to monitor its efforts to implement the act. We did not verify the estimates provided to us by the Trustee Program and the federal judiciary, although we reviewed and analyzed them and interviewed the staff who provided the estimates to understand how they were created. We determined that the estimates were sufficiently reliable for our purposes. The Bankruptcy Reform Act included provisions authorizing new bankruptcy judgeships, but we did not include the costs of these new judgeships because they had been planned prior to and independent of the act. In addition, we collected and analyzed data on the Trustee Program’s and judiciary’s revenues from bankruptcy-related statutory and miscellaneous filing fees. To address the second objective on new costs to consumers, we reviewed changes in attorney fees and filing fees, as well as fees to fulfill the new credit counseling and debtor education requirements.
To determine changes in attorney fees for Chapter 7 bankruptcy cases, we selected two random and projectable samples of cases (from before and after the Bankruptcy Reform Act) and collected information on the attorney compensation, if any, disclosed in the case file. From AOUSC’s U.S. Party/Case Index, we selected a random sample of 193 Chapter 7 cases that had been filed nationwide during February or March 2005 and had closed within 272 days from the filing date. We chose this time period because it occurred just before the act was enacted. We selected another random sample of 307 cases filed during February or March 2007 that had closed within 272 days from the filing date. We chose this time period because it was about 16 months after the effective date of the Bankruptcy Reform Act; bankruptcy attorneys with whom we spoke said that most significant changes in attorney fees resulting from the act had occurred by that time. For both timeframes, we included only cases that had closed within 272 days of filing to ensure we did not include cases that were still open at the time of our review. From our sample, we excluded business cases since these were outside the scope of our review. We also excluded cases that had converted from Chapter 13 to Chapter 7 because it would not have been possible to determine the extent to which the attorney fee was based on work related to the Chapter 7 filing. Finally, we excluded cases in which necessary data were not accessible from the electronic file (which represented fewer than 3 percent of cases). With these exclusions, we had an effective sample of 176 Chapter 7 cases from February–March 2005 and 292 cases from February–March 2007. Table 5 summarizes the population and sample disposition for the Chapter 7 filings sample. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. 
Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval (for example, plus or minus 6 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true values in the study population. All percentage estimates in this report based on our sample review of Chapter 7 filings have 95 percent confidence intervals of plus or minus 6 percentage points or less, unless otherwise noted. All numerical estimates other than percentages (for example, estimated mean Chapter 7 fees) have 95 percent confidence intervals of within plus or minus 6.3 percent of the value of those estimates, unless otherwise noted. We performed our case file review using a data collection instrument that included uniform questions to ensure data were collected consistently. For each case, we reviewed the docket and relevant documents from the bankruptcy file to determine (1) the attorney fee, if any, disclosed in Form B203, the Disclosure of Compensation of Attorneys for Debtor(s), and any amendments to that form; (2) whether the attorney represented the debtor at no charge (pro bono); (3) whether the debtor filed without an attorney (pro se); and (4) the bankruptcy petition preparer fee, if any, disclosed in Form B280, the Disclosure of Compensation of Bankruptcy Petition Preparer. We relied on data presented in bankruptcy documents filed with the courts by debtors, creditors, and debtor attorneys and electronically stored in the courts’ Public Access to Court Electronic Records system. Bankruptcy courts and U.S. Trustees manage bankruptcy cases and perform some measures to verify data that help ensure the reliability of information provided in these case files. 
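The confidence intervals described above can be approximated with the standard formula for a sample proportion. The sketch below uses a simple normal approximation and ignores any design adjustments GAO's actual estimates may include, so it is illustrative only:

```python
import math

def proportion_ci_95(successes: int, n: int) -> tuple[float, float]:
    """95 percent confidence interval for a sample proportion using the
    normal approximation: p +/- 1.96 * sqrt(p * (1 - p) / n)."""
    p = successes / n
    half_width = 1.96 * math.sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

# Illustration with figures from this report: 19 pro se cases in the
# effective sample of 176 Chapter 7 cases from February-March 2005.
low, high = proportion_ci_95(19, 176)
print(f"{low:.1%} to {high:.1%}")  # about 6.2% to 15.4%
```

The half-width here, roughly 4.6 percentage points, is consistent with the report's statement that its percentage estimates carry 95 percent confidence intervals of plus or minus 6 percentage points or less.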
For example, bankruptcy court officials have measures to ensure that data entered into information systems are accurate. Other measures we used to ensure the reliability of these data included relying on our past work using the U.S. Party/Case Index and Public Access to Court Electronic Records and performing additional steps during our review to compare information between the two systems. For attorney fees for Chapter 13 cases, we collected and analyzed changes since the Bankruptcy Reform Act in standard attorney fees approved by individual judicial districts or divisions—in 48 districts or divisions that collectively accounted for 65 percent of Chapter 13 filings in fiscal year 2007. For each of these districts or divisions, we collected the amount of the standard fee, if any, as of (1) October 2005, just prior to the effective date of the Bankruptcy Reform Act, and (2) February 2008, more than 2 years after the act went into effect. We obtained these data from published local rules or administrative orders, as well as through interviews with relevant Chapter 13 trustees and bankruptcy court personnel. A few districts and divisions had two or more standard fees based on the extent of services provided or the specific characteristics of the case. In such instances, we used the highest fee for both time periods for our analysis, although in one case, we used the mid-level fee because the Chapter 13 trustee told us it was the fee most commonly charged by attorneys in that district. We also collected available data from AOUSC on the number of bankruptcies filed without an attorney (pro se) and spoke with representatives of the National Association of Consumer Bankruptcy Attorneys and the Business Law Pro Bono Project of the American Bar Association’s Center for Pro Bono, and with attorneys at five firms that provide free or reduced-cost legal assistance to bankruptcy filers.
To review filing fees, we reviewed changes to these fees made by the Bankruptcy Reform Act, as amended, and the Deficit Reduction Act of 2005, as well as any changes made by the judiciary to nonstatutory fees. We obtained from AOUSC data on the number of cases in which the court waived the filing fee. To determine costs associated with credit counseling and debtor education requirements, we reviewed information in our prior report, Bankruptcy Reform: Value of Credit Counseling Requirement Is Not Clear (GAO-07-203), and reviewed and analyzed additional fee and waiver data provided to us by the Trustee Program. We also reviewed data provided to us by the National Foundation for Credit Counseling that included its members’ fees for prefiling credit counseling. Finally, we interviewed officials from the Trustee Program’s Credit Counseling and Debtor Education Unit and reviewed provisions of the agency’s proposed rule related to credit counseling fees. To address the third objective on private trustees, we reviewed provisions of the Bankruptcy Reform Act that affect private trustees’ roles and responsibilities, as well as the Trustee Program’s interim guidance and policy and procedure manuals for private trustees. We spoke with Trustee Program staff responsible for overseeing trustees and with officials from the National Association of Bankruptcy Trustees and National Association of Chapter 13 Trustees, two professional associations representing Chapter 7 and Chapter 13 trustees, respectively. We also reviewed published materials from the National Association of Bankruptcy Trustees, including a survey conducted of its members on the impact of the Bankruptcy Reform Act. In addition, we conducted individual and small group interviews of 10 Chapter 7 and 11 Chapter 13 private trustees. These trustees were chosen because they served in districts that represented a range of sizes and geographic regions. 
Finally, we collected and analyzed data from the Trustee Program on attrition rates for private trustees from fiscal years 2003 through 2007. We conducted this performance audit from June 2007 through June 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The “standard fees” provided in table 6 represent standard amounts individual courts approve as reasonable compensation for an attorney representing a Chapter 13 debtor. The districts and divisions shown here collectively accounted for 65 percent of Chapter 13 filings in fiscal year 2007. A few districts and divisions had two or more standard fees. In such cases, the applicable fee is based on the extent of services provided or the specific characteristics of the case, as prescribed by local rules or administrative orders. In addition to the contact named above, Jason Bromberg, Assistant Director; Randy Fasnacht; Cynthia Grant; Carol Henn; Tiffani Humble; Kristeen McLain; Marc Molino; Mark Ramage; Carl M. Ramirez; Omyra Ramsingh; Barbara Roesmann; and Rhonda P. Rose made key contributions to this report.
The Bankruptcy Abuse Prevention and Consumer Protection Act of 2005 (Bankruptcy Reform Act) made significant changes to the administration of bankruptcy relief, affecting (1) the U.S. Trustee Program (Trustee Program), which oversees the bankruptcy process; (2) the federal judiciary, which includes bankruptcy courts and a central administrative support office; (3) consumers filing for bankruptcy; and (4) private trustees—individuals who administer bankruptcy cases and are supervised by the Trustee Program but are not government employees. The number of new personal bankruptcy filings declined after the act—about 600,000 people filed in 2006 as compared to an average of 1.5 million annually between 2001 and 2004. GAO was asked to examine (1) new costs incurred as a result of the Bankruptcy Reform Act by the Trustee Program and federal judiciary, (2) new costs to consumers, and (3) the impact of the act on private trustees. GAO reviewed budget information from the Trustee Program and federal judiciary, and collected data on attorney fees from a random and projectable sample of personal bankruptcy cases. GAO also obtained documentation and interviewed staff from these entities, as well as from organizations representing consumers, bankruptcy attorneys, creditors, and private trustees. The Trustee Program estimated that its costs to carry out responsibilities resulting from the Bankruptcy Reform Act were approximately $72.4 million for fiscal years 2005 through 2007. These costs were mostly for staff time for ongoing activities related to the means test, debtor audits, data collection and reporting, and counseling and education requirements. The federal judiciary could not isolate all costs related to the act since it broadly affected nearly all bankruptcy court staff and operations, but estimated about $48 million was incurred in one-time start-up costs for such things as training and revisions of rules, forms, and procedures.
These estimates do not incorporate the effect of the decline in bankruptcy filings since the act, which presumably has helped reduce the Trustee Program's and judiciary's overall costs, but has also reduced fee revenues. Trustee Program filing fee revenues declined from $74 million to $52 million between fiscal years 2005 and 2007, and federal judiciary filing and miscellaneous fee revenues declined from $237 million to $135 million. Consumers filing for bankruptcy pay higher legal and filing fees since the Bankruptcy Reform Act went into effect. Based on a random sample of bankruptcy files, GAO estimated that the average attorney fee for a Chapter 7 case increased from $712 in February-March 2005 to $1,078 in February-March 2007. For Chapter 13 cases, the standard attorney fees that individual courts approve rose in nearly all the districts and divisions with such fees that GAO reviewed, and in more than half the cases the increase was 55 percent or more. As a result of the act and subsequent budget legislation, total bankruptcy filing fees have risen from $209 to $299 for Chapter 7 and from $194 to $274 for Chapter 13. GAO estimated that the proportion of Chapter 7 debtors filing without an attorney had declined and did not find a significant change in the proportion of such debtors receiving free legal assistance. In addition, fees to meet the act's credit counseling and debtor education requirements are typically about $100, although some clients receive a fee reduction or a full waiver. Private trustees told GAO that new Bankruptcy Reform Act requirements related to documentation, verification, and reporting have increased the time and resources they spend administering each case. The caseload of some private trustees has declined in concert with the significant decline in bankruptcy filings that has occurred since the act went into effect, but trustees' overall rate of attrition has not changed significantly.
We found that the VA reprocessing requirements we selected for review are inadequate to help ensure veterans’ safety. Lack of specificity about types of RME that require device-specific training. The VA reprocessing requirements we reviewed do not specify the types of RME for which VAMCs must develop device-specific training. This inadequacy has caused confusion among VAMCs and contributed to inconsistent implementation of training for reprocessing. While VA headquarters officials told us that the training requirement is intended to apply to RME classified as critical—such as surgical instruments—and semi-critical—such as certain endoscopes, officials from five of the six VAMCs we visited told us that they were unclear about the RME for which they were required to develop device-specific training. Officials at one VAMC we visited told us that they did not develop all of the required reprocessing training for critical RME—such as surgical instruments—because they did not understand that they were required to do so. Officials at another VAMC we visited also told us that they had begun to develop device-specific training for reprocessing non-critical RME, such as wheelchairs, even though they had not yet fully completed device-specific training for more critical RME. Because these two VAMCs had not developed the appropriate device-specific training for reprocessing critical and semi-critical RME, staff at these VAMCs may not have been reprocessing all RME properly, which potentially put the safety of veterans receiving care at these facilities at risk. Conflicting guidance on the development of RME reprocessing training. While VA requires VAMCs to develop device-specific training on reprocessing RME, VA headquarters officials provided VAMCs with conflicting guidance on how they should develop this training. 
For example, officials at three VAMCs we visited told us that certain VA headquarters or VISN officials stated that this device-specific training should very closely match manufacturer guidelines—in one case verbatim—while other VA headquarters or VISN officials stated that this training should be written in a way that could be easily understood by the personnel responsible for reprocessing RME. This distinction is important, since VAMC officials told us that some of the staff responsible for reprocessing RME may have difficulty following the more technical manufacturers’ guidelines. In part because of VA’s conflicting guidance, VAMC officials told us that they had difficulty developing the required device-specific training and had to rewrite the training materials multiple times for RME at their facilities. Officials at five of the six VAMCs also told us that developing the device-specific training for reprocessing RME was both time consuming and resource intensive. VA’s lack of specificity and conflicting guidance regarding its requirement to develop device-specific training for reprocessing RME may have contributed to delays in developing this training at several of the VAMCs we visited. Officials from three of the six VAMCs told us that they had not completed the development of device-specific training for RME since VA established the training requirement in July 2009. As of October 2010, 15 months after VA issued the policy containing this requirement, officials at one of the VAMCs we visited told us that device-specific training on reprocessing had not been developed for about 80 percent of the critical and semi-critical RME in use at their facility. 
VA headquarters officials told us that they are aware of the lack of specificity and conflicting guidance provided to VAMCs regarding the development of training for reprocessing RME and were also aware of inefficiencies resulting from each VAMC developing its own training for reprocessing types of RME that are used in multiple VAMCs. In response, VA headquarters officials told us that they have made available to all VAMCs a database of standardized device-specific training developed by RME manufacturers for approximately 1,000 types of RME and plan to require VAMCs to implement this training by June 2011. The officials also told us that VA headquarters is planning to develop device-specific training available to all VAMCs for certain critical and semi-critical RME for which RME manufacturers have not developed this training, such as dental instruments. However, as of February 2011, VA headquarters had not completed the development of device-specific training for these RME and has not established plans or corresponding timelines for doing so. We found that VA recently made changes to its oversight of VAMCs’ compliance with selected reprocessing requirements; however, this oversight continues to have weaknesses. Beginning in fiscal year 2011, VA headquarters directed VISNs to make three changes intended to improve its oversight of these reprocessing requirements at VAMCs. VA headquarters recently required VISNs to increase the frequency of site visits to VAMCs—from one to three unannounced site visits per year—as a way to more quickly identify and address areas of noncompliance with selected VA reprocessing requirements. VA headquarters also recently required VISNs to begin using a standardized assessment tool to guide their oversight activities. According to VA headquarters officials, requiring VISNs to use this assessment tool will enable the VISNs to collect consistent information on VAMCs’ compliance with VA’s reprocessing requirements. 
Before VA established this requirement, the six VISNs that oversee the VAMCs we visited often used different assessment tools to guide their oversight activities. As a result, they reviewed and collected different types of information on VAMCs’ compliance with these requirements. VISNs are now required to report to VA headquarters information from their site visits. Specifically, following each unannounced site visit to a VAMC, VISNs are required to provide VA headquarters with information on the facility’s noncompliance with VA’s reprocessing requirements and VAMCs’ corrective action plans to address areas of noncompliance. Prior to fiscal year 2011, VISNs were generally not required to report this information to VA headquarters. Despite the recent changes, VA’s oversight of its reprocessing requirements, including those we selected for review, has weaknesses in the context of the federal internal control for monitoring. Consistent with the internal control for monitoring, we would expect VA to analyze this information to assess the risk of noncompliance and ensure that noncompliance is addressed. However, VA headquarters does not analyze information to identify the extent of noncompliance across all VAMCs, including noncompliance that occurs frequently or poses high risks to veterans’ safety. As a result, VA headquarters has not identified the extent of noncompliance across VAMCs with, for example, VA’s operational reprocessing requirement that staff use personal protective equipment when performing reprocessing activities, which is key to ensuring that clean RME are not contaminated by coming into contact with soiled hands or clothing. Three of the six VAMCs we visited had instances of noncompliance with this requirement. 
Similarly, because VA headquarters does not analyze information from VAMCs’ corrective action plans to address noncompliance with VA reprocessing requirements, it is unable to confirm, for example, whether VAMCs have addressed noncompliance with its operational reprocessing requirement to separate clean and dirty RME. Two of the six VAMCs we visited had not resolved noncompliance with this requirement and, as a result, are unable to ensure that clean RME does not become contaminated by coming into contact with dirty RME. VA headquarters officials told us that VA plans to address the weaknesses we identified in its oversight of VAMCs’ compliance with reprocessing requirements. Specifically, VA headquarters officials told us that they intend to develop a systematic approach to analyze oversight information to identify areas of noncompliance across all VAMCs, including those that occur frequently, pose high risks to veterans’ safety, or have not been addressed in a timely manner. While VA has established a timeline for completing these changes, certain VA headquarters officials told us that they are unsure whether this timeline is realistic due to possible delays resulting from VA’s ongoing organizational realignment, which had not been completed as of April 6, 2011. In conclusion, weaknesses exist in VA’s policies for reprocessing RME that create potential safety risks to veterans. VA’s lack of specificity and conflicting guidance for developing device-specific training for reprocessing RME has led to confusion among VAMCs about which types of RME require device-specific training and how VAMCs should develop that training. This confusion has contributed to some VAMCs not developing training for their staff for some critical and semi-critical RME. 
Moreover, weaknesses in oversight of VAMCs’ compliance with the selected reprocessing requirements do not allow VA to identify and address areas of noncompliance across VAMCs, including those that occur frequently, pose high risks to veterans’ safety, or have not been addressed by VAMCs. Correcting inadequate policies and providing effective oversight of reprocessing requirements consistent with the federal standards for internal control is essential for VA to prevent potentially harmful incidents from occurring. To help ensure veterans’ safety through VA’s reprocessing requirements, we are making two recommendations in our report. We recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to take the following actions: Develop and implement an approach for providing standardized training for reprocessing all critical and semi-critical RME to VAMCs. Additionally, hold VAMCs accountable for implementing device-specific training for all of these RME. Use the information on noncompliance identified by the VISNs and information on VAMCs’ corrective action plans to identify areas of noncompliance across all 153 VAMCs, including those that occur frequently, pose high risks to veterans’ safety, or have not been addressed, and take action to improve compliance in those areas. In responding to a draft of the report from which this testimony is based, VA concurred with these recommendations. Chairman Miller, Ranking Member Filner, this concludes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have. For further information about this testimony, please contact Randall B. Williamson at (202) 512-7114 or williamsonr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. 
Individuals who made key contributions to this testimony include Mary Ann Curran, Assistant Director; Kye Briesath; Krister Friday; Melanie Krause; Lisa Motley; and Michael Zose. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony discusses patient safety incidents at Department of Veterans Affairs (VA) medical centers and potential strategies to address the underlying causes of those incidents. VA operates one of the largest integrated health care delivery systems in the United States, providing care to over 5.5 million veterans annually. Organized into 21 Veterans Integrated Service Networks (VISN), VA's health care system includes 153 VA medical centers (VAMC) nationwide that offer a variety of outpatient, residential, and inpatient services. In providing health care services to veterans, clinicians at VAMCs use reusable medical equipment (RME), which is designed to be reused for multiple patients and includes such equipment as endoscopes and some surgical and dental instruments. Because RME is used when providing care to multiple veterans, this equipment must be reprocessed--that is, cleaned and disinfected or sterilized--between uses. VA has established requirements for VAMCs to follow when reprocessing RME, which are designed, in part, to help ensure the safety of the veterans who receive care at VAMCs. This testimony, based on our May 2011 report, which is being released today, examines issues related to veterans' safety, including (1) selected reprocessing requirements established in VA policies, based on their relevance to patient safety incidents and (2) VA's oversight of VAMCs' compliance with these selected requirements. In summary, we found that the VA reprocessing requirements we selected for review are inadequate to help ensure the safety of veterans who receive care at VAMCs. Although VA requires VAMCs to develop device-specific training for staff on how to correctly reprocess RME, it has not specified the types of RME for which this training is required. Furthermore, VA has provided conflicting guidance to VAMCs on how to develop device-specific training on reprocessing RME. 
This lack of clarity may have contributed to delays in developing the required training. Without appropriate training on reprocessing, VAMC staff may not be reprocessing RME correctly, which poses potential risks to veterans' safety. VA headquarters officials told us that VA has plans to develop training for certain RME, but VA lacks a timeline for developing this training. We also found that despite changes to improve VA's oversight of VAMCs' compliance with selected reprocessing requirements, weaknesses still exist. These weaknesses render VA unable to systematically identify and address noncompliance with the requirements, which poses potential risks to the safety of veterans. Although VA headquarters receives information from the VISNs on any noncompliance they identify, as well as VAMCs' corrective action plans to address this noncompliance, VA headquarters does not analyze this information to inform its oversight. According to VA headquarters officials, VA intends to develop a plan for analyzing this information to systematically identify areas of noncompliance that occur frequently, pose high risks to veterans' safety, or have not been addressed across all VAMCs. To address the inadequacies we identified in selected VA reprocessing requirements, GAO recommends that VA develop and implement an approach for providing standardized training for reprocessing all critical and semi-critical RME to VAMCs and hold VAMCs accountable for implementing this training. To address the weaknesses in VA's oversight of VAMCs' compliance with selected requirements, GAO recommends that VA use information on noncompliance identified by the VISNs and information on VAMCs' corrective action plans to identify areas of noncompliance across all 153 VAMCs and take action to improve compliance in those areas.
Although mutual funds already disclose considerable information about the fees they charge, our report recommends that SEC consider requiring that mutual funds make additional disclosures to investors about fees in the account statements that investors receive. Mutual funds currently provide information about the fees they charge investors as an operating expense ratio that shows as a percentage of fund assets all the fees and other expenses that the fund adviser deducts from the assets of the fund. Mutual funds also are required to present a hypothetical example that shows in dollar terms what investors could expect to pay in fees if they invested $10,000 in a fund and held it for various periods. It is important to understand the fees charged by a mutual fund because fees can significantly affect investment returns of the fund over the long term. For example, over a 20-year period a $10,000 investment in a fund earning 8 percent annually, with a 1-percent expense ratio, would be worth $38,122; but with a 2-percent expense ratio it would be worth $31,117—over $7,000 less. Unlike many other financial products, mutual funds do not provide investors with information about the specific dollar amounts of the fees that have been deducted from the value of their shares. Table 1 shows that many other financial products do present their costs in specific dollar amounts. Although mutual funds do not disclose their costs to each individual investor in specific dollars, the disclosures that they make do exceed those of many products. For example, purchasers of fixed annuities are not told of the expenses associated with investing in such products. Some industry participants and others including SEC also cite the example of bank savings accounts, which pay stated interest rates to their holders but do not explain how much profit or expenses the bank incurs to offer such products. 
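The compounding arithmetic behind the 20-year example above can be sketched in a few lines of Python. This is an illustrative assumption, not GAO's published methodology: we assume the expense ratio is deducted from assets after each year's 8 percent gross return, which reproduces the figures cited.

```python
def ending_value(principal, gross_return, expense_ratio, years):
    """Grow principal for `years`, deducting the expense ratio
    from assets after each year's gross return (assumed timing)."""
    annual_factor = (1 + gross_return) * (1 - expense_ratio)
    return principal * annual_factor ** years

# $10,000 invested for 20 years at an 8% gross annual return
low_cost = ending_value(10_000, 0.08, 0.01, 20)   # about $38,122
high_cost = ending_value(10_000, 0.08, 0.02, 20)  # about $31,117
print(round(low_cost), round(high_cost), round(low_cost - high_cost))
```

The one-percentage-point difference in the expense ratio compounds into a gap of over $7,000, which is the point the disclosure debate turns on.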
While this is true, we do not believe this is an analogous comparison to mutual fund fees because the operating expenses of the bank are not paid using the funds of the savings account holder and are therefore not explicit costs to the investor like the fees on a mutual fund. A number of alternatives have been proposed for improving the disclosure of mutual fund fees that could provide additional information to fund investors. In December 2002, SEC released proposed rule amendments, which include a requirement that mutual funds make additional disclosures about their expenses. This information would be presented to investors in the annual and semiannual reports prepared by mutual funds. Specifically, mutual funds would be required to disclose the cost in dollars associated with an investment of $10,000 that earned the fund’s actual return and incurred the fund’s actual expenses paid during the period. In addition, SEC also proposed that mutual funds be required to disclose the cost in dollars, based on the fund’s actual expenses, of a $10,000 investment that earned a standardized return of 5 percent. If these disclosures become mandatory, investors will have additional information that could be directly compared across funds. SEC staff also indicated that placing the disclosures in funds’ annual and semiannual reports will make it easier for prospective investors to compare funds’ expenses before making a purchase decision. However, SEC’s proposal would not require mutual funds to disclose to each investor the specific amount of fees in dollars that are paid on the shares they own. As a result, investors will not receive information on the costs of mutual fund investing in the same way they see the costs of many other financial products and services that they may use. 
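To make the proposed disclosure concrete, the sketch below approximates the dollar cost of a $10,000 investment at the standardized 5 percent return over a six-month reporting period. The averaging convention here (expenses accrued on the average of beginning and ending balances) is our assumption for illustration; the formula SEC actually prescribed may differ in detail.

```python
def expense_example(investment=10_000, annual_return=0.05,
                    expense_ratio=0.01, period_years=0.5):
    """Rough sketch of a dollar-cost disclosure: expenses accrued on
    the average balance over the period, with growth at the stated
    return net of expenses. Illustrative only, not the SEC formula."""
    net_rate = (annual_return - expense_ratio) * period_years
    ending_value = investment * (1 + net_rate)
    avg_balance = (investment + ending_value) / 2
    return avg_balance * expense_ratio * period_years

print(f"${expense_example():.2f}")  # roughly $50 for a 1% expense ratio
```

Under these assumptions, a fund with a 1 percent expense ratio would report a cost of about $50 per $10,000 for the six-month period, a figure investors could compare directly across funds.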
In addition, SEC did not propose that mutual funds provide information relating to fees in the quarterly or even more frequent account statements that provide investors with the number and value of their mutual fund shares. In a 1997 survey of how investors obtain information about their funds, the Investment Company Institute (ICI) indicated that, to shareholders, the account statement is probably the most important communication that they receive from a mutual fund company and that nearly all shareholders use such statements to monitor their mutual funds. SEC and industry participants have indicated that the total cost of providing specific dollar fee disclosures might be significant; however, we found that the cost might not represent a large outlay on a per-investor basis. As we reported in our March 2003 statement for the record to the Subcommittee on Capital Markets, Insurance, and Government Sponsored Enterprises, House Committee on Financial Services, ICI commissioned a large accounting firm to survey mutual fund companies about the costs of producing such disclosures. Receiving responses from broker-dealers, mutual fund service providers, and fund companies representing approximately 77 percent of total industry assets as of June 30, 2000, this study estimated that the aggregate costs for the survey respondents to implement specific dollar disclosures in shareholder account statements would exceed $200 million, and the annual costs of compliance would be about $66 million. Although the ICI study included information from some broker-dealers and fund service providers, it did not include the reportedly significant costs that all broker-dealers and other third-party financial institutions that maintain accounts on behalf of individual mutual fund shareholders could incur. 
However, using available information on mutual fund assets and accounts from ICI and spreading such costs across all investor accounts indicates that the additional expenses to any one investor are minimal. Specifically, at the end of 2001, ICI reported that mutual fund assets totaled $6.975 trillion. If mutual fund companies charged, for example, the entire $266 million cost of implementing the disclosures to investors in the first year, then dividing this additional cost by the total assets outstanding at the end of 2001 would increase the average fee by 0.0038 percent or about one-third of a basis point. In addition, ICI reported that the $6.975 trillion in total assets was held in over 248 million mutual fund accounts, equating to an average account of just over $28,000. Therefore, implementing these disclosures would add $1.07 to the average $184 that these accounts would pay in total operating expense fees each year—an increase of six-tenths of a percent. In addition, other less costly alternatives are also available that could increase investor awareness of the fees they are paying on their mutual funds by providing them with information on the fees they pay in the quarterly statements that provide information on an investor’s share balance and account value. For example, one alternative that would not likely be overly expensive would be to require these quarterly statements to present the information—the dollar amount of a fund’s fees based on a set investment amount—that SEC has proposed be added to mutual fund semiannual reports. Doing so would place this additional fee disclosure in the document generally considered to be of the most interest to investors. An even less costly alternative could be to require quarterly statements to also include a notice that reminds investors that they pay fees and to check their prospectus and with their financial adviser for more information. 
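The per-account arithmetic above can be checked directly from the ICI totals cited in the text ($266 million implementation cost, $6.975 trillion in assets, 248 million accounts, and $184 in average annual operating expenses per account):

```python
# Industry totals at the end of 2001, as cited in the text
implementation_cost = 266e6   # one-time cost of the disclosures, dollars
total_assets = 6.975e12       # total mutual fund assets, dollars
total_accounts = 248e6        # number of mutual fund accounts
avg_annual_expenses = 184     # average operating expenses per account, dollars

fee_increase_pct = implementation_cost / total_assets * 100
avg_account_size = total_assets / total_accounts
cost_per_account = implementation_cost / total_accounts
share_of_expenses_pct = cost_per_account / avg_annual_expenses * 100

print(f"{fee_increase_pct:.4f}% of assets")         # about 0.0038%
print(f"${avg_account_size:,.0f} average account")  # just over $28,000
print(f"${cost_per_account:.2f} per account")       # about $1.07
print(f"{share_of_expenses_pct:.1f}% of expenses")  # about 0.6%
```

These figures match those in the text: spread across all accounts, the one-time cost works out to about a dollar per account, or roughly six-tenths of a percent of the average account's annual operating expenses.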
In September 2003, SEC amended fund advertising rules, which require funds to state in advertisements that investors should consider a fund’s fees before investing and direct investors to consult their funds’ prospectus. However, also including this information in the quarterly statement could increase investor awareness of the impact that fees have on their mutual fund’s returns. H.R. 2420 would require that funds disclose in the quarterly statement or other appropriate shareholder report an estimated amount of the fees an investor would have to pay on each investment of $1,000. S. 1958, like H.R. 2420, would require disclosure of fees paid on each $1,000 invested. S. 1971, among other disclosures, would require that funds disclose the actual cost borne by each shareholder for the operating expenses of the fund. SEC’s current proposal, while offering some advantages, does not make mutual funds comparable to other products or provide information in the document that is most relevant to investors—the quarterly account statement. Our report recommends that SEC consider requiring that additional disclosures relating to fees be made to investors in the account statement. In addition to providing specific dollar disclosures, we also noted that investors could be provided with a variety of other disclosures about the fees they pay on mutual funds that would have a range of implementation costs, including some that would be less costly than providing specific dollar disclosures. However, seeing the specific dollar amount paid on shares owned could be the incentive that some investors need to take action to compare their fund’s expenses to those of other funds and make more informed investment decisions on this basis. Such disclosures may also increasingly motivate fund companies to respond competitively by lowering fees. 
Because the disclosures that SEC is currently proposing be included in mutual fund annual and semiannual reports could also prove beneficial, it could choose to require disclosures in these documents and the account statements, which would provide both prospective and existing investors in mutual funds access to valuable information about the costs of investing in funds. Academics and other industry observers have also called for increased disclosure of mutual fund brokerage commissions and other trading costs that are not currently included in fund expense ratios. In an academic study we reviewed that looked at brokerage commission costs, the authors urged that investors pay increased attention to such costs. For example, the study noted that investors seeking to choose their funds on the basis of expenses should also consider reviewing trading costs as relevant information because the impact of these unobservable trading costs is comparable to the more observable expense ratio. The authors of another study noted that research shows that all expenses can reduce returns so attention should be paid to fund trading costs, including brokerage commissions, and that these costs should not be relegated to being disclosed only in mutual funds’ Statement of Additional Information. Mutual fund officials raised various concerns about expanding the disclosure of brokerage commissions and trading costs in general. Some officials said that requiring funds to present additional information about brokerage commissions by including such costs in the fund’s operating expense ratios would not present information to investors that could be easily compared across funds. 
For example, funds that invest in securities on the New York Stock Exchange (NYSE), for which commissions are usually paid, would pay more in total commissions than would funds that invest primarily in securities listed on NASDAQ because the broker-dealers offering such securities are usually compensated by spreads rather than explicit commissions. Similarly, most bond fund transactions are subject to markups rather than explicit commissions. If funds were required to disclose the costs of trades that involve spreads, officials noted that such amounts would be subject to estimation errors. Officials at one fund company told us that it would be difficult for fund companies to produce a percentage figure for other trading costs outside of commissions because no agreed-upon methodology for quantifying market impact costs, spreads, and markup costs exists within the industry. Other industry participants told us that due to the complexity of calculating such figures, trading cost disclosure is likely to confuse investors. For example, funds that attempt to mimic the performance of certain stock indexes, such as the Standard & Poor’s 500 stock index, and thus limit their investments to just these securities have lower brokerage commissions because they trade less. In contrast, other funds may employ a strategy that requires them to trade frequently and thus would have higher brokerage commissions. However, choosing among these funds on the basis of their relative trading costs may not be the best approach for an investor because of the differences in these two types of strategies. To improve the disclosure of trading costs to investors, the House-passed H.R. 2420 would require mutual fund companies to make more prominent their portfolio turnover disclosure, which, by measuring the extent to which the assets in a fund are bought and sold, provides an indirect measure of transaction costs for a fund. 
The bill directs funds to include this disclosure in a document that is more widely read than the prospectus or Statement of Additional Information, and would require fund companies to provide a description of the effect of high portfolio turnover rates on fund expenses and performance. H.R. 2420 also requires SEC to issue a concept release examining the issue of portfolio transaction costs. S. 1822 would require mutual funds to disclose brokerage commissions as part of fund expenses. S. 1958 would require SEC to issue a concept release on disclosure of portfolio transaction costs. S. 1971 would require funds to disclose the estimated expenses paid for costs associated with management of the fund that reduce the fund’s overall value, including brokerage commissions, revenue sharing and directed brokerage arrangements, transaction costs, and other fees. In December 2003, SEC issued a concept release to solicit views on how SEC could improve the information that mutual funds disclose about their portfolio transaction costs. The way that investors pay for the advice of financial professionals about their mutual funds has evolved over time. Approximately 80 percent of mutual fund purchases are made through broker-dealers or other financial professionals, such as financial planners and pension plan administrators. Previously, the compensation that these financial professionals received for assisting investors with mutual fund purchases was paid either by charging investors a sales charge, or load, or by paying for such expenses out of the investment adviser’s own profits. However, in 1980, SEC adopted rule 12b-1 under the Investment Company Act to help funds counter a period of net redemptions by allowing them to use fund assets to pay the expenses associated with the distribution of fund shares. Rule 12b-1 plans were envisioned as temporary measures to be used during periods of declining assets. 
Any activity that is primarily intended to result in the sale of mutual fund shares must be included as a 12b-1 expense and can include advertising; compensation of underwriters, dealers, and sales personnel; printing and mailing prospectuses to persons other than current shareholders; and printing and mailing sales literature. These fees are called 12b-1 fees after the rule that allows fund assets to be used to pay for fund marketing and distribution expenses. NASD, whose rules govern the distribution of fund shares by broker-dealers, limits the annual rate at which 12b-1 fees may be paid to broker-dealers to no more than 0.75 percent of a fund’s average net assets per year. Funds are allowed to include an additional service fee of up to 0.25 percent of average net assets each year to compensate sales professionals for providing ongoing services to investors or for maintaining their accounts. Therefore, 12b-1 fees included in a fund’s total expense ratio are limited to a maximum of 1 percent per year. Rule 12b-1 provides investors an alternative way of paying for investment advice and purchases of fund shares. Apart from 12b-1 fees, brokers can be paid with sales charges called “loads”; “front-end” loads are applied when shares in a fund are purchased and “back-end” loads when shares are redeemed. With a 12b-1 plan, the fund can finance the broker’s compensation with installments deducted from fund assets over a period of several years. Thus, 12b-1 plans allow investors to consider the time-related objectives of their investment and possibly earn returns on the full amount of the money they have to invest, rather than have a portion of their investment immediately deducted to pay their broker. Rule 12b-1 has also made it possible for fund companies to market fund shares through a variety of share classes designed to help meet the different objectives of investors. 
For example, Class A shares might charge front-end loads to compensate brokers and may offer discounts called breakpoints for larger purchases of fund shares. Class B shares, alternatively, might not have front-end loads, but would impose asset-based 12b-1 fees to finance broker compensation over several years. Class B shares also might have deferred back-end loads if shares are redeemed within a certain number of years and might convert to Class A shares if held a certain number of years, such as 7 or 8 years. Class C shares might have a higher 12b-1 fee, but generally would not impose any front-end or back-end loads. While Class A shares might be more attractive to larger, more sophisticated investors who wanted to take advantage of the breakpoints, smaller investors, depending on how long they plan to hold the shares, might prefer Class B or C shares because no sales charges would be deducted from their initial investments. Although providing alternative means for investors to pay for the advice of financial professionals, some concerns exist over the impact of 12b-1 fees on investors’ costs. For example, our June 2003 report discussed academic studies that found that funds with 12b-1 plans had higher management fees and expenses. Questions involving funds with 12b-1 fees have also been raised over whether some investors are paying too much for their funds depending on which share class they purchase. For example, SEC recently brought a case against a major broker-dealer that it accused of inappropriately selling mutual fund B shares, which have higher 12b-1 fees, to investors who would have been better off purchasing A shares that had much lower 12b-1 fees. Also, in March 2003, NASD, NYSE, and SEC staff reported on the results of jointly administered examinations of 43 registered broker-dealers that sell mutual funds with a front-end load. 
The examinations found that most of the brokerage firms examined, in some instances, did not provide customers with breakpoint discounts for which they appeared to have been eligible. One mutual fund distribution practice—called revenue sharing—that has become increasingly common raises potential conflicts of interest between broker-dealers and their mutual fund investor customers. Broker-dealers, whose extensive distribution networks and large staffs of financial professionals work directly with and make investment recommendations to investors, have increasingly required mutual funds to make additional payments to compensate their firms beyond the sales loads and 12b-1 fees. These payments, called revenue sharing payments, come from the adviser’s profits and may supplement distribution-related payments from fund assets. According to an article in one trade journal, revenue sharing payments made by major fund companies to broker-dealers may total as much as $2 billion per year. According to officials of a mutual fund research organization, about 80 percent of fund companies that partner with major broker-dealers make cash revenue sharing payments. For example, some broker-dealers have narrowed their offerings of funds or created preferred lists that include the funds of just six or seven fund companies, which then become the funds that receive the most marketing by these broker-dealers. In order to be selected as one of the preferred fund families on these lists, the mutual fund adviser often is required to compensate the broker-dealer firms with revenue sharing payments. One of the concerns raised about revenue sharing payments is their effect on overall fund expenses. A 2001 research organization report on fund distribution practices noted that the extent to which revenue sharing might affect other fees that funds charge, such as 12b-1 fees or management fees, was uncertain.
For example, the report noted that it was not clear whether the increase in revenue sharing payments increased any fund’s fees, but also noted that by reducing fund adviser profits, revenue sharing would likely prevent advisers from lowering their fees. In addition, fund directors normally would not question revenue sharing arrangements paid from the adviser’s profits. In the course of reviewing advisory contracts, fund directors consider the adviser’s profits without taking into account marketing and distribution expenses, which also could prevent advisers from shifting these costs to the fund. Revenue sharing payments may also create conflicts of interest between broker-dealers and their customers. By receiving compensation to emphasize the marketing of particular funds, broker-dealers and their sales representatives may have incentives to offer funds for reasons other than the needs of the investor. For example, revenue sharing arrangements might unduly focus the attention of broker-dealers on particular mutual funds, reducing the number of funds considered as part of an investment decision, potentially leading to inferior investment choices and reduced fee competition among funds. Finally, concerns have been raised that revenue sharing arrangements might conflict with securities self-regulatory organization rules requiring that brokers recommend purchasing a security only after ensuring that the investment is suitable given the investor’s financial situation and risk profile. Although revenue sharing payments can create conflicts of interest between broker-dealers and their clients, the extent to which broker-dealers disclose to their clients that their firms receive such payments from fund advisers is not clear. Rule 10b-10 under the Securities Exchange Act of 1934 requires, among other things, that broker-dealers provide customers with information about third-party compensation that broker-dealers receive in connection with securities transactions.
While broker-dealers generally satisfy the 10b-10 requirements by providing customers with written “confirmations,” the rule does not specifically require broker-dealers to provide the required information about third-party compensation related to mutual fund purchases in any particular document. SEC staff told us that they interpret rule 10b-10 to permit broker-dealers to disclose third-party compensation related to mutual fund purchases through delivery of a fund prospectus that discusses the compensation. However, investors would not receive a confirmation and might not view a prospectus until after purchasing mutual fund shares. As a result of these concerns, our report recommends that SEC evaluate ways to provide more information to investors about the revenue sharing payments that funds make to broker-dealers. Having additional disclosures made at the time that fund shares are recommended about the compensation that a broker-dealer receives from fund companies could provide investors with more complete information to consider when making their investment decision. To address revenue sharing issues, we were pleased to see that a recent NASD rule proposal would require broker-dealers to disclose in writing, when the customer first opens an account or purchases mutual fund shares, the compensation that they receive from fund companies for providing their funds “shelf space” or preference over other funds. On January 14, 2004, SEC proposed new rules and rule amendments designed to enhance the information that broker-dealers provide to their customers. H.R. 2420 would require fund directors to review revenue sharing arrangements consistent with their fiduciary duty to the fund. H.R. 2420 also would require funds to disclose revenue sharing arrangements and require brokers to disclose whether they have received any financial incentives to sell a particular fund or class of shares. S.
1822 would require brokers to disclose in writing any compensation received in connection with a customer’s purchase of mutual fund shares. S. 1971 would require fund companies and investment advisers to fully disclose certain sales practices, including revenue sharing and directed brokerage arrangements, shareholder eligibility for breakpoint discounts, and the value of research and other services paid for as part of brokerage commissions. Soft dollar arrangements allow fund investment advisers to obtain research and brokerage services that could potentially benefit fund investors but could also increase investors’ costs. When investment advisers buy or sell securities for a fund, they may have to pay the broker-dealers that execute these trades a commission using fund assets. In return for these brokerage commissions, many broker-dealers provide advisers with a bundle of services, including trade execution, access to analysts and traders, and research products. Some industry participants argue that the use of soft dollars benefits investors in various ways. The research that the fund adviser obtains can directly benefit a fund’s investors if the adviser uses it to select securities for purchase or sale by the fund. The prevalence of soft dollar arrangements also allows specialized, independent research to flourish, thereby providing money managers a wider choice of investment ideas. As a result, this research could contribute to better fund performance. The proliferation of research available as a result of soft dollars might also have other benefits. For example, an investment adviser official told us that the research on smaller companies helps create a more efficient market for such companies’ securities, resulting in greater market liquidity and lower spreads, which would benefit all investors including those in mutual funds.
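The economics of the bundled commission described above can be sketched with hypothetical numbers (the per-share rates and trading volume below are invented for illustration; actual commission rates are negotiated between each adviser and broker-dealer):

```python
# Hypothetical soft dollar illustration. All rates and volumes are assumed.
execution_only_rate = 0.02   # $/share for bare trade execution (assumed)
bundled_rate = 0.05          # $/share including research services (assumed)
shares_traded = 1_000_000    # annual trading volume of the fund (assumed)

execution_cost = shares_traded * execution_only_rate   # cost of execution alone
bundled_cost = shares_traded * bundled_rate            # what the fund actually pays
soft_dollar_portion = bundled_cost - execution_cost    # implicitly buys research

print(f"Execution-only commissions: ${execution_cost:,.0f}")
print(f"Bundled commissions:        ${bundled_cost:,.0f}")
print(f"Implicit research payment:  ${soft_dollar_portion:,.0f}")
```

Under these assumed figures, the fund pays an extra $30,000 in commissions out of fund assets; because brokerage commissions are paid from fund assets rather than reported as an expense-ratio line item, that increment is hard for investors to see, which is part of the transparency concern the report raises.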
Although the research and brokerage services that fund advisers obtain through the use of soft dollars could benefit a mutual fund investor, this practice also could increase investors’ costs and create potential conflicts of interest that could harm fund investors. For example, soft dollars could cause investors to pay higher brokerage commissions than they otherwise would, because advisers might choose broker-dealers on the basis of soft dollar products and services, not trade execution quality. One academic study shows that trades executed by broker-dealers that specialize in providing soft dollar products and services tend to be more expensive than those executed through other broker-dealers, including full-service broker-dealers. Soft dollar arrangements could also encourage advisers to trade more in order to pay for more soft dollar products and services. Overtrading would cause investors to pay more in brokerage commissions than they otherwise would. These arrangements might also tempt advisers to “over-consume” research because they are not paying for it directly. In turn, advisers might have less incentive to negotiate lower commissions, resulting in investors paying more for trades. Under the Investment Advisers Act of 1940, advisers must disclose details of their soft dollar arrangements in Part II of Form ADV, which investment advisers use to register with SEC and must send to their advisory clients. However, this form is not provided to the shareholders of a mutual fund, although information about the soft dollar practices that the adviser uses for particular funds is required to be included in the Statement of Additional Information that funds prepare, which is available to investors upon request. Specifically, Form ADV requires advisers to describe the factors considered in selecting brokers and determining the reasonableness of their commissions.
If the value of the products, research, and services given to the adviser affects the choice of brokers or the brokerage commission paid, the adviser must also describe the products, research and services and whether clients might pay commissions higher than those obtainable from other brokers in return for those products. In a series of regulatory examinations performed in 1998, SEC staff found examples of problems relating to investment advisers’ use of soft dollars, although far fewer problems were attributable to mutual fund advisers. In response, SEC staff issued a report that included proposals to address the potential conflicts created by these arrangements, including recommending that investment advisers keep better records and disclose more information about their use of soft dollars. Although the recommendations could increase the transparency of these arrangements and help fund directors and investors better evaluate advisers’ use of soft dollars, SEC has yet to take action on some of its proposed recommendations. As a result, our June 2003 report recommends that SEC evaluate ways to provide additional information to fund directors and investors on their fund advisers’ use of soft dollars. SEC relies on disclosure of information as a primary means of addressing potential conflicts between investors and financial professionals. However, because SEC has not acted to more fully address soft dollar-related concerns, investors and mutual fund directors have less complete and transparent information with which to evaluate the benefits and potential disadvantages of their fund adviser’s use of soft dollars. To address the inherent conflicts of interest with respect to soft dollar arrangements, H.R. 
2420 would require SEC to issue rules mandating disclosure of information about soft dollar arrangements; require fund advisers to submit to the fund’s board of directors an annual report on these arrangements, and require the fund to provide shareholders with a summary of that report in its annual report to shareholders; impose a fiduciary duty on the fund’s board of directors to review soft dollar arrangements; direct SEC to issue rules to require enhanced recordkeeping of soft dollar arrangements; and require SEC to conduct a study of soft dollar arrangements, including the trends in the average amounts of soft dollar commissions, the types of services provided through these arrangements, the benefits and disadvantages of the use of soft dollar arrangements, the impact of soft dollar arrangements on investors’ ability to compare the expenses of mutual funds, the conflicts of interest created by these arrangements and the effectiveness of the board of directors in managing such conflicts, and the transparency of soft dollar arrangements. S. 1822 would discourage use of soft dollars by requiring that funds calculate their value and disclose it along with other fund expenses. S. 1971 also would require disclosure of soft dollar arrangements and the value of the services provided. Also, it would require that SEC conduct a study of the use of soft dollar arrangements by investment advisers. Since we issued our report in June 2003, various allegations of misconduct and abusive practices involving mutual funds have come to light. In early September 2003, the Attorney General of the State of New York filed charges against a hedge fund manager for arranging with several mutual fund companies to improperly trade in fund shares and profiting at the expense of other fund shareholders. Since then, federal and state authorities’ widening investigation of illegal late trading and improper timing of fund trades has involved a growing number of prominent mutual fund companies and brokerage firms.
The problems involving late trading arise when some investors are able to purchase or sell mutual fund shares after the 4:00 pm Eastern Time close of U.S. securities markets, the time at which funds price their shares. Under current mutual fund regulations, orders for mutual fund shares received after 4:00 pm are required to be priced at the next day’s price. An investor permitted to engage in late trading could be buying or selling shares at the 4:00 pm price knowing of developments in the financial markets that occurred after 4:00 pm, thus unfairly taking advantage of opportunities not available to other fund shareholders. Clearly, to ensure compliance with the law, funds should have effective internal controls in place to prevent abusive late trading. Regulators are considering a rule change requiring that an order to purchase or redeem fund shares be received by the fund, its designated transfer agent, or a registered securities clearing agency, by the time that the fund establishes for calculating its net asset value in order to receive that day’s price. The problems involving market timing occur when certain fund investors are able to take advantage of temporary disparities between the share value of a fund and the values of the underlying assets in the fund’s portfolio. For example, such disparities can arise when U.S. mutual funds use old prices for their foreign assets even though events have occurred overseas that will likely cause significant movements in the prices of those assets when their home markets open. Market timing, although not illegal, can be unfair to a fund’s long-term investors because it provides the opportunity for selected fund investors to profit from fund assets at the expense of long-term investors. To address these issues, regulators are considering the merits of various proposals that have been put forth to discourage market timing, such as mandatory redemption fees or fair value pricing of fund shares.
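The forward-pricing cutoff at the heart of the late-trading problem reduces to a simple rule. A minimal sketch, assuming a 4:00 pm Eastern pricing time and ignoring weekends, market holidays, and time zones:

```python
# Minimal sketch of forward pricing: orders received by the fund's pricing
# time get that day's NAV; later orders must get the next day's price.
from datetime import datetime, date, time, timedelta

PRICING_TIME = time(16, 0)  # 4:00 pm, when the fund strikes its net asset value

def pricing_date(order_received: datetime) -> date:
    """Trade date whose NAV the order should receive under forward pricing.

    Simplified: a real fund would also skip weekends and market holidays.
    """
    if order_received.time() <= PRICING_TIME:
        return order_received.date()                     # same-day price
    return order_received.date() + timedelta(days=1)     # next day's price

# An order at 3:59 pm gets the same-day price; one at 4:30 pm does not.
print(pricing_date(datetime(2003, 9, 3, 15, 59)))  # 2003-09-03
print(pricing_date(datetime(2003, 9, 3, 16, 30)))  # 2003-09-04
```

Late trading is, in effect, a violation of this rule: an order received after the cutoff is nonetheless given the earlier price, letting the late trader act on news the price does not yet reflect.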
To protect fund investors from such unfair trading practices, H.R. 2420 would, with limited exceptions, require that all trades be placed with funds by 4:00 pm and includes provisions to eliminate conflicts of interest in portfolio management, ban short-term trading by insiders, allow higher redemption fees to discourage short-term trading, and encourage wider use of fair value pricing to eliminate the stale prices that make market timing profitable. S. 1958 would require that fund companies receive orders prior to the time they price their shares. S. 1958 would also increase penalties for late trading and require funds to explicitly disclose their market timing policies and procedures. S. 1971 also would restrict the placing of trades after hours, require funds to have internal controls and compliance programs in place to prevent abusive trading, and require wider use of fair value pricing. In conclusion, GAO believes that various changes to current disclosures and other practices would benefit fund investors. Additional disclosures of mutual fund fees could help increase investors’ awareness of the fees they pay and encourage greater competition among funds on the basis of these fees. Likewise, better disclosure of the costs funds incur to distribute their shares and of the costs and benefits of funds’ use of soft dollar research activities could provide investors with more complete information to consider when making their investment decisions. In light of recent scandals involving late trading and market timing, various reforms to mutual fund rules will also likely be necessary to better protect the interests of all mutual fund investors. This concludes my prepared statement and I would be happy to respond to questions. For further information regarding this testimony, please contact Cody J. Goebel at (202) 512-8678. Individuals making key contributions to this testimony include Toayoa Aldridge and David Tarosky. This is a work of the U.S.
government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Concerns have been raised over whether the disclosures of mutual fund fees and other fund practices are sufficiently fair and transparent to investors. Our June 2003 report, Mutual Funds: Greater Transparency Needed in Disclosures to Investors, GAO-03-763, reviewed (1) how mutual funds disclose their fees and related trading costs and options for improving these disclosures, (2) changes in how mutual funds pay for the sale of fund shares and how the changes in these practices are affecting investors, and (3) the benefits of and the concerns over mutual funds’ use of soft dollars. This testimony summarizes the results of our report and discusses certain events that have occurred since it was issued. Although mutual funds disclose considerable information about their costs to investors, the fees and expenses that each investor specifically pays on their mutual fund shares are currently disclosed as percentages of fund assets, whereas most other financial services disclose the actual costs to the purchaser in dollar terms. SEC staff has proposed requiring funds to disclose additional information that could be used to compare fees across funds. However, SEC is not proposing that funds disclose the specific dollar amount of fees paid by each investor, nor is it proposing to require that any fee disclosures be made in the account statements that investors receive. Although some of these additional disclosures could be costly and data on their benefits to investors were not generally available, less costly alternatives exist that could increase the transparency and investor awareness of mutual fund fees, making consideration of additional fee disclosures worthwhile. Changes in how mutual funds pay intermediaries to sell fund shares have benefited investors but have also raised concerns. Since 1980, mutual funds, under SEC Rule 12b-1, have been allowed to use fund assets to pay for certain marketing expenses.
Over time, the use of these fees has evolved to provide investors greater flexibility in choosing how to pay for the services of the individual financial professionals that advise them on fund purchases. Another increasingly common marketing practice, called revenue sharing, involves fund investment advisers making additional payments to the broker-dealers that distribute their funds’ shares. However, these payments may cause the broker-dealers receiving them to limit the fund choices they offer to investors and may conflict with their obligation to recommend the most suitable funds. Regulators acknowledged that the current disclosure regulations might not always result in complete information about these payments being disclosed to investors. Under soft dollar arrangements, mutual fund investment advisers use part of the brokerage commissions they pay to broker-dealers for executing trades to obtain research and other services. Although industry participants said that soft dollars allow fund advisers access to a wider range of research than may otherwise be available and provide other benefits, these arrangements also can create incentives for investment advisers to trade excessively to obtain more soft dollar services, thereby increasing fund shareholders’ costs. SEC staff has recommended various changes that would increase transparency by expanding advisers’ disclosure of their use of soft dollars. By acting on the staff’s recommendations, SEC would provide fund investors and directors with needed information about how their funds’ advisers are using soft dollars.
The Postal Service, the nation’s largest civilian employer, had about 765,000 career employees at the end of fiscal year 1997. Service employees include craft employees, the largest group; EAS; the Postal Career Executive Service (PCES); and others, such as inspectors for the Postal Inspection Service. The Service structure includes headquarters, 11 areas, and 85 performance clusters, with cluster-level employees making up about 96 percent of the Service workforce. For the purposes of this review, we focused on the cluster-level EAS workforce. The EAS workforce consists primarily of employees in EAS 11 through 26 positions. EAS management-level positions begin at EAS 16 and include such positions as postmaster, manager of customer services, and manager of postal operations. At the end of fiscal year 1997, EAS positions totaled 80,238, or about 10 percent of total Service career-level employees. PCES, established in 1979, includes Service senior-level officers and executives in positions such as area vice presidents. At the end of fiscal year 1997, the Service had about 900 employees in PCES positions. We did not include employees in PCES positions in our analyses for this report. According to the Service, one of its corporate goals is a commitment to employees, which includes an effort to provide equal employment opportunities to all employees, take advantage of its diverse workforce, and compete effectively in the communications marketplace. To that end, the Service created its Diversity Development Department in headquarters in 1992, which was to foster an all-inclusive business environment. The head of the Department reports directly to the Deputy Postmaster General. The Department is responsible for, among other things, actively supporting the recruitment, retention, and upward mobility of women and minorities. In addition, the Service’s 1999 Annual Performance Plan includes achieving a diverse workforce as one of its goals. 
To determine the effectiveness of the Service’s diversity development program, the Postal Service Board of Governors commissioned Aguirre International, a contractor, to undertake a 6-month study (May 2, 1997, to Nov. 2, 1997) of workforce diversity at the Postal Service. The study addressed Service personnel and supplier diversity and was issued in October 1997. The report stated that the Service was a leader in meeting affirmative action goals as well as striving for parity between its workforce and the CLF. It also stated, among other things, that women and minorities appeared to be experiencing problems advancing to management jobs at EAS 17 and above positions. The Board of Governors subsequently directed the Service to develop an action plan for dealing with the diversity issues raised by Aguirre. The Service developed an action plan and briefed the Board on the plan in April 1998. In our previous letter, we reviewed promotions to EAS 16 and above positions at four selected performance clusters. Documentation in the promotion files and our discussions with Service officials provided evidence that the Service’s required promotion procedures we reviewed were followed for the 127 fiscal year 1997 promotions at these 4 sites. In addition, for 117 of these promotions, we provided statistical data on the distribution of the specific EEO groups throughout the promotion process stages—applications received, applicants considered best qualified, and applicants promoted. The specific EEO groups discussed in this report include white, black, Hispanic, Asian, and Native American men and women. We did our work from July 1998 through January 1999 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Postmaster General and from Aguirre International’s Director of Operations. The Postal Service’s oral comments and Aguirre’s written comments are discussed near the end of this letter. 
Further details about the scope and methodology of our review can be found in appendix I. The analyses that follow show how the representation of cluster-level women and minority groups (1) compared with their representation in the 1990 CLF; (2) changed between fiscal years 1993 and 1997 in EAS 17 and above positions; (3) among those promoted to EAS 17 and above positions in fiscal year 1997, compared with their representation in EAS 17 and above positions in fiscal year 1997 (before the promotions); and (4) in EAS 17 and above positions, compared with their representation in EAS 11 through 16 positions in fiscal year 1997. We also made similar comparisons for women and minorities involving the remainder of the Postal Service workforce located at the headquarters and area office levels, as detailed in appendix II. Table 1 shows that when we compared fiscal year 1997 data for the Service’s cluster-level workforce with CLF data from the 1990 decennial census, black and Asian men and women and Hispanic men were fully represented, while Hispanic women, Native American men and women, and white women were underrepresented. Specifically, black men and women comprised 11.3 and 9.6 percent, respectively, of the cluster workforce compared with their respective 5.0 and 5.5 percent representation in the CLF; Asian men and women comprised 3.5 and 1.9 percent, respectively, of the workforce compared with their respective 1.5 and 1.3 percent representation in the CLF. However, white and Hispanic women were underrepresented, comprising 22.1 percent and 2.0 percent, respectively, of the workforce compared with their respective 35.3 percent and 3.4 percent CLF representation. White men were represented in the workforce similarly to their level of representation in the CLF. In addition to the cluster-level workforce data presented in table 1, we analyzed similar data for the Service’s headquarters-level and area office- level workforces. 
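The comparison underlying table 1 is a direct one between each group’s share of the Service workforce and its share of the CLF. A minimal sketch using a few of the cluster-level figures quoted above:

```python
# Cluster-level workforce share vs. 1990 CLF share (percentages from table 1).
figures = {
    "black men":      (11.3, 5.0),
    "black women":    (9.6, 5.5),
    "Asian men":      (3.5, 1.5),
    "white women":    (22.1, 35.3),
    "Hispanic women": (2.0, 3.4),
}

for group, (workforce_pct, clf_pct) in figures.items():
    status = "underrepresented" if workforce_pct < clf_pct else "fully represented"
    print(f"{group}: {workforce_pct}% of workforce vs {clf_pct}% of CLF -> {status}")
```

By this measure, white and Hispanic women fall below their CLF shares while black and Asian men and women exceed theirs, matching the findings reported above.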
Table II.1 in appendix II shows that white and Hispanic women and Native American men were underrepresented among all three workforce levels. Native American women were underrepresented among cluster employees and headquarters employees, but not among area office employees. Hispanic men were underrepresented among headquarters and area office employees, while white men were underrepresented among area office employees. Black and Asian men and women were fully represented at all three workforce levels. Figure 1 shows our analysis of the representation of women and minorities at the cluster level in EAS 17 and above positions in fiscal year 1993 compared with fiscal year 1997. As the figure shows, the representation of women and minorities generally increased over this period, although black men’s representation decreased by 0.6 percentage points. White men’s representation also decreased over this period, by about 2.0 percentage points. Table II.2 in appendix II shows this same type of comparison between the 2 fiscal years for women and minorities in EAS 17 and above positions at the headquarters and area office levels. At the headquarters level, in addition to the slight decrease in representation of black and white men as happened at the cluster level, representation of Native American men also showed a slight decrease. At the area office level, the representation of black men, Asian men, and Native American men all generally decreased. Also, at the headquarters and area office levels, the representation of white men decreased. As shown in figure 2, we compared the representation of each EEO group at the cluster level promoted to EAS 17 and above positions in fiscal year 1997 with their representation in EAS 17 and above positions at the cluster level in fiscal year 1997 before the promotions.
Our analysis showed that the representation of women and all minority groups among those promoted was higher than the representation of women and minority groups in EAS 17 and above positions, with the exception of Asian women. Also, the representation of white males in promotions to these higher EAS positions was lower than their representation in the cluster-level workforce. Table II.3 in appendix II shows the same type of information for the same period for the headquarters and area office workforce levels. At the headquarters level, representation of women and all minority groups among those promoted was higher than their representation in EAS 17 and above positions, with the exception of Asian women and black and Native American men. However, at the area office level, representation of white women; Hispanic men and women; and Native American men and women was lower than their representation in EAS 17 and above positions. Also, white men were promoted at a rate lower than their representation at the headquarters and area office levels. Table 2 shows our last comparison, the fiscal year 1997 representation of women and minorities in EAS 17 and above positions with their representation in EAS 11 through 16 positions. We made this comparison because employees in EAS 11 through 16 positions represent the workforce pool from which selections for promotion to EAS 17 and above positions would likely be made. Our analyses in table 2 show that among cluster-level employees, the overall representation of women and minorities in EAS 17 and above positions was lower than it was in EAS 11 through 16 positions in fiscal year 1997—42 percent compared to 61 percent. Table II.4 in appendix II shows variation in the representation of women and minorities in the higher EAS positions at the headquarters and area office levels compared with their representation in EAS 11 through 16 positions. 
Based on our own standards for designing studies and developing methodologies to evaluate programs, we believe that the methodologies used by Aguirre International were generally reasonable, appropriate, and relevant given the established study parameters, including the 6-month time frame in which the study was to be completed and the complexities associated with addressing the sensitive issue of diversity in an organization as large as the Postal Service. In addition, limitations resulting from the study’s parameters, as well as cautions regarding the study’s findings, were noted throughout the report. However, in our review of the Aguirre report, we noted one area of concern: The report stated that it appeared that a glass ceiling impeded the progression of women and minorities to EAS 17 and above positions, but in our opinion, the report did not explicitly define the term glass ceiling or present convincing supporting evidence. At the direction of the Postal Service Board of Governors, the Service contracted with Aguirre International to study the Service’s diversity program. The Board was specifically interested in the Service’s progress in meeting its goal of creating a Service workforce as diverse as the CLF. The Board asked Aguirre to look at several areas, including hiring, promoting, training and development, and contracting. Aguirre was to complete the study within a 6-month period—May 2, 1997, through November 2, 1997. The Aguirre report stated that the study was designed to assess the effectiveness of the Service’s diversity program in eight research areas, which are listed in appendix III of this report. The approach to the study taken by Aguirre researchers involved the use of multiple research methods to research the eight questions (see app. III). 
Aguirre’s report indicated that it had performed numerous data analyses, reviewed written policies and practices, validated a Service database, visited 10 postal sites, and conducted a survey and interviews. Such an approach allowed the issues presented in the report to be discussed from several perspectives, which in our opinion and based on our standards for performing studies and evaluations, was an acceptable methodological approach. For example, Aguirre made what we believe were appropriate adjustments to the 1990 Census CLF data to arrive at compatible postal districts for comparisons. Aguirre staff developed models and adjusted the models to allow for Service hiring requirements and restrictions, such as English language proficiency and veteran’s preference. Using these data, they made numerous comparisons of the Postal workforce to the CLF. In addition, the report indicated that Aguirre staff gathered data from various organizational levels in the Service. It indicated that the staff spoke with Service officials at headquarters and selected sites, a number of Service employees, potential Service employees, and contractors to obtain their perspectives on diversity-related issues in the Service. Aguirre staff also visited selected Service sites and conducted employee surveys and interviews. They arranged focus group discussions with community residents who were viewed as potential employees to gather information about, among other things, their views on barriers to diversity at the Service. They also held focus groups with and interviewed potential contractors to explore the extent to which any known barriers might impede contractors, especially minority-owned contractors, from obtaining Service business. In addition, the Aguirre report referred to organizations with success in the area of diversity and used internal benchmarking to report “promising practices” within the Service. 
Certain study parameters set by the Board of Governors, such as the time frame for the study and the preselection of certain sites, resulted in numerous study limitations. The Aguirre report clearly noted these limitations in appropriate sections, citing appropriate cautions for readers regarding the study's findings. According to the Aguirre Project Director, the 6-month period for the study that was set by the Board of Governors affected the manner in which the study was implemented in a number of ways. She said Aguirre wanted to further analyze the data but ran out of time. She also said that interviews and discussions with Service employees, potential employees, and potential contractors were limited in that Aguirre staff spoke only with individuals located near the sites they visited. Thus, the views of these individuals may not represent the views of similar individuals at other Service sites. Finally, the Aguirre report recognized that the information obtained from Aguirre's visits to postal sites may not be typical of Service sites throughout the country. The Board selected the first 5 of the 10 sites visited because these sites had known diversity problems or were of special interest to particular Board members. This resulted in a highly urban sample of sites. Aguirre attempted to balance these sites by selecting five others based on demographics that were more rural and, according to Aguirre and Service officials, that had achieved some success in the area of diversity. However, even this larger sample of 10 sites had African-American representation that was twice that of the other 75 performance clusters that were not selected for review. Indeed, the report cautioned readers that the views of individuals at these sites could not be generalized to the Service as a whole. As a result, the findings from the site visits may be more indicative of the specific sites selected rather than the status of the Service overall. 
Aguirre stated in its report that it appeared that a glass ceiling existed at positions beginning at EAS 17 for women and minorities. Aguirre did not explicitly define the term glass ceiling. Further, Aguirre officials told us that Aguirre based its finding of the glass ceiling primarily on its analyses of fiscal year 1996 data and comparisons of that data with the CLF and secondarily on discussions it had with Service employees. Specifically, Aguirre compared the level of women and minority representation at the various levels or positions within the EAS with their representation in the CLF. Because the representation of women and minorities in positions beginning at EAS 17 was less than their representation in the CLF, Aguirre stated that it appeared that a glass ceiling began at EAS 17 positions. In addition, the Project Leader for the Aguirre study told us that although Aguirre’s finding of a glass ceiling was supported primarily by its analyses and comparisons of data, the finding was also supported by the views of postal workers, many of whom perceived that barriers existed to the promotion of women and minorities to higher EAS and PCES positions. She said that the views of the Service employees Aguirre interviewed were consistent—that is, barriers, such as a perceived “old boy network,” prevented women and minorities from progressing to EAS 17 and above positions. However, she acknowledged, as did the Aguirre report, that the views expressed by these individuals at these sites could not be generalized to the entire Service workforce. We do not believe that it is appropriate to compare the EEO group representation in specific EAS positions or levels in the Service with the CLF because CLF data are not, nor were they intended to be, broken down into an appropriate pool of employees for such a comparison (i.e., similar positions or levels, as well as individuals with appropriate qualifications for those positions). 
Both the Aguirre Project Director and Project Leader for the study told us that Aguirre used the comparison with the CLF because the Service asked them to. Nevertheless, the Service also disagreed with Aguirre’s glass-ceiling finding on the basis of its comparison of women and minorities in specific EAS positions with the general CLF. Further, we believe that the use of the term glass ceiling in the Aguirre report could be misleading, particularly if the term were to be interpreted by readers in a general sense—that is, an upper limit beyond which few or no women and minorities could pass. Under this definition, and according to our review of workforce and promotion data for EAS 17 and above cluster-level employees in fiscal year 1997, no glass ceiling existed. For example, as shown in table 3, we found that for the cluster level, women and minorities were present in all positions and had been promoted to most of those positions. In addition, the percentage of women and minorities being promoted into these higher EAS positions was generally greater than was their representation in the same positions in fiscal year 1997 (before the promotions). For example, for EAS 17 positions, women and minorities comprised about 54 percent of the positions and received about 58 percent of the promotions. However, both our analyses and Aguirre’s suggest that opportunity may exist for the Service to increase the diversity of its workforce in the higher EAS positions, even though a glass ceiling does not appear to exist. For example, women and minorities were often less represented in the EAS 17 and above positions than they were in the EAS 11 to 16 positions. Service officials stated that the Aguirre report was intended to provide an impression of the overall state of diversity in the Postal Service. 
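The comparison underlying table 3 is simple arithmetic: a group's share of the promotions into a given EAS level is set against that group's share of the positions at that level before the promotions. A minimal sketch of that comparison follows; only the EAS 17 pair (about 54 percent of positions, about 58 percent of promotions) comes from the report, and the EAS 18 figures are hypothetical placeholders.

```python
# Sketch of the table 3 comparison for women and minorities: for each EAS
# level, compare their share of promotions into the level with their share
# of the positions at that level before the promotions. Only the EAS 17
# figures (0.54, 0.58) are from the report; the EAS 18 pair is hypothetical.

def promotion_gap(share_of_positions, share_of_promotions):
    """Positive value: the group's promotion share exceeded its existing
    representation at that level; negative: it fell short."""
    return share_of_promotions - share_of_positions

levels = {
    # level: (share of positions, share of promotions), as fractions
    17: (0.54, 0.58),  # figures cited in the report
    18: (0.50, 0.52),  # hypothetical
}

for level, (positions, promotions) in levels.items():
    gap = promotion_gap(positions, promotions)
    print(f"EAS {level}: promotion share minus representation = {gap:+.2f}")
```

A positive gap at every level, as in the fiscal year 1997 data, is what the report offers against a glass-ceiling reading, even where overall representation remains low.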
In that context, Service officials said that they have accepted the report’s basic message that the Service needs to strengthen its diversity program and have developed and begun implementing a plan to do so. They said that although it was difficult to determine the exact number of recommendations contained in the Aguirre report, they believe the actions they have under way or planned will address the major issues, concerns, and recommendations Aguirre reported. Service officials also said that their initiatives would result in ongoing changes in the way that the Service incorporates diversity into its operations. The Service developed 23 initiatives designed to improve its diversity program and address what it believed to be the Aguirre report’s major issues, concerns, and recommendations. As of December 1998, the Service reported that it had completed implementation of nine of the initiatives and was on schedule for completing the remaining initiatives, with the exception of two initiatives for which completion would be delayed. We did not verify the accuracy of the Service’s estimate of the completion status of initiatives in process nor did we evaluate whether any of the initiatives would resolve the concerns raised by Aguirre. When Service officials reported that a new policy or process had been established to partially or fully address 1 of its 23 initiatives, we obtained available documentation confirming the new policy or process. The Service organized its 23 diversity initiatives into 6 functional groups. Table 4 shows these six groups, the specific initiatives established within each group, Service estimates of the status of its efforts to implement the initiatives, and target completion dates for implementing the initiatives. The projected completion dates shown in the table are those initially established by the Service. As of December 1998, the Service reported that it was progressing in its implementation of the 23 initiatives. 
The Service reported that nine initiatives had been completed, and seven were 90 to 99 percent complete. Of the remaining initiatives, three were estimated to be 80 percent complete, and four ranged from 30 percent to 50 percent complete. Service officials said that initiative 22—using supplier diversity data to measure the success of the Supplier Diversity Program—will be partially delayed because of the need to focus resources on resolving the Year 2000 computer system issue. Also, initiative 23—establishing accountability for complying with the Supplier Diversity Program for all Service employees making purchases—will require more time than initially established so that discussions with buyers on issues associated with accountability for supplier diversity can occur. According to Service Diversity Development officials, their statement that initiatives were 100-percent complete indicated that, in some cases, a policy, process, procedure, or plan had been developed and approved but that the relevant actions covered by the policy, process, procedure, or plan were still ongoing. However, for other completed initiatives, no further actions were to be taken. For example, for initiative 1, after a new Diversity Development policy statement was issued, no further actions to implement this initiative were considered necessary. This was also the case for initiatives 2 and 3—revising the Diversity Business Plan and establishing a Diversity Oversight Group. 
However, for initiatives 4 (evaluating the current Diversity Development Organization and staff and establishing appropriate headquarters and field staffing), 6 (establishing an economic incentive for attaining diversity targets), 16 (expanding Supplier Diversity Program communications), 18 (linking local buying to the commitment for the Supplier Diversity Program), and 20 (making it easier for suppliers to participate more effectively in the postal purchasing process), actions associated with these initiatives were still under way. Likewise, some other initiatives may involve additional action after the Service designates them 100-percent complete. Service Diversity Development officials said that they plan to monitor the implementation of new policies, processes, procedures, or plans covered by the 23 initiatives, at least on a quarterly basis, until they become standard operating procedures. Service officials also told us that they expected the monitoring process to be operational by the spring of 1999 and that, consequently, the scopes, completion dates, and implementation status for some of the initiatives could change. Service officials said that the Board of Governors did not request that they address all of Aguirre’s recommendations. Rather, they were asked to develop initiatives that they believed would help improve diversity at the Service and result in improvements in the way that the Service incorporated diversity in its operations, thereby improving Service diversity overall. They said that they believed their initiatives have addressed Aguirre’s major issues, concerns, and recommendations. Service officials noted that determining the exact number of Aguirre’s recommendations was difficult because recommendations were noted in several locations in the report and many of them appeared to be duplicative. Service officials also noted that it was sometimes unclear as to whether Aguirre’s statements were intended as recommendations or just observations. 
We also found it difficult to determine with precision the number of specific Aguirre recommendations for the same reasons the Service cited. For example, in chapter 5 of its report, Aguirre stated that the Service may want to do further study of the employees it classifies as American Indian/Alaskan Native since many of the employees in this category consider themselves to be something else. It is not clear whether Aguirre intended this statement to be a recommendation or an action the Service could consider. Also, the Service’s initiative 1 as shown in table 4 was designed to address five different Aguirre recommendations, all of which seemed to be directed at the same concern—developing and issuing a clear corporate policy on diversity. Service officials said that other recommendations by Aguirre called for actions that the Service was already taking or planned to take. For example, Aguirre recommended that the Service define the attrition rate that can be predicted using age and past performance for trainers and EEO experts. The Service said that this information would be available from its New Workforce Planning Model, which was already in the design phase of development. Service officials said that several of Aguirre’s recommendations seemed to be based on inaccuracies or misstatements about current Service policies and procedures. For example, Aguirre reported that the Service usually selects bidders with the lowest price. Aguirre recommended that bidder selection should consider other criteria, such as quality of the processes and products, as well as price. Service officials told us that they did not accept this recommendation because it is already their general policy to make awards based on “best value” not lowest price. Further, Service officials said that for some of Aguirre’s recommendations, they found no basis or rationale and did not plan to implement them at this time. 
For example, Aguirre recommended that a minimum of 7 percent of the Service’s total contract dollars be awarded to minority suppliers. Service officials said that they did not find any supporting rationale for this recommendation, and they believed that the Service’s current goal of 6 percent of total contract dollars to be awarded to minority businesses by 2002 was appropriate. The Service collects a variety of diversity-related data and has a number of initiatives under way in response to the Aguirre report that are designed to improve its data collection methods and use as well as to enhance its ability to meet its diversity goals and objectives. The Service is also in the process of establishing targets and measures to use in assessing its progress toward meeting its diversity goals and objectives. However, the Service does not have reliable data on the flow of applicants through its promotion processes that would help it to identify and remove any barriers to the promotion of women and minorities. The Service collects a wide variety of diversity data that are primarily related to its program areas, such as Purchasing and Materials. Managers of these program areas, in coordination with the Service’s Diversity Development Department, are to use these data to help achieve program goals and Service diversity goals. For example, the Purchasing and Materials Department is to collect data on the dollar size and number of contracts awarded to women and minority-owned businesses. The Aguirre report, while acknowledging that the Service collects a substantial amount of diversity-related data, made a number of comments, observations, and recommendations to the Service related to gathering, using, and monitoring such data. At least 5 of the Service’s 23 initiatives (initiatives 5, 6, 8, 18, and 22) involve some of the issues raised by Aguirre about gathering and using diversity-related data. 
For example, Aguirre observed that the Service did not systematically track credit card purchases by gender or EEO group and thus data on the differential impact of the credit card program on women and minority contractors are not available. The Service plans to address this issue through initiative 18, which is aimed at improving supplier diversity. In November 1998, the Service released its 1999 Annual Performance Plan related to its performance goals, objectives, and associated measures as part of its implementation of the Government Performance and Results Act of 1993 (Results Act). Within the plan, the Service identified a goal of improving employee and organizational effectiveness. The plan also stated that one of the subcomponents of that goal was the strategy to “manage and develop human capital.” Under that strategy, the plan identified the need to “achieve a diverse workforce.” Further, the Annual Performance Plan stated that based on the Aguirre study’s findings and recommendations, the Service had prepared a diversity development action plan to promote the hiring of women and minorities, improve recruitment hiring and promotion activities, and develop indicators to measure progress linked to this strategy. In addition, the Service’s Diversity Business Plan, dated December 3, 1998, supports the Service’s strategic plan. The business plan contains four principal diversity objectives, which, according to Diversity Development officials, are to be used in partnership with other organizational functions to develop programs and initiatives that will help achieve Service diversity goals. The four objectives are (1) articulate a clear diversity message; (2) ensure the representation of all employee groups in all levels of Postal Service employment; (3) create a work environment that is free from discrimination and sexual harassment; and (4) establish and maintain a strong, competitive, and diverse supplier base. 
According to the Manager of Diversity Policy and Planning, now that the business plan has been approved, the Service is in the beginning stages of developing specific targets and measures that would help the Service track its progress in meeting its diversity goals and objectives. According to the Service, methods to evaluate and measure success will be completed no later than March 30, 1999. Along with the establishment of diversity goals and objectives, the establishment of specific targets and measures will help the Service to focus the efforts of its numerous organizational units, achieve accountability, gauge progress, and meet goals. Although the Service has had a requirement for many years that its managers are to collect applicant data for EAS promotions and enter that data into a central electronic database, according to the Service, most locations have fallen behind in entering these data into the system. Thus, the Service has not been in the best position to analyze data on women and minorities as they move, or do not move, through the Service’s promotion process or to determine if and for what reason impediments or barriers exist to the promotion of women and minorities to higher levels of responsibility in the Service, generally, and within the EAS, specifically. The Vice President of Human Resources, in February 1997, sent a memorandum to area and district human resource managers reminding them that the requirement to collect applicant-flow data was still effective. She noted that such information was critical to Service efforts to examine the promotion process for continuous improvement. Although recognizing that managers were facing various priorities, she asked that managers develop a plan for collecting and entering past applicant data into the Promotion Report System. 
She also noted that this automated system was the source of data for the Applicant Flow Tracking System (AFTS), a system vital to the Diversity Development Department’s responsibility for reporting promotion demographics. According to a manager in the Service’s Human Resources Department, the Service has had a centralized, computer-based tracking system in place for the last 10 years—the AFTS—which is to track diversity data related to promotions within the Service. He acknowledged, however, that participation in this system varies across Service units. Some units have consistently entered the data into the AFTS as required, while others have never entered the data. Another manager in Human Resources said that this inconsistent use of the AFTS and subsequent incomplete data in the system have occurred because unit managers have few incentives to see that the data are entered into the system because the system is not tied to any essential information system, such as accounting and payroll or the employee master file. In addition, he said that there have been few or no consequences to these managers for not doing so. Because of the unreliability of the AFTS database, the Service has to use the Employee Master File and a separate personnel action database to obtain race, ethnicity, and gender data for those applicants who are promoted; the Service cannot readily compile and use this information on applicants seeking promotion. A reliable and complete database on all applicants would (1) provide an essential baseline against which to assess the promotion progress of specific EEO groups and (2) help the Service identify and remove or reduce the impact of barriers to the promotion of women and minorities. For example, during our initial review in response to your request, we noted that there were no Hispanic women applicants for promotion to EAS levels 17 and above in the Service’s Atlanta performance cluster in fiscal year 1997. 
The Service could use this type of information to (1) determine whether any problems or barriers existed in the cluster that had caused this situation, and if so, (2) take appropriate corrective action. In fiscal year 1997, overall women and minority representation in the Service’s cluster-level workforce did not parallel that of the 1990 CLF. Relative to their representation in the CLF, several specific EEO groups were fully represented, while others were underrepresented. Also, in fiscal year 1997, women and minorities were generally promoted to EAS 17 and above positions in percentages higher than or close to their workforce representation in the three workforce levels—cluster, headquarters, and area offices. As of September 1997, women and minorities were present in all EAS 17 and above positions and generally had been promoted to EAS 17 and above positions during 1997 in the three workforce levels. Nonetheless, as of September 1997, women and minority representation was generally lower in EAS 17 and above positions than it was in EAS 11 through 16 positions. Overall, given the short time frame and preselection of sites that resulted in certain study limitations, we believe that the multiple methodologies Aguirre used for its study were reasonable, relevant, and appropriate. However, Aguirre’s finding that a glass ceiling appeared to exist at positions beginning at EAS 17 could be misleading. Evidence that Aguirre cited to support this finding was not convincing, and according to our analysis, women and minorities were generally represented in and were being promoted to EAS 17 and above positions, albeit at varying percentages, for the period we reviewed. Neither the Service nor we could determine the exact number of recommendations made by Aguirre. Nevertheless, the Service is making progress in implementing the 23 initiatives it developed in response to the Aguirre report, which are aimed at strengthening its diversity program. 
We believe that the Service's plan to continue monitoring the implementation of the policies, processes, procedures, and plans covered by its 23 initiatives is especially important because the Service has designated some initiatives as complete once such policies, processes, procedures, and plans were developed and approved, even though specific actions required by some of these initiatives may still be ongoing. Service initiatives to better capture and use data in response to the Aguirre study appear reasonable. However, the Service has not yet (1) established and implemented targets and measures for tracking the Service's progress in meeting its diversity goals and objectives or (2) fully captured or used EEO data on applicants as they progress, or do not progress, through the Service's promotion process. The Service has developed diversity goals and objectives, and now that its Diversity Business Plan has been approved, is in the process of developing specific targets and measures for assessing its progress in meeting its goals and objectives. However, the Service is not capturing reliable EEO data on promotion applicants' progress through the promotion process. Although we recognize that collecting and using EEO data on promotion applicants will require additional effort, such data are important for identifying problems and barriers affecting women and minorities in the promotion process. We recommend that the Postmaster General ensure that appropriate Service officials capture EEO group data in the AFTS and use these data to help improve the Service's diversity program, including the identification of any barriers that might impede promotions to high-level EAS positions. On February 4, 1999, we were informed by the Postal Service that the Vice President of Diversity Development and the Vice President of Human Resources concurred with the information provided in the draft report. 
In addition, the Vice President of Human Resources stated that, in response to our recommendation, she would reemphasize to the field the need to enter data into the Promotion Report System, which is the source of the data for the AFTS. Also, she stated that once the data are complete and reliable, they can be used as a tool to identify the point that impedes the promotions of applicants to high-level EAS positions. On January 28, 1999, Aguirre provided written comments stating that it found our report to be instructive and informative. Aguirre noted the conditions under which its study was done, such as a charged atmosphere at the Service and the short time frame for the study. Aguirre also noted differences between the scope of its study and ours, such as its (1) use of fiscal year 1996 data compared to our use of fiscal year 1997 data and (2) inclusion of PCES data, which our review excluded. Aguirre also pointed out that it found clear distinctions in perceptions about the types of positions within the EAS levels, and that to do a thorough analysis, one should look at these differences. For example, Aguirre said it found that women were overrepresented in the attorney area and in rural postmaster jobs and underrepresented in more "power and influence" positions. We believe that Aguirre was suggesting that these differences in scope could account for differences between the results of its study and ours. We used fiscal year 1997 data in our analysis because it was the latest period for which complete data were available. We did not include PCES positions in our analysis because we were asked to analyze the Service's EAS workforce. An analysis of any perceived or actual differences in representation of women and minorities among types of EAS positions was beyond the scope of our review. 
Nevertheless, even with these differences in scope, we do not believe that there were significant differences between the results of our work and Aguirre’s study results in those areas that we both addressed. Both reports point out that women and minorities were less represented in higher EAS positions than they were in lower EAS positions. In addition, our report does not take issue with Aguirre’s view that barriers may exist to the promotion of women and minorities to high-level EAS positions. Aguirre further stated that it stood behind its conclusion that there seemed to be a drop in the numbers of women and minorities somewhere around the EAS 17 through 22 level based on data presented in its report. Aguirre said that these data were coupled with the views of Service employees it interviewed who believed that a barrier, or “in their terms, a glass ceiling” existed near or around this EAS level. However, our concern is that Aguirre’s use of the term glass ceiling in its report could be misleading because (1) Aguirre did not define the term glass ceiling in its report; (2) the data in its report did not, in our view, support the existence of a glass ceiling as defined in the general sense, that is, an upper limit beyond which few or no women and minorities could advance; and (3) data in both Aguirre’s report and in our report showed that women and minorities were represented in and were promoted to levels above EAS 17, showing the advancement of women and minorities. The Postal Service raised a similar concern about Aguirre’s use of the term glass ceiling. Nevertheless, we agree with Aguirre that opportunity may exist for the Service to increase diversity at higher EAS levels, and our report recommends that the Service ensure that appropriate EEO group data are captured and used so that any barriers impeding the promotion of women and minorities to high-level EAS positions can be identified. 
Aguirre said that our report lacked a discussion of the “feeder flow” from which Postal employees move into higher level EAS positions. We believe, however, that our report addressed this issue, at least in part, through our analysis of the diversity of the Service’s EAS 11 through 16 workforce, which forms the pool from which promotions to EAS 17 and above positions would likely come. Finally, Aguirre provided several technical comments, which we considered and included in our report as appropriate. We are sending copies of this report to the Chairman and Ranking Minority Member of the Subcommittee on the Postal Service, House Committee on Government Reform; the Chairman and Ranking Minority Member of the Subcommittee on International Security, Proliferation, and Federal Services, Senate Committee on Governmental Affairs; the Postmaster General; and Aguirre International. We will also make copies available to others on request. If you have any questions concerning this report, please call me on (202) 512-8387. Major contributors to this report are listed in appendix IV. This report, which follows our previous letter on selected promotions of women and minorities to Executive and Administrative Schedule (EAS) management-level positions, provides (1) information about the overall extent to which women and minorities have been promoted to or are represented in EAS management-level positions in the Postal Service; (2) our observations on the methodology used by a private contractor, Aguirre International, to study workforce diversity at the U.S. Postal Service; (3) the status of the Service’s efforts to address the recommendations in the Aguirre report; and (4) our analysis of whether the Service could better capture and use data to achieve its diversity objectives. 
To determine the overall extent to which women and minorities have been promoted to or are represented in EAS management-level jobs, we obtained Service workforce statistics from its Diversity Development Department and annual promotion statistics for career-level employees, with the exception of the Postal Career Executive Service (PCES), from the Human Resources Information Systems Office. The Diversity Development Department, in conjunction with the Service’s Minneapolis Data Center, provided us with data tapes containing information related to the equal employment opportunity (EEO) composition of the Service career-level workforce for Service fiscal years 1993 through 1997. We chose to focus our analysis on these years since major downsizing and other changes occurred in the Service in 1992 because of an extensive reorganization. Data from fiscal year 1998 were not available at the time of our analysis. The data we used included EAS level; race, national origin, and gender; location of employee; number of employees by EEO group; and civilian labor force (CLF) statistics for each EEO group. We did not verify these data by comparing them to original source documents. We obtained information on promotions from the Service’s Human Resource Information Office; this information was compiled from the Employee Master and Payroll Accounting files. Using the “nature of action” code from Forms 50, Notice of Personnel Action, we identified career-level employees who had been promoted, by EAS level, throughout the Service. We used this information to assess the extent of promotions to specific EAS positions by EEO groups in the Service. Our limited verification of these promotion data against the promotions reviewed at the three areas reported on in our previous letter showed the data to be accurate. We used this information to construct a workforce profile by EEO group at three workforce levels—headquarters, area offices, and performance clusters.
In our analysis, we included all career-level employees from each performance cluster; employees reporting to area offices, whether they were located in an area office or a cluster facility; and headquarters’ employees, including employees physically housed at L’Enfant Plaza in Washington, D.C., as well as those reporting to headquarters but located elsewhere. We analyzed data provided by the Service for the three groups of employees: (1) cluster-level employees, who represented 732,112 (or 95.7 percent) of the approximately 765,000 career-level employees at the Service at the end of fiscal year 1997; (2) area office employees, who represented 21,864 (2.9 percent) of the career-level employees; and (3) headquarters’ employees, who represented 10,707 (1.4 percent) of the career-level employees. We looked at employees in the three workforce levels because responsibility and authority for diversity are divided among these three levels. To provide some context for the results of our analysis, we first compared the 1997 Service data with CLF data from the 1990 decennial census separately for the three workforce levels of employees. We used figures from the 1990 census because this was the comparative baseline used by the Service and by Aguirre International in its study. We recognize there are more recent estimates that would have accounted for the changes in the population, especially in the Hispanic and Asian subpopulations in certain areas. However, these estimates are not broken down to a geographic level that is comparable to Service performance clusters. Regarding promotions of women and minorities as well as the Aguirre report’s finding of a glass ceiling at EAS 17 and above positions, we did several analyses: First, we considered how the representation of each of the 10 EEO groups in EAS 17 and above positions had changed between fiscal years 1993 and 1997.
Second, we considered whether the percentages of employees in each of the 10 EEO groups (i.e., white, black, Hispanic, Asian, and Native American men and women) that were promoted to EAS 17 and above positions during fiscal year 1997 were greater or less than the percentages of employees in each of the 10 EEO groups that were employed in those positions at the beginning of fiscal year 1997 (before the promotions). We computed a ratio statistic to express the percentage of employees in each of the 10 EEO groups that were promoted to EAS 17 and above positions during fiscal year 1997 compared with the percentage of employees in each group already employed in EAS 17 and above positions before the promotions. The ratio of 1.23 for black men, for example, was the percentage of all promotions going to black men (10.85 percent) divided by the percentage of the cluster-level workforce at EAS 17 and above that was black men (8.81 percent) at the beginning of fiscal year 1997. These same comparisons and ratios were done separately for cluster, headquarters, and area office employees. Finally, we considered how the representation of the various groups of women and minorities in higher level EAS positions (17 through 30) compared with their representation in the lower level EAS positions (11 through 16). To provide observations on the methodology used by Aguirre International in its study of workforce diversity at the Service, we reviewed the Aguirre report and the methodologies used in relation to the study’s objectives, limitations, and findings. In addition, we reviewed both the comments from the Advisory Diversity Team on Aguirre’s draft report and Aguirre’s response to Service questions. We also interviewed the Project Director for the Aguirre study. We reviewed a copy of the contract and statement of work between the Service and Aguirre International, and discussed the report with the two secretaries to the Board of Governors.
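The ratio statistic described above reduces to a single division. The following is a minimal sketch of that computation (the function name and structure are ours, not GAO's or the Service's; only the two percentages for black men come from the text):

```python
# Sketch of the promotion-to-representation ratio used in the analysis.
# Input percentages for black men (10.85 and 8.81) are the figures cited
# in the text; the function itself is an illustrative reconstruction.

def promotion_ratio(pct_of_promotions: float, pct_of_workforce: float) -> float:
    """Ratio of a group's share of FY 1997 promotions to EAS 17 and above
    to its share of the EAS 17-and-above workforce before those promotions.
    A ratio above 1.0 means the group was promoted at a rate sufficient
    to increase its representation at those levels."""
    return pct_of_promotions / pct_of_workforce

# Black men among cluster-level employees:
ratio = promotion_ratio(10.85, 8.81)
print(round(ratio, 2))  # 1.23, matching the ratio cited in the text
```

The same division was repeated for each of the 10 EEO groups, separately for cluster, headquarters, and area office employees.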
We also looked at the Aguirre study’s methodology in relation to the U.S. Equal Employment Opportunity Commission’s guidance and our previous work on diversity-related issues. To provide information on the status of the Service’s efforts to address the Aguirre report’s recommendations, we reviewed the Service’s response to the study as well as several status reports prepared by the Diversity Oversight Committee, which is a Servicewide committee established to oversee the implementation of the Service’s response to the Aguirre report. We also interviewed the Vice President of Diversity Development as well as the manager in charge of the Supplier Development and Diversity program in the Purchasing and Materials Department concerning the Aguirre report’s recommendations, among other things. We reviewed the Service’s action plan, which laid out 23 initiatives and was prepared in response to the Aguirre report. We limited our verification of the implementation status of the 23 initiatives to obtaining and reviewing available relevant documents, such as plans and directives, prepared by the Service. To determine whether the Service could improve its capture and use of diversity-related data, we reviewed (1) diversity-related data historically collected and used by the Service; (2) Aguirre’s recommendations related to data collection and the Service’s response to them; (3) Service documents prepared in response to the Results Act; and (4) Service documents related to the AFTS. In addition, we interviewed knowledgeable Service officials and Aguirre’s Project Director. We did our work from July 1998 through January 1999 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Postmaster General and Aguirre International’s Director of Operations. The Postal Service’s oral comments and Aguirre’s written comments are discussed near the end of the letter. 
The following tables present information on women and minority representation at the three Service workforce levels—the cluster, headquarters, and area office levels—and include the following comparisons for women and minorities:
- representation at the three workforce levels as of the end of fiscal year 1997 compared with their representation in the 1990 CLF (table II.1);
- changes in women and minority representation at EAS 17 and above positions at the three workforce levels for fiscal years 1993 and 1997 (table II.2);
- promotions to EAS 17 and above positions as of the end of fiscal year 1997 compared with women and minority representation in those positions at all three workforce levels during fiscal year 1997 before the promotions (table II.3); and
- women and minority representation in EAS 17 and above positions compared with their representation in EAS 11 through 16 positions (table II.4).
Table II.1 shows that when comparing Service data as of the end of fiscal year 1997 with CLF data from the 1990 decennial census, black and Asian men and women were fully represented, while white and Hispanic women and Native American men were underrepresented at headquarters, in the area offices, and among cluster-level employees. Native American women were also underrepresented among the large group of cluster employees as well as among headquarters personnel. In addition, white men were underrepresented among area office employees, while Hispanic men were underrepresented at the headquarters and area office levels. As shown in table II.2, we determined how the representation of the 10 EEO groups in the higher EAS positions had changed between fiscal years 1993 and 1997. White and black men were the only EEO groups that decreased in their representation among all three workforce levels at EAS 17 and above positions during this period.
Native American men also decreased in their representation among employees at high-level EAS positions at headquarters and area offices, and Asian men decreased slightly in their representation among employees at high-level EAS positions at the area offices. As shown in table II.3, we determined whether the percentages of employees in each of the 10 EEO groups that were promoted to EAS 17 and above positions during fiscal year 1997 were greater or less than the percentages of employees in each of the 10 EEO groups employed at those levels at the beginning of fiscal year 1997 (before the promotions). Asian women were the only group other than white men, among cluster-level employees, who were not promoted during fiscal year 1997 to EAS 17 and above positions in numbers that would have been sufficient to increase their representation in those higher EAS positions. This was also true for black men, Asian women, and Native American men among headquarters’ employees. Among area office employees, the percentages of white women and Hispanic and Native American men and women promoted to EAS 17 and above positions were not as large as the percentages employed at those higher levels. White men were the only group for which the percentages of promotions to EAS 17 and above positions were lower than the percentages already employed in those positions across all three workforce levels. As shown in table II.4, we determined whether, as of the end of fiscal year 1997, the representation of various EEO groups of minority men and women employed in EAS 17 and above positions resembled their representation in EAS 11 through 16 positions. Among cluster-level employees and headquarters employees, all EEO groups of women—but none of the groups of men, except black men at headquarters and Asian men at the cluster level—were less well represented in EAS 17 through 30 positions than they were in EAS 11 through 16 positions.
Among area office employees, Hispanic men and Asian and Native American men and women fared better, while black men, like black and Hispanic women, were less well represented in EAS 17 and above positions compared with EAS 11 through 16 positions. Table III.1 provides the details of the primary methodologies used by Aguirre researchers to develop answers to the eight research questions on which the study was based. As shown in the table, Aguirre researchers used multiple methods to research the questions, including extensive data analysis.

Table III.1: Aguirre Study’s Eight Research Areas and the Methodological Approach Taken

(1) How does the composition of the postal workforce by race/national origin and gender compare to the population nationally and locally?
Methodologies used by Aguirre researchers:
- Developed statistical analysis of (1) Census CLF data and (2) Service workforce data at national and local levels
- Created models for mapping Census data into race and national origin
- Did Service workforce trend analysis
- Reviewed Service written policies and practices for hiring
- Interviewed Service national and local staff
- Analyzed Service workforce data
- Compared local Service workforce data with CLF data
- Interviewed potential employees

(3) Does the Diversity Reporting System provide accurate information on the race and national origin of Service employees?
- Reviewed written Service policies and practices in assigning employees to race/national origin categories; also interviewed relevant Service staff at national and local levels
- Analyzed two data files: Active Employee Reference file and Personnel Actions file, extracted from Notice of Personnel Action, Form 50
- Surveyed sample of employees selected from Diversity Reporting System to verify race and national origin

(4) Do promotion policies and practices result in promotions that are proportionate to the number of minority groups represented in the workforce, nationally and locally?
- Reviewed Service’s written policies and practices for promotions
- Interviewed Service staff at national and local levels
- Analyzed Service workforce data for distribution of annual promotions by level and EEO group, and compared the data with CLF data

(5) How well do Training and Development Programs address diversity needs?
- Interviewed training and diversity staff in each of the 10 sites as well as in

(6) How effectively does Postal Service contracting and subcontracting with minority-owned businesses support diversity goals, nationally and locally?
- Compared Service’s diversity program in the area of contracting with that of

(7) How does the Postal Service Diversity Program compare with those of other large organizations?
- Compared Service’s diversity program with those of other companies that have achieved success with diversity (e.g., Motorola, Allstate, and Harvard Pilgrim Health Care)

(8) What strategic direction should the Diversity Program take?
- Identified best practices used by other organizations in the private sector reported to have successful diversity programs
- Identified promising practices used in Service’s Diversity program
- Identified certain organizations’ diversity programs/objectives as models against which the Service can compare its strategies, etc.

William R. Chatlos, Senior Social Science Analyst
Douglas Sloane, Senior Social Science Analyst
Hazel Bailey, Evaluator (Communications Analyst)
Sherrill H. Johnson, Assistant Director
Billy W. Scott, Evaluator-in-Charge

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 37050
Washington, DC 20013

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touch-tone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed the promotion of women and minorities to high-level Executive and Administrative Schedule (EAS) management positions (EAS 17 and above) in the U.S. Postal Service (USPS), focusing on: (1) the overall extent to which women and minorities have been promoted to or are represented in EAS 17 and above positions in USPS; (2) GAO's observations on the methodology used by a private contractor, Aguirre International, to study workforce diversity at USPS; (3) the status of USPS' efforts to address the recommendations contained in the Aguirre report; and (4) GAO's analysis of whether USPS could better capture and use data to achieve its diversity objectives. GAO noted that: (1) at the end of fiscal year (FY) 1997, black and Asian men and women and Hispanic men were fully represented while Hispanic women, Native American men and women, and white women were underrepresented in USPS at the cluster level when compared with the civilian labor force; (2) representation of women and minorities at the cluster level in EAS 17 and above positions increased between fiscal years 1993 and 1997, with the exception of black men, whose representation decreased; (3) in FY 1997, women and all minority groups, except Asian women, at the cluster level were promoted to EAS 17 and above positions at higher rates than women and minority groups were represented in those EAS positions; (4) despite this progress, the overall representation of women and minorities at the cluster level in EAS 17 and above positions was almost 20 percent lower than their representation in EAS 11 through 16 positions at the end of FY 1997; (5) similar comparisons at the headquarters and area office workforce levels showed some variations regarding the representation of specific equal employment opportunity (EEO) groups; (6) GAO believes that the methodologies used by Aguirre International were generally reasonable, appropriate, and relevant given the parameters
established for the study and the complexities surrounding the sensitive issue of diversity in such a large organization; (7) however, GAO believes that Aguirre's finding of a glass ceiling beginning at EAS 17 positions could be misleading; (8) USPS reviewed the Aguirre report and developed 23 initiatives that it believed addressed the report's major issues and recommendations; (9) USPS believes its 23 initiatives will significantly strengthen its diversity program and address most of Aguirre's concerns; (10) USPS believes that it is generally on or ahead of its schedule for implementing these initiatives; (11) by the spring of 1999, USPS plans to create an ongoing monitoring process to ensure full implementation of its initiatives, which will result in revised scopes, completion dates, and implementation status for some of the initiatives; (12) USPS has recently developed broad goals and objectives for its diversity program, but it has not yet established specific targets and measures for determining its progress toward meeting its diversity goals and objectives; and (13) USPS officials said that specific targets and measures would be established no later than March 30, 1999.
The 46 states reported receiving a total of nearly $52.6 billion in payments in varying annual amounts from fiscal year 2000 through fiscal year 2005. Of the nearly $52.6 billion, about $36.5 billion were payments from the tobacco companies and about $16 billion were securitized proceeds that 15 states arranged to receive, as shown in table 1. The tobacco companies’ annual payments are adjusted based on several factors contained in the Master Settlement Agreement that include fluctuations in the volume of cigarette sales, inflation, and other variables, such as the participating companies’ share of the tobacco market. Declining tobacco consumption alone would result in lower Master Settlement Agreement payments than originally expected. Tobacco consumption has declined since the Master Settlement Agreement was signed in 1998—by about 6.5 percent in 1999 alone—mostly due to one-time increases in cigarette prices by the tobacco companies after the agreement took effect. Analysts project that, in the future, tobacco consumption will decline by an average of nearly 2 percent per year. As a result, tobacco consumption is estimated to decline by 33 percent between 1999 and 2020. However, the Master Settlement Agreement also includes an inflation adjustment factor that, some analysts have estimated, increases payments by more than any decreases caused by reduced consumption. The inflation adjustment equals the actual percentage increase in the Consumer Price Index for the preceding year or 3 percent, whichever is greater. The effect of these compounding increases is potentially significant, especially given that the payments are made in perpetuity. Assuming a 3-percent inflation adjustment and no decline in base payments, settlement amounts received by states would double every 24 years. Also, several tobacco companies’ interpretation of the provision that addresses participants’ market share led them to lower their payments in 2006.
Under this provision, an independent auditor determined that participating tobacco companies lost a portion of their market share to non-participating companies. An economic research firm determined that the Master Settlement Agreement was a significant factor in these market share losses. Based on these findings, several participating companies reduced their fiscal year 2006 payments by a total of about $800 million. Many states have filed suit to recover these funds. Each state’s share of the tobacco companies’ total annual payments is a fixed percentage that was negotiated during the settlement. These percentages are based on two variables related to each state’s smoking-related health care costs, which reflect each state’s population and smoking prevalence. In general, the most populous states receive a larger share of the tobacco companies’ total annual payments than the less populous states. For example, California and New York each receive about 13 percent, while Alaska and Wyoming each receive less than 1 percent. However, these percentages are not strictly proportional to population. In addition to the annual payments states receive, the Master Settlement Agreement requires that a Strategic Contribution Fund payment begin in 2008 and continue through 2017. The base amount of each year’s Strategic Contribution Fund payment is $861 million, which will be adjusted for volume and inflation and shared among the states. Strategic Contribution Fund payments are intended to reflect the level of the contribution each state made toward final resolution of its lawsuit against the tobacco companies. They will be allocated to the states based on a separate formula developed by a panel of former state attorneys general.
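The doubling claim for the 3-percent minimum inflation adjustment noted earlier follows from ordinary compound growth. A short calculation illustrates it (this is our sketch under the same simplifying assumption the text states, namely no decline in base payments):

```python
# Compounding effect of the Master Settlement Agreement's 3-percent
# minimum inflation adjustment, assuming no decline in base payments.
import math

RATE = 0.03  # minimum annual inflation adjustment (3 percent)

# Years required for payments to double at a constant 3-percent adjustment:
doubling_time = math.log(2) / math.log(1 + RATE)
print(round(doubling_time, 1))  # about 23.4 years, i.e., roughly every 24 years

# Equivalently, a base payment grows to just over twice its size in 24 years:
print(round((1 + RATE) ** 24, 2))  # 2.03
```

Because the payments are made in perpetuity, this growth continues to compound, which is why the report flags the effect as potentially significant.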
The Master Settlement Agreement imposed no restrictions on how states could spend their settlement payments and, as such, the states have allocated their payments to a wide variety of activities, with health-related activities the largest among them. As part of their decision making on how to spend their payments, some states established planning commissions and working groups to develop recommendations and strategic plans for allocating their states’ payments. In six states, voter-approved initiatives restricted use of the funds and, in 30 states, the legislatures enacted laws restricting their use. Overall, we identified 13 general categories to which states have allocated their Master Settlement Agreement payments, as shown in table 2. Appendix I provides more details on the categories to which states allocated their payments. States allocated the largest portion of their payments—about $16.8 billion, or 30 percent of the total payments—to health-related activities. To a closely related category—tobacco control—states allocated $1.9 billion, or 3.5 percent of their total payments. States allocated the second largest portion of their payments—about $12.8 billion or 22.9 percent—to cover budget shortfalls. Some states told us that they viewed the settlement payments as an opportunity to fund needs that they were not able to fund previously due to the high cost of health care. Figure 1 illustrates the relative magnitude of the categories receiving allocations. The seven largest categories of allocations, in descending order, are health, budget shortfalls, general purposes, infrastructure, education, debt service on securitized funds, and tobacco control. States’ allocations to these categories have varied considerably from year to year—with some categories showing wide fluctuations. For example, for budget shortfalls, the states allocated from 2 to 44 percent of the total payments. 
On the other hand, for health care, the states allocated from 20 to 38 percent of the total payments. Figure 2 shows these annual changes for these seven categories. Information about how states have allocated their Master Settlement Agreement payments follows. Health. From fiscal years 2000 through 2005, states allocated about $16.8 billion of their Master Settlement Agreement payments to a variety of health care programs, including Medicaid; health insurance; cancer prevention, screening, and treatment; heart and lung disease; and drug addiction. Over this period, the amounts states allocated to health care ranged from about $1.9 billion in fiscal year 2005 to nearly $4.8 billion in fiscal years 2000-2001 combined. In fiscal year 2005, the most recent year for which we collected actual data, 36 of the 46 states allocated some of their Master Settlement Agreement payments to health care. Of the 36 states, 5 states allocated two-thirds or more of their payments to health care; 19 states allocated one-third to two-thirds; and 12 states allocated less than one-third. Ten states did not allocate any of their payments to health care activities. In fiscal year 2005, Pennsylvania, Illinois, Michigan, and Maryland allocated larger amounts to health care than the other states. Pennsylvania allocated over $326 million of its payments to health care programs for adult health insurance, uncompensated care, medical assistance for workers with disabilities, and community medical assistance. Illinois allocated nearly $204 million of its payments to health care, citing Medicaid drugs as a key program that would receive funds. Michigan allocated over $185 million of its payments to areas such as elder pharmaceutical assistance and Medicaid support programs. Maryland allocated nearly $100 million of its payments to areas such as Medicaid; cancer prevention, screening, and treatment; heart and lung disease; and drug addiction. Budget Shortfalls. 
From fiscal years 2000 through 2005, states allocated about $12.8 billion of their Master Settlement Agreement payments to budget shortfalls. Over this period, the amounts the states allocated to budget shortfalls ranged from a high of about $5.1 billion, or 44 percent of the total payments, in fiscal year 2004 to a low of $261 million, or 4 percent, in fiscal year 2005. In fiscal year 2005, only 4 of the 46 states allocated some of their Master Settlement Agreement payments to budget shortfalls. Of these states, only Missouri allocated more than one-third of its total payments, about $72 million, to budget shortfalls. General Purposes. From fiscal years 2000 through 2005, states allocated about $4 billion of their Master Settlement Agreement payments to general purposes, including law enforcement, community development activities, technology development, emergency reserve funds, and legal expenses for enforcement of the Master Settlement Agreement. Over this period, the amounts states allocated to general purposes ranged from $623 million, or about 5 percent of the total payments they allocated, in fiscal years 2000-2001 combined to about $1.1 billion, or 8 percent, in fiscal year 2003. In fiscal year 2005, 27 of the 46 states allocated some of their Master Settlement Agreement payments to general purposes. Of these 27 states, 4 states allocated two-thirds or more of their total payments to general purposes; 2 states allocated one-third to two-thirds; and 21 states allocated less than one-third. Nineteen states did not allocate any of their payments to general purposes. Massachusetts, Tennessee, Connecticut, and Colorado allocated the largest amounts to general purposes in fiscal year 2005. Massachusetts allocated nearly $255 million of its payments to general purposes for its General Fund, Tennessee allocated nearly $157 million of its payments to its General Fund, and Connecticut allocated about $113 million of its payments to its General Fund.
Colorado allocated about $64.5 million of its payments to general purposes but did not specify which programs would receive funds. Infrastructure. From fiscal years 2000 through 2005, states allocated about $3.4 billion of their Master Settlement Agreement payments to infrastructure-related activities, including capital maintenance on state-owned facilities, regional facility construction, and water projects. Over this period, the amounts states allocated to infrastructure have ranged from $31 million, or about 1 percent of the total payments, in fiscal year 2005 to about $1.2 billion, or 10 percent, in fiscal year 2002. In fiscal year 2005, 5 of the 46 states allocated some of their Master Settlement Agreement payments to infrastructure. Of these 5 states, North Dakota was the only state that allocated more than one-third of its total payments to infrastructure. North Dakota, Hawaii, and Kentucky allocated the largest amounts to infrastructure in fiscal year 2005. North Dakota allocated about $10.5 million of its payments to infrastructure for work on water projects. Hawaii allocated approximately $10 million of its payments to infrastructure, citing debt service on University of Hawaii revenue bonds issued for the new Health and Wellness Center as a primary program that would receive funds. Kentucky allocated $6.1 million of its payments to service debt on such things as water resource development and a Rural Development Bond Fund. Education. From fiscal years 2000 through 2005, states allocated about $3 billion of their Master Settlement Agreement payments to education programs, including early childhood development; special education; scholarships; after-school services; and reading programs. Over this period, the amounts states allocated to education ranged from $280 million, or 2 percent of the total payments, in fiscal year 2004 to over $1.1 billion, or 9 percent, in fiscal year 2002.
In fiscal year 2005, 16 of the 46 states allocated some of the Master Settlement Agreement payments to education. Of the 16 states, only New Hampshire allocated more than two-thirds of its total payments to education; 4 states allocated between one-third and two-thirds to education; and 11 states allocated less than one-third. Thirty states did not allocate any of their payments to education-related activities. Michigan, New Hampshire, Nevada, and Colorado allocated the largest amounts to education in fiscal year 2005. Michigan allocated over $99 million of its payments to education for Merit Award scholarships and tuition incentive grants for higher education students; the Michigan Educational Assessment Program testing for K-12 students, nursing scholarships, the Michigan Education Savings Plan, and general higher education support. New Hampshire allocated $40 million of its payments to areas such as an Education Trust Fund, which distributes grants to school districts in the state. Nevada allocated about $33 million of its payments to education programs, citing a scholarship program for Nevada students attending Nevada’s higher education institutions as a key recipient. Colorado allocated over $16 million of its payments to education, including its Read to Achieve program. Debt Service on Securitized Funds. From fiscal years 2000 through 2005, states allocated about $3 billion of their Master Settlement Agreement payments to servicing debt on securitized funds. This category consists of amounts allocated to servicing the debt issued when a state securitizes all or a portion of its Master Settlement Agreement payments. Over this period, the amounts states allocated for this purpose have ranged from $271 million, or about 2 percent of the total payments in fiscal year 2002, to about $1.4 billion, or about 24 percent, in fiscal year 2005. 
In fiscal year 2005, four states—California, Rhode Island, South Carolina, and Wisconsin—allocated 100 percent of their Master Settlement Agreement payments to servicing debt on securitized funds, while New Jersey allocated just under 100 percent. In addition, Alaska, Louisiana, and South Dakota allocated more than half of their payments for this purpose. In fiscal year 2005, California and New York allocated the largest amounts to servicing debt on securitized funds.

Tobacco Control. From fiscal years 2000 through 2005, states allocated about $1.9 billion of their Master Settlement Agreement payments to tobacco control programs, including prevention, cessation, and countermarketing. Over this period, the amounts states allocated to tobacco control ranged from $790 million, or about 6 percent of the total payments, in fiscal years 2000-2001 combined to $223 million, or about 2 percent, in fiscal year 2004. In fiscal year 2005, 34 of the 46 states allocated some of their Master Settlement Agreement payments to tobacco control programs. Of the 34 states, Wyoming allocated more than one-third of its payments to tobacco control, while 33 states allocated less than one-third. Twelve states did not allocate any of their payments to tobacco control-related programs. Pennsylvania and Ohio allocated more than the other states to tobacco control—about $44 million and $37 million, respectively—in fiscal year 2005.

Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other Members of the Committee may have. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. For further information about this testimony, please contact Lisa Shames, Acting Director, Natural Resources and Environment, at (202) 512-3841 or ShamesL@gao.gov. Key contributors to this statement were Charles M.
Adams, Bart Fischer, Jennifer Harman, Natalie Herzog, Alison O’Neill, and Beverly Peterson.

To standardize the information reported by the 46 states, we developed the following categories and definitions for the program areas to which states allocated their payments.

Budget shortfalls: This category consists of amounts allocated to balance state budgets and close gaps or reduce deficits resulting from lower than anticipated revenues or increased mandatory or essential expenditures.

Debt service on securitized funds: This category consists of amounts allocated to service the debt on bonds issued when the state securitized all or a portion of its Master Settlement Agreement payments.

Economic development for tobacco regions: This category consists of amounts allocated for economic development projects in tobacco states, such as infrastructure projects, education and job training programs, and research on alternative uses of tobacco and alternative crops. This category includes projects specifically designed to benefit tobacco growers as well as economic development that may serve a larger population within a tobacco state.

Education: This category consists of amounts allocated for education programs such as day care, preschool, Head Start, early childhood education, elementary and secondary education, after-school programs, and higher education. This category does not include money for capital projects such as construction of school buildings.

General purposes: This category consists of amounts allocated for attorneys’ fees and other items, such as law enforcement or community development, that could not be placed into a more precise category. This category also includes amounts allocated to a state’s general fund that were not earmarked for any particular purpose. Amounts used to balance state budgets and close gaps or reduce deficits should be categorized as budget shortfalls rather than general purposes.
Health: This category consists of amounts allocated for direct health care services; health insurance, including Medicaid and the State Children’s Health Insurance Program (SCHIP); hospitals; medical technology; public health services; and health research. This category does not include money for capital projects such as construction of health facilities.

Infrastructure: This category consists of amounts allocated for capital projects such as construction and renovation of health care, education, and social services facilities; water and transportation projects; and municipal and state government buildings. This category includes retirement of debt owed on capital projects.

Payments to tobacco growers: This category consists of amounts allocated for direct payments to tobacco growers, including subsidies and crop conversion programs.

Reserves/rainy day funds: This category consists of amounts allocated to state budget reserves, such as rainy day and budget stabilization funds, not earmarked for specific programs. Amounts allocated to reserves that are earmarked for specific areas are categorized under those areas—e.g., reserve amounts earmarked for economic development purposes should be categorized in the economic development category.

Social services: This category consists of amounts allocated for social services such as programs for the aging, assisted living, Meals on Wheels, drug courts, child welfare, and foster care. This category also includes amounts allocated to special funds established for children’s programs.

Tax reductions: This category consists of amounts allocated for tax reductions such as property tax rebates and earned income tax credits.

Tobacco control: This category consists of amounts allocated for tobacco control programs such as prevention, including youth education, enforcement, and cessation services.
Unallocated: This category consists of amounts not allocated for any specific purpose, such as amounts allocated to dedicated funds that have no specified purpose; amounts states chose not to allocate in the year Master Settlement Agreement payments were received that will be available for allocation in a subsequent fiscal year; interest earned from dedicated funds not yet allocated; and amounts that have not been allocated because the state had not made a decision on the use of the Master Settlement Agreement payments.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In the 1990s, states sued major tobacco companies to obtain reimbursement for health impairments caused by the public's use of tobacco. In 1998, four of the nation's largest tobacco companies signed a Master Settlement Agreement, agreeing to make annual payments to 46 states in perpetuity as reimbursement for past tobacco-related health care costs. Some states have arranged to receive advance proceeds based on the amounts that tobacco companies owe by issuing bonds backed by future payments. This testimony discusses (1) the amounts of tobacco settlement payments that the states received from fiscal years 2000 through 2005, the most recent year for which GAO has actual data, and (2) the states' allocations of these payments. We also include states' projected fiscal year 2006 allocations. The Farm Security and Rural Investment Act of 2002 required GAO to report annually, through fiscal year 2006, on how states used the payments made by tobacco companies. GAO based this testimony on five annual surveys of these 46 states' Master Settlement Agreement payments and how they allocated these payments. From fiscal year 2000 through 2005, the 46 states party to the Master Settlement Agreement received $52.6 billion in tobacco settlement payments. Of the $52.6 billion total, about $36.5 billion were payments from the tobacco companies and about $16 billion were advance payments which several states had arranged to receive by issuing bonds backed by their future payments from the tobacco companies. The Master Settlement Agreement imposed no restrictions on how states could spend their payments, and as such, the states have chosen to allocate them to a wide variety of activities. Some states told us that they viewed the settlement payments as an opportunity to fund needs that they were not able to fund previously due to the high costs of health care. 
States allocated the largest portion of their payments to health care--$16.8 billion, or 30 percent--which includes Medicaid, health insurance, hospitals, medical technology, and research. States allocated the second largest portion to cover budget shortfalls--about $12.8 billion, or about 23 percent. This category includes allocations to balance state budgets or reduce deficits that resulted from lower than anticipated revenues, increased mandatory spending, or essential expenditures. The next largest categories include allocations for infrastructure projects, education, debt service on securitized funds, and tobacco control.
SEC’s ability to directly oversee hedge fund advisers is limited to those that are required to register or voluntarily register with SEC as investment advisers. Registered hedge fund advisers are subject to the same disclosure requirements as all other registered investment advisers. These advisers must provide current information to both SEC and investors about their business practices and disciplinary history. Advisers also must maintain required books and records, and are subject to periodic examinations by SEC staff. Meanwhile, hedge funds, like other investors in publicly traded securities, are subject to various regulatory reporting requirements. For example, upon acquiring a 5 percent beneficial ownership position of a particular publicly traded security, a hedge fund may be required to file a report disclosing its holdings with SEC. In December 2004, SEC adopted an amendment to Rule 203(b)(3)-1, which had the effect of requiring certain hedge fund advisers that previously enjoyed the private adviser exemption from registration to register with SEC as investment advisers. In June 2006, a federal court vacated the 2004 amendment to Rule 203(b)(3)-1. According to SEC, when the rule was in effect (from February 1, 2006, through August 21, 2006), SEC was better able to identify hedge fund advisers. In August 2006, SEC estimated that 2,534 advisers that sponsored at least one hedge fund were registered with the agency. Since August 2006, SEC’s ability to identify an adviser that manages a hedge fund has been further limited due to changes in filing requirements and to advisers that chose to retain registered status. As of April 2007, 488, or about 19 percent of the 2,534 advisers, had withdrawn their registrations. At the same time, 76 new registrants were added and some others changed their filing status, leaving an estimated 1,991 hedge fund advisers registered. 
While the list of registered hedge fund advisers is not all-inclusive, many of the largest hedge fund advisers—including 49 of the largest 78 U.S. hedge fund advisers—are registered. These 49 hedge fund advisers account for approximately $492 billion of assets under management, or about 33 percent of the estimated $1.5 trillion in hedge fund assets under management in the United States. In an April 2009 speech, Chairman Schapiro stated that there are approximately 150 active hedge fund investigations at SEC, some of which include possible Ponzi schemes, misappropriations, and performance smoothing. In a separate speech in April, Chairman Schapiro renewed SEC’s call for greater oversight of hedge funds, including the registration of hedge fund advisers and potentially the hedge funds themselves.

SEC uses a risk-based examination approach to select investment advisers for inspections. Under this approach, higher-risk investment advisers are examined every 3 years. One of the variables in determining risk level is the amount of assets under management. SEC officials told us that most hedge funds, even the larger ones, do not meet the dollar threshold to be automatically considered higher-risk. In fiscal year 2006, SEC examined 321 hedge fund advisers and identified issues (such as information disclosure, reporting and filing, personal trading, and asset valuation) that are not exclusive to hedge funds. Also, from 2004 to 2008, SEC oversaw the large internationally active securities firms on a consolidated basis. These securities firms have significant interaction with hedge funds through affiliates previously not overseen by SEC. One aspect of this program was to examine how the securities firms manage various risk exposures, including those from hedge fund-related activities such as providing prime brokerage services and acting as creditors and counterparties. SEC found areas where capital computation methodology and risk management practices could be improved.
Similarly, CFTC regulates those hedge fund advisers registered as CPOs or CTAs. CFTC has authorized the National Futures Association (NFA), a self-regulatory organization for the U.S. futures industry, to conduct day-to-day monitoring of registered CPOs and CTAs. In fiscal year 2006, NFA examinations of CPOs included six of the largest U.S. hedge fund advisers. In addition, SEC, CFTC, and bank regulators can use their existing authorities—to establish capital standards and reporting requirements, conduct risk-based examinations, and take enforcement actions—to oversee activities, including those involving hedge funds, of broker-dealers, of futures commission merchants, and of banks, respectively. While none of the regulators we interviewed specifically monitored hedge fund activities on an ongoing basis, regulators generally had increased reviews—by such means as targeted examinations—of systems and policies to mitigate counterparty credit risk at the large regulated entities. For instance, from 2004 to 2007, the Federal Reserve Bank of New York (FRBNY) conducted various reviews—including horizontal reviews—of credit risk management practices that involved hedge fund-related activities at several large banks. On the basis of the results, FRBNY noted that the banks generally had strengthened practices for managing risk exposures to hedge funds, but the banks could further enhance firmwide risk management systems and practices, including expanded stress testing.

The federal government does not specifically limit or monitor private sector pension investment in hedge funds and, while some states do so for public plans, their approaches vary. Although the Employee Retirement Income Security Act (ERISA) governs the investment practices of private sector pension plans, neither federal law nor regulation specifically limits pension investment in hedge funds or private equity.
Instead, ERISA requires that plan fiduciaries apply a “prudent man” standard, including diversifying assets and minimizing the risk of large losses. The prudent man standard does not explicitly prohibit investment in any specific category of investment. The standard focuses on the process for making investment decisions, requiring documentation of the investment decisions, due diligence, and ongoing monitoring of any managers hired to invest plan assets. Plan fiduciaries are expected to meet general standards of prudent investing and no specific restrictions on investments in hedge funds or private equity have been established. The Department of Labor is tasked with helping to ensure plan sponsors meet their fiduciary duties; however, it does not currently provide any guidance specific to pension plan investments in hedge funds or private equity. Conversely, some states specifically regulate and monitor public sector pension investment in hedge funds, but these approaches vary from state to state. While states generally have adopted a “prudent man” standard similar to that in ERISA, some states also explicitly restrict or prohibit pension plan investment in hedge funds or private equity. For instance, in Massachusetts, the agency overseeing public plans will not permit plans with less than $250 million in total assets to invest directly in hedge funds. Some states have detailed lists of authorized investments that exclude hedge funds and/or private equity. Other states may limit investment in certain investment vehicles or trading strategies employed by hedge fund or private equity fund managers. While some guidance exists for hedge fund investors, specific guidance aimed at pension plans could serve as an additional tool for plan fiduciaries when assessing whether and to what degree hedge funds would be a prudent investment. 
According to several 2006 and 2007 surveys of private and public sector plans, investments in hedge funds are typically a small portion of total plan assets—about 4 to 5 percent on average—but a considerable and growing number of plans invest in them. Updates to the surveys indicated that institutional investors plan to continue to invest in hedge funds. One 2008 survey reported that nearly half of over 200 plans surveyed had hedge funds and hedge-fund-type strategies. This was a large increase from the previous survey, in which 80 percent of the funds had no hedge fund exposure. Pension plans’ investments in hedge funds were, in part, a response to stock market declines and disenchantment with traditional investment management in recent years. Officials with most of the plans we contacted indicated that they invested in hedge funds, at least in part, to reduce the volatility of returns. Several pension plan officials told us that they sought to obtain returns greater than the returns of the overall stock market through at least some of their hedge fund investments. Officials of pension plans that we contacted also stated that hedge funds are used to help diversify their overall portfolio and provide a vehicle that will, to some degree, be uncorrelated with the other investments in their portfolio. This reduced correlation was viewed as having a number of benefits, including reduction in overall portfolio volatility and risk. While any plan investment may fail to deliver expected returns over time, hedge fund investments pose investment challenges beyond those posed by traditional investments in stocks and bonds.
These include the reliance on the skill of hedge fund managers, who often have broad latitude to engage in complex investment techniques that can involve various financial instruments in various financial markets; use of leverage, which amplifies both potential gains and losses; and higher fees, which require a plan to earn a higher gross return to achieve a higher net return. In addition to investment challenges, hedge funds pose additional challenges, including: (1) limited information on a hedge fund’s underlying assets and valuation (limited transparency); (2) contract provisions which limit an investor’s ability to redeem an investment in a hedge fund for a defined period of time (limited liquidity); and (3) the possibility that a hedge fund’s active or risky trading activity will result in losses due to operational failure such as trading errors or outright fraud (operational risk). Pension plans that invest in hedge funds take various steps to mitigate the risks and challenges posed by hedge fund investing, including developing a specific investment purpose and strategy, negotiating important investment terms, conducting due diligence, and investing through funds of funds. Such steps require greater effort, expertise and expense than required for more traditional investments. As a result, according to plan officials, state and federal regulators, and others, some pension plans, especially smaller plans, may not be equipped to address the various demands of hedge fund investing. Investors, creditors, and counterparties have the power to impose market discipline—rewarding well-managed hedge funds and reducing their exposure to risky, poorly managed hedge funds—during due diligence exercises and through ongoing monitoring. Creditors and counterparties also can impose market discipline through ongoing management of credit terms (such as collateral requirements). 
According to market participants doing business with larger hedge funds, hedge fund advisers have improved disclosure and become more transparent about their operations, including risk management practices, partly as a result of recent increases in investments by institutional investors with fiduciary responsibilities, such as pension plans, and guidance provided by regulators and industry groups. Despite the requirement that fund investors be sophisticated, some market participants suggested that not all prospective investors have the capacity or retain the expertise to analyze the information they receive from hedge funds, and some may choose to invest in a hedge fund largely as a result of its prior returns and may fail to fully evaluate its risks. Since the near collapse of LTCM in 1998, investors, creditors, and counterparties have increased their efforts to impose market discipline on hedge funds. Regulators and market participants also said creditors and counterparties have been conducting more extensive due diligence and monitoring risk exposures to their hedge fund clients since LTCM. The creditors and counterparties we interviewed said that they have exercised market discipline by tightening their credit standards for hedge funds and demanding greater disclosure. However, regulators and market participants also identified issues that limit the effectiveness of market discipline or illustrate failures to properly exercise it. For example, most large hedge funds use multiple prime brokers as service providers. Thus, no one broker may have all the data necessary to assess the total leverage used by a hedge fund client. In addition, the actions of creditors and counterparties may not fully prevent hedge funds from taking excessive risk if these creditors’ and counterparties’ risk controls are inadequate. For example, the risk controls may not keep pace with the increasing complexity of financial instruments and investment strategies that hedge funds employ. 
Similarly, regulators have been concerned that in competing for hedge fund clients, creditors sometimes relaxed credit standards. These factors can contribute to conditions that create the potential for systemic risk if breakdowns in market discipline and the risk controls of creditors and counterparties are sufficiently severe that losses by hedge funds in turn cause significant losses at key intermediaries or instability in financial markets. Although financial regulators and market participants recognize that the enhanced efforts by investors, creditors, and counterparties since LTCM impose greater market discipline on hedge funds, some remain concerned that hedge funds’ activities are a potential source of systemic risk. Counterparty credit risk arises when hedge funds enter into transactions, including derivatives contracts, with regulated financial institutions. Some regulators regard counterparty credit risk as the primary channel for potentially creating systemic risk. At the time of our work in 2007, financial regulators said that the market discipline imposed by investors, creditors, and counterparties is the most effective mechanism for limiting the systemic risk from the activities of hedge funds (and other private pools of capital). The most important providers of market discipline are the large, global commercial and investment banks that are hedge funds’ principal creditors and counterparties. As part of the credit extension process, creditors and counterparties typically require hedge funds to post collateral that can be sold in the event of default. OCC officials told us that losses at their supervised banks due to the extension of credit to hedge funds were rare. Similarly, several prime brokers told us that losses from hedge fund clients were extremely rare due to the asset-based lending they provided such funds. 
While regulators and others recognize that counterparty credit risk management has improved since LTCM, the ability of financial institutions to maintain the adequacy of these management processes in light of the dramatic growth in hedge fund activities remained a particular focus of concern. In addition to counterparty credit risk, other factors such as trading behavior can create conditions that contribute to systemic risk. Given certain market conditions, the simultaneous liquidation of similar positions by hedge funds that hold large positions on the same side of a trade could lead to losses or a liquidity crisis that might aggravate financial distress. Recognizing that market discipline cannot eliminate the potential systemic risk posed by hedge funds and others, regulators have been taking steps to better understand the potential for systemic risk and respond more effectively to financial disruptions that can spread across markets. For instance, they have examined particular hedge fund activities across regulated entities, mainly through international multilateral efforts. The President’s Working Group on Financial Markets (PWG) has issued guidelines that provide a framework for addressing risks associated with hedge funds and implemented protocols to respond to market turmoil. Finally, in September 2007, the PWG formed two private sector committees comprising hedge fund advisers and investors to address investor protection and systemic risk concerns, including counterparty credit risk management issues. On January 15, 2009, these two committees, the Asset Managers’ Committee and the Investors’ Committee, released their final best practices reports to hedge fund managers and investors.
The final best practices for the asset managers establish a framework covering five aspects of the hedge fund business—disclosure, valuation of assets, risk management, business operations, and compliance and conflicts of interest—to help hedge fund managers take a comprehensive approach to adopting best practices and serve as the foundation upon which those best practices are established. The final best practices for investors include a Fiduciary’s Guide, which provides recommendations to individuals charged with evaluating the appropriateness of hedge funds as a component of an investment portfolio, and an Investor’s Guide, which provides recommendations to those charged with executing and administering a hedge fund program if one is added to the investment portfolio.

In closing, I would like to include a final thought. It is likely that hedge funds will continue to be a source of capital and liquidity in financial markets, by providing financing to new companies, industries, and markets, as well as a source of investments for institutional investors. Given our recent experience with the financial crisis, it is important that regulators have the information to monitor the activities of market participants that play a prominent role in the financial system, such as hedge funds, to protect investors and manage systemic risk.

Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Subcommittee may have at this time. For further information on this testimony, please contact Orice M. Williams at (202) 512-8678 or at williamso@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this statement.
In 2008, GAO issued two reports on hedge funds--pooled investment vehicles that are privately managed and often engage in active trading of various types of securities and commodity futures and options contracts--highlighting the need for continued regulatory attention and for guidance to better inform pension plans on the risks and challenges of hedge fund investments. Hedge funds generally qualified for exemption from certain securities laws and regulations, including the requirement to register as an investment company. Hedge funds have been deeply affected by the recent financial turmoil, but an industry survey of institutional investors suggests that these investors are still committed to investing in hedge funds in the long term. For the first time, hedge funds are allowed to borrow from the Federal Reserve under the Term Asset-Backed Securities Loan Facility. As such, the regulatory oversight issues and investment challenges raised by the 2008 reports remain relevant. This testimony discusses: (1) federal regulators' oversight of hedge fund-related activities; (2) potential benefits, risks, and challenges pension plans face in investing in hedge funds; (3) the measures investors, creditors, and counterparties have taken to impose market discipline on hedge funds; and (4) the potential for systemic risk from hedge fund-related activities. To do this work, we relied upon our issued reports and updated data where possible.

Under the existing regulatory structure, the Securities and Exchange Commission and Commodity Futures Trading Commission can provide direct oversight of registered hedge fund advisers, and along with federal bank regulators, they monitor hedge fund-related activities conducted at their regulated entities. Although some examinations found that banks generally have strengthened practices for managing risk exposures to hedge funds, regulators recommended that they enhance firmwide risk management systems and practices, including expanded stress testing.
The federal government does not specifically limit or monitor private sector plan investment in hedge funds. Under federal law, fiduciaries must comply with a standard of prudence, but no explicit restrictions on hedge funds exist. Pension plans invest in hedge funds to obtain a number of potential benefits, such as returns greater than the stock market and stable returns on investment. However, hedge funds also pose challenges and risks beyond those posed by traditional investments. For example, some investors may have little information on funds' underlying assets and their values, which limits the opportunity for oversight. Plan representatives said they take steps to mitigate these and other challenges, but doing so requires resources beyond the means of some plans. According to market participants, hedge fund advisers have improved disclosures and transparency about their operations as a result of industry guidance and pressure from investors, creditors, and counterparties. Regulators and market participants said that creditors and counterparties have generally conducted more due diligence and tightened their credit standards for hedge funds. However, several factors may limit the effectiveness of market discipline or illustrate failures to properly exercise it. Further, if the risk controls of creditors and counterparties are inadequate, their actions may not prevent hedge funds from taking excessive risk. Such breakdowns in market discipline and risk controls, if sufficiently severe, can contribute to systemic risk when losses by hedge funds in turn cause significant losses at key intermediaries or in financial markets. Financial regulators and industry observers remain concerned about the adequacy of counterparty credit risk management at major financial institutions because it is a key factor in controlling the potential for hedge funds to become a source of systemic risk.
Although hedge funds generally add liquidity to many markets, including distressed asset markets, in some circumstances hedge funds' activities can strain liquidity and contribute to financial distress. In response to concerns about the adequacy of counterparty credit risk management, a group of regulators, together with the President's Working Group on Financial Markets (PWG), has collaborated to examine particular hedge fund-related activities across the entities they regulate. The PWG also established two private sector committees that recently released guidelines to address systemic risk and investor protection.
As the largest purchaser of goods and services in the federal government, DOD awarded contracts valued at nearly $165 billion in fiscal year 2002. Within the federal government, DOD represented about two-thirds of the federal contract spending reported in fiscal year 2002, as shown in figure 1. Spending at the next three largest federal agencies, the Department of Energy (DOE), the General Services Administration (GSA), and the National Aeronautics and Space Administration (NASA), represented only about half of the remaining 34 percent of federal contract awards during the same period. In 1998, DOD established the CCR database as the primary repository for contractor information shared with other agencies. With minor exceptions, contractors are required to register in the CCR database prior to award of a DOD contract. In addition to a one-time registration process, contractors are required to keep all registered information current, and must confirm the registered information is accurate and complete annually. The CCR database contains a wide variety of contractor information including contractor name, address, points of contact, electronic payment information, and tax identification number (TIN). As of June 2003, the CCR database contained almost 224,000 active contractor registrations. DOD; NASA; the Departments of the Treasury, Transportation, and the Interior; as well as the Office of Personnel Management currently use CCR to register contractors. According to CCR officials, while some contractors engage in business with more than one agency (e.g., DOD and NASA), prospective and current DOD contractors represented the majority of CCR registrations. On October 1, 2003, a final rule change to the Federal Acquisition Regulation (FAR) was announced that generally requires all federal contractors to register in the CCR database. 
Unlike most federal agencies that rely on the Department of the Treasury’s Financial Management Service (FMS) for issuing payments, DOD has its own disbursing authority. The Defense Finance and Accounting Service (DFAS) has overall payment responsibility for goods and services purchased by DOD. As part of a reorganization in April 2001, DFAS separated its commercial payment services into two areas—contract pay and vendor pay. Contract pay handles invoices for formal, long-term contracts that are typically administered by the Defense Contract Management Agency (DCMA). These contracts tend to cover complex, multiyear purchases with high-dollar values, such as major weapon systems. The single DOD automated system used in contract pay disbursed over $86 billion to contractors in fiscal year 2002. Vendor pay, somewhat of a misnomer, is handled by 15 DOD payment and disbursing systems operating in 22 DFAS offices, which cumulatively disbursed another $97 billion to contractors during fiscal year 2002. Overhauling DOD’s financial management represents a major challenge that goes far beyond financial accounting to the very fiber of the department’s range of business operations and management culture. Of the 26 areas on our governmentwide “high-risk” list, 6 are DOD program areas, and the department shares responsibility for 3 other high-risk areas that are governmentwide in scope. Financial management, one of the 6 DOD program areas, has weaknesses, including the lack of effective and efficient asset management and accountability, unreliable estimates of environmental and disposal liabilities, lack of accurate budget and cost information, nonintegrated and proliferating financial management systems, and fundamental flaws in the overall control environment. As we have documented in numerous reports, DOD’s financial management problems leave it highly vulnerable to fraud, waste, and abuse.
On our high-risk list, IRS also shares responsibility for three areas that are governmentwide in scope, as well as two IRS program areas pertinent to this report: IRS financial management and collection of unpaid taxes. In both of these areas, weaknesses continue to expose the federal government to significant losses of tax revenue, and compliant taxpayers bear the increased burden of financing the government’s activities. IRS attempts to identify businesses and individuals that do not pay the taxes they owe through its various enforcement programs. However, inadequate financial and operational information has rendered IRS unable to develop reliable cost-based performance information for its tax collection and enforcement programs, and to judge whether the agency is appropriately allocating available resources among competing management priorities. As of September 2002, IRS had an inventory of known unpaid taxes, including interest and penalties, totaling $249 billion, of which $112 billion has some collection potential and thus is at risk. Our recent testimonies and reports have highlighted large and pervasive declines in IRS compliance and collection programs. These programs generally experienced larger workloads, smaller staffing, and fewer cases closed per employee from 1996 through 2001. By the end of fiscal year 2001, IRS was deferring collection action for about one of three tax delinquencies assigned to the collection programs. In a September 2002 report to the IRS Oversight Board, former IRS Commissioner Rossotti said that IRS has been facing a growing compliance workload at the same time that resources were declining. He said the result is a "huge gap" between the number of taxpayers that are not filing, not reporting, or not paying what they owe and IRS’s capacity to deal with them.
In addition, we reported in 1999 that nearly 2 million businesses owed about $49 billion in payroll taxes, which was about 22 percent of the total outstanding balance of IRS unpaid tax assessments. As of September 30, 2002, the amount of unpaid payroll taxes remained about the same (nearly $49 billion). In our 1999 report, we noted that according to IRS records, IRS had assessed $15 billion in penalties against approximately 185,000 individuals found to be willful and responsible for the nonpayment of payroll taxes withheld from employees. We reported that much of this amount was not being collected, and that businesses and individuals owing payroll taxes received significant federal benefits and other federal payments. The Taxpayer Relief Act of 1997 enhanced IRS’s ability to collect unpaid federal taxes by authorizing IRS to continuously levy up to 15 percent of certain federal payments made to businesses and individuals. The continuous levy program, now referred to as the Federal Payment Levy Program (FPLP), was implemented in July 2000. This program provides an automated process for serving tax levies and collecting unpaid taxes through Treasury’s FMS and its Treasury Offset Program (TOP) process. Treasury established TOP as part of implementing the Debt Collection Improvement Act (DCIA). Congress passed DCIA to maximize the collection of delinquent nontax debts owed to federal agencies. TOP centralizes the process by which certain federal payments are withheld or reduced to collect delinquent debts, and as part of that program, FMS has a centralized database of debts that DCIA requires federal agencies to refer to FMS. Under the regulations implementing DCIA, disbursing agencies, including DOD and others that independently disburse rather than having it done on their behalf by FMS, are required to compare their payment records with the TOP database. If a match occurs, the disbursing agency must offset the payment, thereby reducing or eliminating the nontax debt.
FMS assists IRS in implementing FPLP through a feature of the TOP process, thus enabling IRS to electronically serve a tax levy. For payments disbursed by FMS on behalf of most federal agencies, the amount to be levied and credited to IRS is deducted before FMS disburses the payment. For payments disbursed directly by other federal agencies, such as DOD, FMS identifies the amount to be levied from the disbursing agency’s payment information and notifies the disbursing agency to deduct the levy amount before payment is made. As a practical matter, FMS cannot honor a tax levy through TOP unless the disbursing agency has fulfilled its DCIA responsibilities to compare payment records with the TOP database. When a disbursing agency provides FMS with payment information for comparison with the TOP database, FMS has an opportunity to notify the disbursing agency of an IRS levy. To the extent disbursing agencies are not providing payment information to TOP, the implementation of FPLP is hindered. DCIA also requires agencies to refer certain debt to Treasury for centralized collection. FMS reported that the debt referrals to TOP totaled more than $186 billion as of September 2002. Of this amount, $81 billion were federal tax debt, $71 billion were child support debt, $3 billion were state tax debt, and $31 billion were federal nontax debt (e.g., student loans). Under the levy process, IRS supplies FMS with an electronic file containing unpaid tax information for inclusion in the TOP database. FMS compares the TIN and name on federal payment records with the TIN and name on unpaid tax records provided by IRS. When FMS identifies a business or individual with unpaid taxes that is scheduled to receive a federal payment, it informs IRS, which issues a notice of intent to levy to the delinquent taxpayer (unless the notice was previously sent). Once a notice of impending levy is received, the delinquent taxpayer has several options for action and a minimum of 30 days to respond. 
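The matching step described above can be sketched in a few lines of Python. This is an illustrative sketch only, not FMS's actual process; the record layouts and field names are hypothetical.

```python
# Illustrative sketch of the TOP matching step: compare the TIN and name
# on payment records with the TIN and name on unpaid tax records.
# Record layouts and field names here are hypothetical.

def match_payments_to_debts(payments, debts):
    """Return the payment records whose (TIN, name) pair appears in the debt file."""
    debt_keys = {(d["tin"], d["name"].upper()) for d in debts}
    return [p for p in payments if (p["tin"], p["name"].upper()) in debt_keys]

payments = [
    {"tin": "123456789", "name": "Acme Widgets", "amount": 10_000.00},
    {"tin": "987654321", "name": "Beta Services", "amount": 5_000.00},
]
debts = [{"tin": "123456789", "name": "ACME WIDGETS", "unpaid_tax": 40_000.00}]

matches = match_payments_to_debts(payments, debts)
# Only the Acme Widgets payment matches the debt file; in the actual
# process, FMS would then notify IRS, which issues the levy notice.
```

As the text notes, this comparison can only happen if the disbursing agency supplies its payment records in the first place, which is why DOD's nonreporting of vendor payments hinders the program.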
The options are as follows: The taxpayer may disagree with IRS’s assessment and collection of tax liability, and appeal the action by requesting a hearing with the IRS Office of Appeals. Generally, IRS must suspend any levy actions while the hearing and related appeals are pending. The taxpayer may elect to pay the debt in full. The taxpayer may negotiate with IRS to establish an alternative payment arrangement, such as an installment agreement or an offer in compromise. IRS is precluded from continuing with a levy action while it considers a taxpayer’s proposed installment agreement or offer in compromise. The taxpayer may apply to IRS for a hardship determination, for which a business or individual demonstrates to IRS that making any payment would result in a significant financial hardship. In such cases, IRS may agree to delay collection action until the taxpayer’s financial condition improves. If the delinquent taxpayer does not respond to the levy notice, IRS will instruct FMS to proceed with the continuous levy and reduce all scheduled payments by up to 15 percent, or the exact amount of tax owed if it is less than 15 percent of the payment, until the tax debt is satisfied. Since the inception of the levy program in July 2000, IRS has used it to collect $76 million in tax debt, including over $60 million in tax debt during fiscal year 2002, by directly levying federal payments. In earlier reviews, we estimated that IRS could use the levy program to potentially recover hundreds of millions of dollars in tax debt. The federal government pays billions of dollars to DOD contractors that abuse the federal tax system. Further, as of September 2002, businesses and individuals registered in DOD’s CCR database owed nearly $3 billion in unpaid federal taxes. Data reliability issues with respect to DOD and IRS records prevented us from identifying an exact amount. Consequently, the total amount of unpaid federal taxes owed by DOD contractors is not known. 
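The levy arithmetic described above, up to 15 percent of each payment or the exact remaining tax if that is smaller, can be expressed as a simple function. This is an illustrative sketch, not IRS code.

```python
def continuous_levy_amount(payment, tax_owed, rate=0.15):
    """Amount withheld under a continuous levy: up to 15 percent of the
    payment, or the exact tax owed if that is less than 15 percent."""
    return round(min(payment * rate, float(tax_owed)), 2)

continuous_levy_amount(10_000, 40_000)  # 1500.0: the 15 percent cap applies
continuous_levy_amount(10_000, 900)     # 900.0: the remaining debt is smaller
```

The levy repeats against each scheduled payment until the tax debt is satisfied, which is what makes it "continuous."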
DOD and IRS records showed that the nearly $3 billion in unpaid federal taxes is owed by about 27,100 contractors registered in CCR. This represents almost 14 percent of the contractors registered as of February 2003. Of this number, over 25,600 were businesses that primarily had unpaid payroll taxes. Many also had unpaid federal unemployment taxes. The other approximately 1,500 contractors were primarily individuals who did not pay income taxes on their business profits or individual income. The amount of unpaid taxes for DOD contractors registered in CCR ranged from a small amount owed by an individual for a single tax period to millions of dollars owed by a business over more than 60 tax periods. The type of unpaid taxes owed by these contractors varied and consisted of payroll, corporate income, excise, unemployment, individual income, and other types of taxes. In the case of unpaid payroll taxes, an employer withheld federal taxes from an employee’s wages, but did not send the withheld payroll taxes or the employer’s required matching amount to IRS. As shown in figure 2, about 42 percent of the total tax amount owed by DOD contractors was for unpaid payroll taxes. Employers are subject to civil and criminal penalties if they do not remit payroll taxes to the federal government. When an employer withholds taxes from an employee’s wages, the employer is deemed to have a responsibility to hold these amounts “in trust” for the federal government until the employer makes a federal tax deposit in that amount. To the extent these withheld amounts are not forwarded to the federal government, the employer is liable for these amounts, as well as the employer’s matching Federal Insurance Contribution Act (FICA) contributions. Individuals within the business (e.g., corporate officers) may be held personally liable for the withheld amounts not forwarded and assessed a civil monetary penalty known as a trust fund recovery penalty (TFRP). 
Failure to remit payroll taxes can also be a criminal felony offense punishable by imprisonment of more than a year, while the failure to properly segregate payroll taxes can be a criminal misdemeanor offense punishable by imprisonment of up to a year. The law imposes no penalties upon an employee for the employer’s failure to remit payroll taxes since the employer is responsible for submitting the amounts withheld. The Social Security and Medicare trust funds are subsidized or made whole for unpaid payroll taxes by the general fund, as we discussed in a previous report. Over time, the amount of this subsidy is significant. As of September 1998, the last date on which information was readily available, the estimated cumulative amount of unpaid taxes and associated interest for which the Social Security and Medicare trust funds were subsidized by the general fund was approximately $38 billion. Based on our case study analysis, we found that contractors with unpaid federal taxes provide a wide range of goods and services to DOD, including building maintenance, catering, construction, consulting, custodial, dentistry, music, and funeral services. Several of these contractors provided parts or services related to aircraft components for several DOD and civilian programs. A substantial amount of the unpaid federal taxes shown in IRS records as owed by DOD contractors had been outstanding for several years. As reflected in figure 3, 78 percent of the nearly $3 billion in unpaid taxes was over a year old as of September 30, 2002, and 52 percent of the unpaid taxes was for tax periods prior to September 30, 1999. Our previous work has shown that as unpaid taxes age, the likelihood of collecting all or a portion of the amount owed decreases. This is due, in part, to the continued accrual of interest and penalties on the outstanding tax debt, which, over time, can dwarf the original tax obligation. 
Although the nearly $3 billion in unpaid federal taxes owed by DOD contractors as of September 30, 2002, is a significant amount, it may not reflect the true amount of unpaid taxes owed by these businesses and individuals. Data integrity issues with DOD’s contractor database and the nature of IRS’s taxpayer account database prevented us from identifying the true extent of DOD contractor unpaid taxes. For example, we found that some contractors providing goods and services to DOD could not be identified. We analyzed the TINs reported by contractors in the CCR database. A TIN field is completed during a CCR registration, and contractors are responsible for the TIN’s accuracy. During our review, we found that the CCR database included nearly 4,900 employer identification numbers (EIN) that did not match the IRS Master Files. Our examination also identified some invalid TINs that were either all the same digit (e.g., 999999999) or an unusual series of digits (e.g., 123456789). Invalid TINs in the CCR database prevented us from determining if the contractor had unpaid taxes. We recently recommended to IRS and OMB that options to routinely validate all TINs in the CCR be considered, and use of contractor and TIN information from CCR be required for tax reporting by all federal agencies. As previously mentioned, some contractors that received DOD payments were not registered in CCR. Our analysis of fiscal year 2002 disbursements totaling almost $20 billion through one DFAS vendor payment system identified payments totaling about $1 billion with a TIN that did not match a contractor TIN in the CCR database. We also identified contractor payments totaling over $4 billion that lacked TINs in the same DFAS system. Missing TINs in the DOD payment record prevented us from determining if the payees were contractors with unpaid taxes. 
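The obviously invalid TIN patterns noted above (all one repeated digit, or the ascending run 123456789) can be screened with a simple check. This sketch only flags suspicious patterns; the routine validation we recommended would match TINs against IRS files.

```python
import re

def tin_looks_invalid(tin):
    """Flag TINs of the obviously invalid kinds noted in the review."""
    if not re.fullmatch(r"\d{9}", tin or ""):   # a TIN is nine digits
        return True
    if len(set(tin)) == 1:                      # all the same digit, e.g., 999999999
        return True
    if tin == "123456789":                      # the unusual ascending series
        return True
    return False

# Both patterns cited in the review are flagged; a plausible nine-digit
# TIN passes this screen but is not thereby verified against IRS records.
tin_looks_invalid("999999999")  # True
tin_looks_invalid("123456789")  # True
tin_looks_invalid("521234567")  # False
```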
DOD financial management regulations require that after reasonable efforts to obtain the TIN have been unsuccessful, federal income tax at 31 percent should be withheld and the balance of the payment forwarded to the payee. Another factor that contributes to understating the amount of unpaid federal taxes owed by DOD contractors is that the IRS taxpayer account database reflects only the amount of unpaid taxes either reported by the taxpayer on a tax return or assessed by IRS through its various enforcement programs. The IRS database does not reflect amounts owed by businesses and individuals that have not filed tax returns and for which IRS has not assessed tax amounts due. During our review, we identified instances in which a DOD contractor failed to file tax returns for a particular tax period and, therefore, was listed in IRS records as having no unpaid taxes. Consequently, the true extent of unpaid taxes for these businesses and individuals is not known. It is important to note that timing issues could result in some DOD contractors that we identified with unpaid taxes having already paid the amounts due. For example, some very recent amounts that appear as unpaid taxes through a matching of DOD and IRS records may involve matters that are routinely resolved between the taxpayer and IRS, with the taxes paid, abated, or both within a short period. Also, it should be noted that some assessments developed by IRS through third party information may be overstated due to a lack of taxpayer information (e.g., deductions). Similarly, as we have previously reported, IRS records contain errors that affect the accuracy of taxpayer account information, and lead to both lost opportunities to collect outstanding taxes and a burden on taxpayers because IRS continues to pursue amounts from taxpayers that are no longer owed. Consequently, some of the nearly $3 billion may not reflect true unpaid taxes, although we cannot quantify this amount. 
Nonetheless, we believe the nearly $3 billion represents a reasonable yet conservative estimate of unpaid federal taxes owed by DOD contractors. We estimate that DOD, which functions as its own disbursing agent, could have levied payments and collected at least $100 million in unpaid taxes in fiscal year 2002 if it and IRS had worked together to effectively levy contractor payments. However, in the 6 years since the passage of the Taxpayer Relief Act of 1997, DOD has collected only about $687,000. DOD collections to date relate to DFAS payment reporting associated with implementation of the TOP process in December 2002 for its Mechanization of Contract Administration Services (MOCAS) contract payment system, which disbursed over $86 billion to DOD contractors in fiscal year 2002. DFAS had no plans or schedule at the completion of our review to report payment information to TOP for any of its 15 vendor payment systems, which disbursed another $97 billion to DOD contractors in fiscal year 2002. IRS’s continuing challenges in pursuing and collecting unpaid taxes also hinder the government’s ability to take full advantage of the levy program. For example, due to resource constraints, IRS has established policies that either exclude or delay referral of a significant number of cases to the program. The IRS review process for taxpayer requests, such as installment agreements or certain offers in compromise, which IRS is legally required to consider, often takes many months, during which time IRS excludes these cases from the levy program. In addition, inaccurate or outdated information in IRS systems prevents cases from entering the levy program. Our audit and investigation of 47 DOD contractor case studies, discussed in detail later in this report, also show IRS continuing to work with businesses and individuals to achieve voluntary compliance and taking enforcement actions, such as levies of federal contractor payments, later in the collection process. 
From a governmentwide perspective, making payments to federal contractors without requiring the businesses or individuals to meet their tax obligations through methods such as levying payments to collect unpaid taxes is not a sound business practice. Until DOD begins to fulfill its responsibilities under DCIA by fully assisting IRS in its attempts to levy contractor payments and IRS fully utilizes its authority under the Taxpayer Relief Act of 1997, the federal government will continue to miss opportunities to collect on hundreds of millions of dollars in unpaid federal taxes owed by DOD contractors. Although it has been more than 7 years since the passage of DCIA, DOD has not fully assisted IRS in using its continuous levy authority for the collection of unpaid taxes by providing FMS with all DFAS payment information. IRS’s continuous levy authority authorizes the agency to collect federal tax debts of businesses and individuals that receive federal payments by levying up to 15 percent of each payment until the debt is paid. Under TOP, FMS matches a database of debtors (including those with federal tax debt) to certain federal payments (including payments to DOD contractors). When a match occurs, the payment is intercepted, the levied amount is sent to IRS, and the balance of the payment is sent to the debtor. The TOP database includes federal tax and nontax debt, state tax debt, and child support debt. All disbursing agencies are to compare their payment records with the TOP database. Since DOD has its own disbursing authority, once DFAS is notified by FMS of the amount to be levied, it should deduct this amount from the contractor payment before it is made to the payee and forward the levied amount to the Department of the Treasury. By fully participating in the TOP process, DOD will also aid in the collection of other debts, such as child support and federal nontax debt (e.g., student loans). 
At the completion of our work, DOD had no formal plans or schedule to begin providing payment information from any of its 15 vendor payment systems to FMS for comparison with the TOP database. These 15 payment systems disbursed almost $97 billion to DOD contractors in fiscal year 2002. DFAS officials contend that it would be difficult to provide this payment information to TOP because the systems are decentralized and nonintegrated in 22 different payment locations. As we have previously reported, DOD’s business systems environment is stovepiped and not well integrated. DOD recently reported that its current business operations were supported by approximately 2,300 systems in operation or under development, and requested approximately $18 billion in fiscal year 2003 for the operation, maintenance, and modernization of its business systems. In addition, DFAS did not have an organizational structure in place to implement the TOP payment reporting process. DOD recently communicated a timetable for implementing TOP reporting for its vendor payment systems with completion targeted for March 2005. Until DOD establishes processes to provide information from all payment systems to TOP, the federal government will continue missing opportunities to collect hundreds of millions of dollars in unpaid taxes owed by DOD contractors. Although DFAS recently began providing payment information to TOP from its largest payment system, total collections to date have been minimal. In December 2002, DFAS began providing FMS with payment information for its MOCAS contract payment system, which disbursed over $86 billion to contractors in fiscal year 2002. According to IRS, from December 2002 through September 2003, DOD collected about $687,000 in unpaid taxes from contractor payments. However, our analysis of IRS records for DOD contractors receiving fiscal year 2002 payments from MOCAS showed that these contractors owed about $750 million in unpaid federal taxes as of September 30, 2002. 
As mentioned previously, IRS records showed that over 27,100 contractors in DOD’s CCR database owed nearly $3 billion in unpaid federal taxes as of September 30, 2002. We reviewed payment transactions in five of the largest DOD disbursement systems covering about 72 percent of the fiscal year 2002 disbursements, or almost $131 billion, from DFAS contract and vendor payment systems. Contractors paid through these five DOD automated systems represented at least $1.7 billion of the nearly $3 billion in unpaid federal taxes shown on IRS records. We estimate that DOD could have offset contractor payments to collect at least $100 million of this amount in fiscal year 2002 if DOD had been fulfilling its responsibilities under DCIA to compare its payment records with the TOP database. Although the levy program could provide a highly effective and efficient method of collecting unpaid taxes from contractors that receive federal payments, IRS policies restrict the number of cases that enter the program and the point in the collection process they enter the program. For each of the collection phases listed below, IRS policy either excludes or severely delays putting cases into the levy program. Phase 1: Notify taxpayer of unpaid taxes, including a demand for payment letter. Phase 2: Place the case into the Automated Collection System (ACS) process. The ACS process consists primarily of telephone calls to the taxpayer to arrange for payment. Phase 3: Move the case into a queue of cases awaiting assignment to a field collection revenue officer. Phase 4: Assign the case to field collections where a revenue officer attempts face-to-face contact and collection. As of September 30, 2002, IRS listed $81 billion of cases in these four phases: 17 percent were in notice status, 17 percent were in ACS, 26 percent were in field collection, and 40 percent were in the queue awaiting assignment to the field. 
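The split of the $81 billion across the four phases can be tallied directly; this is back-of-the-envelope arithmetic in Python using the percentages given above.

```python
# Tally of the $81 billion of cases across the four collection phases,
# using the percentages cited above (amounts in billions of dollars).
total = 81
shares = {"notice": 0.17, "ACS": 0.17, "field collection": 0.26, "queue": 0.40}
amounts = {phase: round(total * share, 1) for phase, share in shares.items()}
# The queue alone accounts for roughly $32 billion awaiting assignment.
```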
At the same time these four phases take place, sometimes over the course of years, DOD contractors with unpaid taxes continue to receive billions of dollars in contract payments. IRS excludes cases in the notification phase from the levy program to ensure proper notification rules are followed. However, as we previously reported, once proper notification has been completed, IRS continues to delay or exclude from the levy program those accounts placed in the other three phases. IRS policy is to exclude accounts in the ACS phase primarily because officials believed they lack the resources to issue levy notices and respond to the potential increase in telephone calls from taxpayers responding to the notices. Additionally, IRS excludes the majority of cases in the queue phase (awaiting assignment to field collection) from the levy program for 1 year. Only after cases await assignment for over a year does IRS allow them to enter the levy program. Finally, IRS excludes most accounts from the levy program once they are assigned to field collection because revenue officers said that the levy action could interfere with their successfully contacting taxpayers and resolving the unpaid taxes. These policy decisions, which may be justified in some cases, result in IRS excluding millions of cases from potential levy. IRS officials who work on ACS and field collection inventories can manually unblock individual cases they are working in order to put them in the levy program. However, by excluding cases in the ACS and field collection phases, IRS records indicate it excluded as much as $34 billion of cases from the levy program as of September 30, 2002. In January 2003, IRS unblocked and made available for levy those accounts identified as receiving federal salary or annuity payments. However, other accounts remain blocked from the levy program. IRS stated that it intended to unblock a portion of the remaining accounts sometime in 2005. 
Additionally, $32 billion of cases are in the queue, and thus under existing policy, would be excluded from the levy program for the first year each case is in that phase. IRS policies along with its inability to more actively pursue collections, both of which IRS has in the past attributed to resource constraints, combine to prevent many cases from entering the levy program. Since IRS has a statutory limitation on the length of time it can pursue unpaid taxes, generally 10 years from the date of the assessment, these long delays greatly decrease the potential for IRS to collect the unpaid taxes. We identified specific examples of IRS not actively pursuing collection in our audit and investigation of 47 selected cases involving DOD contractors. For example, IRS used a special code within its automated systems to block collection action for almost 10 months for one DOD contractor that owed nearly $260,000 in unpaid taxes. Specifically, IRS closed collection actions against this case (using an administrative transaction code it refers to as 530-39) citing resource and workload management considerations. IRS is not currently seeking collection of about $14.9 billion of unpaid taxes because of this administrative code—about 5 percent of its overall inventory of unpaid assessments as of September 30, 2002. Once IRS reversed the special code, it placed the contractor into its queue of cases awaiting assignment for collection action. The contractor remained in the queue, awaiting assignment, from October 2001 through the time of our review in May 2003—19 months. DOD paid this contractor over $110,000 in fiscal year 2002, missing opportunities to collect as much as $17,000 through the 15 percent levy. For another DOD contractor, IRS coded the individual within its automated systems in 1999 as having financial hardship and therefore unable to pay. 
This code put collection activities on hold until the individual’s adjusted gross income (per subsequent tax return filings) exceeded a certain threshold. At the same time, IRS entered a code to prevent further collection actions because of its own resource constraints. IRS automated systems are designed to automatically reverse the financial hardship code when the adjusted gross income exceeds a certain threshold. That reversal would put the contractor back into the IRS collection system. However, before that occurred, the contractor stopped filing tax returns in 1997 and the IRS resource constraint code had the unintended effect of IRS not attempting to obtain the unfiled tax returns. This combination of codes effectively stopped collection action from taking place for this contractor and created a catch-22 situation since one code prevents IRS from pursuing the individual until a filed tax return reports higher income and the other code prevents IRS from pursuing the individual to obtain the unfiled tax returns. DOD paid this individual nearly $220,000 in 2002 and almost $700,000 since 1999. If an effective 15 percent levy had been in place, the government could have collected over $30,000 of the unpaid taxes in 2002. Because of the individual’s failure to file, the true amount of unpaid taxes is not known, but could be significantly greater than the over $160,000 currently reflected in IRS records. Some cases repeatedly enter the queue awaiting assignment to a field collection revenue officer and remain there for long periods. For example, one DOD contractor had gone between ACS and the queue awaiting assignment since 1998. This individual’s case entered the queue three times but was never assigned. As of May 2003, this case had spent almost 3 and a half years in the queue. Moving a case in and out of the queue affects its eligibility for the levy program. For another contractor involving over $100,000 in unpaid taxes, IRS put the case into ACS in July 2000.
As noted previously, IRS routinely blocks ACS cases from entering the levy program. Nine months later, in April 2001, IRS moved this case from ACS into the queue to await assignment to a revenue officer. Again, in accordance with IRS policy, IRS excludes cases in the queue from entering the levy program for 1 year. After 1 year, the case was referred to the levy program, so this case took about 21 months from the time it initially went to ACS until it was moved into the levy program. The contractor received over $350,000 in federal payments from 1999 to 2002, and current payments would not be subject to the 15 percent levy because DOD is not reporting information from the vendor payment system to TOP. In addition to excluding cases for various operational and policy reasons as described above, IRS excludes cases from the levy program for particular taxpayer events, such as bankruptcy, litigation, or financial hardship, as well as when taxpayers apply for an installment agreement or an offer in compromise. When one of these events takes place, IRS enters a code in its automated system that excludes the case from entering the levy program. Although these actions are appropriate, IRS may lose opportunities to collect through the levy program if the processing of agreements is not timely or prompt action is not taken to cancel the exclusion when the event, such as a dismissed bankruptcy petition, is concluded. Delays in processing taxpayer documents and errors in taxpayer records are long-standing problems at IRS and can harm both government interests and the taxpayer. In 2002, the IRS Taxpayer Advocate Service reported that over 65 percent of all offers in compromise take longer than 6 months to process. Similarly, in our audits of IRS financial statements, we reported on delays in processing offers in compromise. 
In those audits, we identified delays in processing that were outside IRS’s control (such as taxpayer failure to provide appropriate documentation to support the offer), as well as delays caused by IRS inactivity. These findings are consistent with an earlier IRS internal audit report that found, in a majority of cases sampled, that IRS had periods of inactivity that lasted 60 days or more. Similarly, past audits have identified instances in which inaccurate records allowed tax refunds to be released to citizens who owe taxes and other cases in which IRS erroneously assessed millions of dollars due to inaccurate records. Our audit of cases involving DOD contractors with unpaid federal taxes indicates that problems persist in the timeliness of processing taxpayer applications and in the accuracy of IRS records. In our review of DOD contractors with unpaid federal taxes, we identified a number of cases in which the processing of DOD contractor applications for an offer in compromise or an installment agreement was delayed for long periods, thus blocking the cases from the levy program and potentially reducing government collections. For example, in one case, a DOD contractor with nearly $400,000 in unpaid federal taxes applied for an offer in compromise in mid-1999, but IRS did not reject the offer until July 2000—over a year later. In this same case, the individual filed for an installment agreement in March 1999, but it took IRS over 2 years—until mid-2001—to reject the proposed agreement. During this period, the individual’s account was blocked from potential levying. From 1999 to 2001, DOD paid this individual over $200,000 in contract payments. Had DOD been reporting its payments to TOP during this period and had IRS not blocked the account for a potential levy, a 15 percent levy of these payments could have generated over $30,000 in collections for the government. 
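The potential levy collections cited in these examples are straightforward arithmetic: 15 percent of the federal payments made to the contractor, with total collections limited by the contractor’s outstanding tax balance. The following minimal sketch illustrates that calculation; the function name and the cap at the unpaid balance are our own illustrative assumptions, not a description of IRS’s actual levy processing.

```python
# Illustrative sketch of the 15 percent continuous-levy arithmetic
# discussed above; not IRS's actual implementation.

LEVY_RATE = 0.15  # maximum levy rate authorized by law for federal payments


def potential_levy_collection(payments, unpaid_taxes):
    """Estimate what a 15 percent levy could have collected from a stream
    of federal payments, capped at the contractor's unpaid tax balance
    (a levy cannot collect more than is owed)."""
    return min(LEVY_RATE * payments, unpaid_taxes)


# The offer-in-compromise case above: over $200,000 in DOD payments from
# 1999 to 2001 against nearly $400,000 in unpaid taxes.
print(potential_levy_collection(200_000, 400_000))  # approximately 30,000
```

The same rate applied to the earlier examples yields the figures cited there: roughly $17,000 on $110,000 in payments and over $30,000 on $220,000 in payments.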
In another example, IRS both took a long time to decide whether to accept a DOD contractor’s proposed installment agreement and failed to properly reverse the codes once a decision was made. The case had a levy block due to a proposed installment agreement submitted by the business in mid-2000. As mentioned above, under IRS regulations, once a code is entered into the system indicating that a taxpayer has applied for or is currently under an offer in compromise or installment agreement, the case is automatically blocked from the levy program. IRS rejected the installment agreement offer after a year. However, as of our review in May 2003, IRS had still not properly reversed the code in its systems indicating that an installment agreement application was pending. Consequently, this account with over $60,000 in unpaid taxes was inappropriately excluded from the levy program for 2 years. Meanwhile, this business received nearly $30,000 in payments from DOD while the statutory period in which IRS had to collect the unpaid taxes continued to run. We found that inaccurate coding at times prevented both IRS collection action and the entry of cases into the levy program. Because the coding within a taxpayer’s account determines whether the account will enter the levy program, effective management of these codes is critical. If these blocking codes remain in the system for long periods, either because IRS delays processing taxpayer agreements or because IRS fails to input or reverse codes after processing is complete, cases may be needlessly excluded from the levy program. For example, as of May 2003, one DOD contractor had been assigned to field collection since the spring of 1996. However, the case entered bankruptcy, thus blocking it from the levy program and preventing all collection action on the case. Although the bankruptcy was settled in 1998, the case was never released for collection action.
IRS had incorrectly entered a reversal code, causing the case to remain in bankruptcy status and therefore blocking it from the levy program. As a result of our review, IRS was attempting to reverse the bankruptcy code and begin collection action on the case. Similarly, in another case, a DOD contractor entered into an installment agreement with IRS in the spring of 1999, at which time IRS posted the appropriate code to block other collection activities. The individual defaulted on the agreement in 1999, after making three payments. However, IRS did not post the code required to cancel the installment agreement, leaving the individual’s account blocked from collection activities, such as the levy program. If the correct code had been posted, IRS systems would have automatically put the individual in the levy program in late 2000 when IRS implemented the program. Although the nation’s tax system is built upon voluntary compliance, when businesses and individuals fail to pay voluntarily, the government has a number of enforcement tools to compel compliance or elicit payment. Our review of DOD contractors with unpaid federal taxes indicates that although the levy program could be an effective, reliable collection tool, IRS is not using the program as a primary tool for collecting unpaid taxes from federal contractors. For the cases we audited, IRS subordinated the use of the levy program in favor of negotiating voluntary tax compliance with the business or individual. We recently recommended that IRS study the feasibility of submitting all eligible unpaid federal tax accounts to FMS on an ongoing basis for matching against federal payment records under the levy program, and use information from any matches to assist IRS in determining the most efficient method of collecting unpaid taxes, including whether to use the levy program.
Although IRS raised concerns that increasing the use of the levy program would increase workload for its staff and would entail excessively high computer programming costs, it agreed to study the feasibility of such an arrangement. The study was not completed at the time of our review. For the DOD contractors we audited and investigated, IRS’s attempts to gain voluntary compliance often resulted in minimal or no actual collections. For example, one case involved a sole proprietorship that had gross revenue of over $40 million in 2001, about 10 percent of which came from DOD contract payments. Although this business worked primarily for federal agencies, it failed to remit payroll and unemployment taxes and had accumulated unpaid federal taxes of nearly $10 million. Even with the mounting tax debt, revenue officers continued working to get the business to make payments, including executing an installment agreement, on which the business defaulted. After the default, IRS did not put the case into the levy program. In November 2002, the revenue officer put a 1-year collection hold on the business to see if it could restructure, cut costs, and become profitable so that it could enter into another installment agreement to voluntarily pay the tax debt. Throughout this period, the business rarely paid its current taxes (essentially additional payroll taxes) on time or in full, yet the business continued to operate and increase the amount of unpaid federal taxes owed. In this case, IRS did not levy the business’s assets because it thought a levy would cause the business to fail. However, the state in which the business operated seized funds from the business’s bank account in early 2003 to partially settle the business’s state tax debt. This caused the business to cease operations in early 2003, leaving IRS with a potentially uncollectible debt of nearly $10 million.
As another example, shortly after one business in our selection of DOD contractors defaulted on an installment agreement, it requested and received another installment agreement. The business promised to make current tax payments. However, after only a few months the business was not paying its current tax liabilities (essentially additional payroll taxes) and had fallen behind on the installment agreement. Even without the business accumulating more debt, the installment agreement required the business to make monthly payments for 13 years. Given the business’s history of default, failure to pay its current tax debt, and default on the current agreement, indications were the business would not fulfill this obligation. However, instead of canceling this long-term payment plan and preventing the business from accumulating additional debt due to its failure to remit current quarterly payroll taxes, IRS reinstated the installment agreement and declined to put a lien on the business’s properties. The business again defaulted on the installment agreement less than 2 months after initiation, and at the time of our review, IRS was negotiating with the business for yet another installment agreement. The nation’s tax system is rooted in the doctrine of its citizens voluntarily complying with the tax laws. IRS has a difficult task in maintaining a balance between this key doctrine and effectively fulfilling its role as the nation’s tax collector. The philosophical thrust of this doctrine can, however, negatively affect IRS’s ability to collect what is legitimately owed to the government. If IRS fails or is limited in its ability to act quickly and aggressively against businesses and individuals that repeatedly fail to pay the taxes they owe, it runs the risk of not fulfilling its mission. 
IRS also risks further weakening voluntary compliance as declines in enforcement programs may erode taxpayer confidence in the fairness of our federal tax system and may create the perception that there is little risk in noncompliance, a concern that Congress and others have also raised. The potential revenue losses and the threat to voluntary compliance make the collection of unpaid taxes a high-risk area. Prompt collection is important because, as discussed earlier, IRS generally has a finite period within which to seek collection of unpaid taxes. Generally, there is a 10-year statutory collection period beyond which IRS is prohibited from attempting to collect. Unless the collection period is extended, IRS removes unpaid taxes that exceed this statutory period from its records. Even if a case is not actively worked for extended periods, the collection period continues to move toward expiration, reducing IRS’s opportunity to collect the amount due. The levy program could help IRS take prompt enforcement action and operate more efficiently. In addition, from a governmentwide perspective, paying billions of dollars to DOD contractors that at the same time have substantial unpaid taxes is not a sound business practice. Withholding up to 15 percent of these payments is an effective collection method and is authorized by law. Additionally, the levy program can assist other collection activities. For example, in one case the levy helped IRS collect against a DOD contractor it was unable to locate. The IRS revenue officers tried without success for 5 years to contact this business owner. However, after IRS placed a lien on the owner’s assets and put the case into FPLP, which began to levy payments from the business’s contract with another federal agency, the contractor was ready to cooperate with IRS.
As the above case indicates, the levy program can have a far greater impact on the tax program than just the dollars levied. We reported in the past that businesses and individuals are more likely to pay voluntarily when faced with a notice of intent to levy. Our audit of DOD contractors also found this to be true. For example, IRS issued a levy notice to one DOD contractor in the spring of 2003. After complaining that the levy would force it into bankruptcy, the contractor agreed to begin making voluntary installment payments. IRS accepted this offer and therefore did not levy. At the time of our review in May 2003, IRS had received two payments from the contractor to begin paying the liability from its earliest tax period. In addition, the business made two tax deposits, totaling over $160,000, for current (2003) periods. This sequence of events indicates that, as we reported previously, the threat of IRS levy action often brings about tax payments, greater taxpayer compliance, and greater fairness for those who do pay their taxes. In a previous report, we estimated that after receiving a notice of intent to levy, about 29 percent of taxpayers take action that enables IRS to remove them from the active inventory of unpaid taxes or move them to an inactive status. Specifically, we estimated that subsequent to receiving a levy notice, about 19 percent of the taxpayers resolved their liability and were removed from the active inventory, while about 10 percent obtained determinations of financial hardship. By reclassifying some active accounts to an inactive status and removing others, the levy program helps IRS prioritize its inventory of unpaid taxes more efficiently and enables IRS to focus more of its resources on unpaid accounts that have more collection potential. As described above, the advantages of the levy program to IRS in assisting its collection efforts are clear, particularly given IRS’s claims of resource constraints.
However, IRS’s current implementation strategy appears to make the levy program one of the last collection tools IRS uses. Changing the program to (1) remove the policies that unnecessarily exclude cases from entering the levy program and (2) promote the use of the levy program as one of the first collection tools could allow IRS—and the government—to reap the advantages of the program earlier in the collection process. To determine whether there are instances of abusive or potentially criminal activity by DOD contractors related to the federal tax system, we selected 47 case study businesses and individuals that had unpaid taxes and were receiving DOD contractor payments in fiscal year 2002. We excluded cases that IRS categorized as “compliance assessment,” business cases with total unpaid taxes under $10,000, and individual cases with total unpaid taxes under $5,000. Our selection was based upon a business or individual having a large number of unpaid tax periods, owing a large tax debt, and receiving DOD contractor payments. For more information on our criteria for the selection of the 47 case studies, see appendix I. For all 47 cases that we audited and investigated, we found abusive or potentially criminal activity related to the federal tax system. Thirty-four of these case studies involved businesses with employees that had unpaid payroll taxes dating as far back as the early 1990s, some for as many as 62 tax periods. However, rather than fulfill their role as “trustees” of this money and forward it to IRS, these DOD contractors diverted the money for other purposes. To reiterate, the diversion of payroll taxes for personal or business use is potentially criminal activity. The other 13 case studies involved individuals who had unpaid income taxes dating as far back as the 1980s. We are referring the 47 cases detailed in this report to IRS for evaluation and additional collection action or criminal investigation.
DOD is a large and complex organization with a budget of about $400 billion and operations across the world. Because DOD contracts for a large variety of goods and services, it is not surprising that we found DOD contractors with unpaid taxes in a large number of industries. Table 1 shows a breakdown of our 47 contractor case studies by the type of goods and services provided to DOD. As discussed previously, businesses with employees are required by law to collect, account for, and transfer to IRS the income and employment taxes withheld from employees’ wages. IRS refers to these withheld payroll taxes as trust fund taxes because the employer holds the employee’s money “in trust” until the employer makes a federal tax deposit in that amount. Businesses that fail to remit payroll taxes to the federal government are liable for the amounts withheld from employees, and IRS can assess against individuals it determines to be “willful and responsible” for the nonpayment of withheld payroll taxes a TFRP equal to the total amount of taxes not collected or not accounted for and paid over. Typically, these individuals are the officers of a corporation, such as a president or treasurer. As we have found in previous reviews, collections of TFRP assessments from officers are generally minimal. In addition to civil penalties, criminal penalties exist for an employer’s failure to turn over withheld employee payroll taxes to IRS. The act of willfully failing to collect or pay over any tax is a felony. Additionally, the failure to comply with certain requirements for the separate accounting and deposit of withheld income and employment taxes is a misdemeanor. Our audit and investigation of the 34 case study business contractors showed substantial abuse or potential criminal activity, as all had unpaid payroll taxes and all diverted funds for personal or business use.
In table 2, and on the following pages, we highlight 13 of these businesses and estimate the amounts that could have been collected through the levy program based on fiscal year 2002 DOD payments. For these 13 cases, the businesses owed unpaid taxes for a range of 6 to 30 quarters (tax periods). Eleven of these cases involved businesses that had unpaid taxes in excess of 10 tax periods, and 5 of these were in excess of 20 tax periods. The amount of unpaid taxes associated with these 13 cases ranged from about $150,000 to nearly $10 million; 7 businesses owed in excess of $1 million. Among these 13 cases, we saw some in which IRS filed tax liens on property and bank accounts of the businesses, and a few in which IRS collected minor amounts through the levying of non-DOD federal payments. We also saw 1 case in which the business applied for an offer in compromise, which IRS rejected on the grounds that the business had the financial resources to pay the outstanding taxes in their entirety, and 2 cases in which the business had entered into, and subsequently defaulted on, installment agreements to pay the outstanding taxes. In 5 of the 13 cases, IRS assessed TFRPs against the owners or business officers, yet no collections were received from these penalty assessments. The following provides illustrative detailed information on several of these cases. Case # 1 - This base support contractor provided services such as trash removal, building cleaning, and security at U.S. military bases. The business had revenues of over $40 million in 1 year, with over 25 percent of this coming from federal agencies. This business’s outstanding tax obligations consisted of unpaid payroll taxes. In addition, the contractor defaulted on an IRS installment agreement. IRS assessed a TFRP against the owner. The business reported that it paid the owner a six-figure income and that the owner had borrowed nearly $1 million from the business.
The business also made a down payment for the owner’s boat and bought several cars and a home outside the country. The owner allegedly has now relocated his cars and boat outside the United States. This contractor went out of business in 2003 after state tax authorities seized its bank account. The business transferred its employees to a relative’s business, which also had unpaid federal taxes, and submitted invoices and received payments from DOD on a previous contract through August 2003. Case # 2 - This engineering research contractor received nearly $400,000 from DOD during 2002. At the time of our review, the contractor had not remitted its payroll tax withholdings to the federal government since the late 1990s. In 1996, the owner bought a home and furnishings worth approximately $1 million and borrowed nearly $1 million from the business. The owner told our investigators that the payroll tax funds were used for other business purposes. Case # 3 - This aircraft parts manufacturer did not pay payroll withholding and unemployment taxes for 19 of 20 periods through the mid- to late 1990s. IRS assessed a TFRP against several corporate officers, and placed the business in FPLP in 2000. This business claims that its payroll taxes were not paid because the business had not received DOD contract payments; however, DOD records show that the business received over $300,000 from DOD during 2002. Case # 5 - This janitorial services contractor reported revenues of over $3 million and had received over $700,000 from DOD in a recent year. The tax problems of this business date back to the mid-1990s. At the time of our review, the business had both unpaid payroll and unemployment taxes of nearly $3 million. In addition, the business did not file its corporate tax returns for 8 years. IRS assessed a TFRP against the principal officer of the business in early 2002. This contractor employed two officers who had been previously assessed TFRPs related to another business. 
Case # 7 - This furniture business reported gross revenues of over $200,000 and was paid nearly $40,000 by DOD in a recent year. The business had accumulated unpaid federal taxes of over $100,000 at the time of our review, primarily from unpaid employee payroll taxes. The business also did not file tax returns for several years, even after repeated notices from IRS. The owners proposed to pay IRS a portion of the unpaid taxes through an offer in compromise, but IRS rejected the offer because it concluded that the business and its owners had the resources to pay the entire amount. At the time of our audit, IRS was considering assessing a TFRP against the owners to make them personally liable for the taxes the business owed. The owners used the business to pay their personal expenses, such as their home mortgage, utilities, and credit cards. The owners said they considered these payments a loan from the business. Under this arrangement, the owners were not reporting this company benefit as income, so they were not paying income taxes on it, and the business was reporting inflated expenses. Case # 9 - This family-owned and operated building contractor provided a variety of products and services to DOD, and DOD provided a substantial portion of the contractor’s revenues. At the time of our review, the business had unpaid payroll taxes dating back several years. In addition to failing to remit the payroll taxes it withheld from employees, the business had a history of filing tax returns late, sometimes only after repeated IRS contact. Additionally, DOD made an overpayment of tens of thousands of dollars to the contractor. Subsequently, DOD paid the contractor over $2 million without offsetting the earlier overpayment. Case # 10 - This base support services contractor had close to $1 million in unpaid payroll and unemployment taxes dating back to the early 1990s, and the business had paid less than 50 percent of the taxes it owed.
IRS assessed a TFRP against one of the corporate officers. This contractor received over $200,000 from DOD during 2002. Individuals are responsible for the payment of income taxes, and our audit and investigation of 13 individuals showed significant abuse of the federal tax system similar to what we found in our DOD business case studies. In table 3, and on the following pages, we highlight four of the individual case studies. In all four cases, the individuals had unpaid income taxes. In one of the four cases, the individual operated a business as a sole proprietorship with employees and had unpaid payroll taxes. Taxes owed by the individuals spanned four to nine tax periods, which in these cases equated to tax years. Each individual owed in excess of $100,000 in unpaid income taxes, with one owing in excess of $200,000. In two of the four cases, the individuals had entered into, and subsequently defaulted on, at least one installment agreement to pay off the tax debt. The following provides illustrative detailed information on these four cases. Case # 14 - This individual’s business repaired and painted military vehicles. The owner failed to pay personal income taxes and did not send employee payroll tax withholdings to IRS. The owner owed over $500,000 in unpaid federal business and individual taxes. Additionally, the TOP database showed the owner had unpaid child support. IRS levied the owner’s bank accounts and placed liens against the owner’s real property and business assets. The business received over $100,000 in payments from DOD in a recent year, and the contractor’s current DOD contracts are valued at over $60 million. In addition, the business was investigated for paying employee wages in cash. Despite the large tax liability, the owner purchased a home valued at over $1 million and a luxury sports car. Case # 15 - This individual, who is an independent contractor and works as a dentist at a military installation, had a long history of not paying income taxes.
The individual did not file several tax returns and did not pay taxes in other periods when a return was filed. The individual entered into an installment agreement with IRS but defaulted on the agreement. This individual received $78,000 from DOD during a recent year, and DOD recently increased the individual’s contract by over $80,000. Case # 16 - This individual is another independent contractor who also works as a dentist on a military installation. DOD paid this individual over $200,000 in recent years, and recently signed a multiyear contract worth over $400,000. At the time of our review, this individual had paid income taxes for only 1 year since the early 1990s and had accumulated unpaid taxes of several hundred thousand dollars. In addition, the individual’s prior business practice owes over $100,000 in payroll and unemployment taxes for multiple periods going back to the early 1990s. Case # 17 - DOD paid this individual nearly $90,000 for presenting motivational speeches on management and leadership. This individual has failed to file tax returns since the late 1990s and had unpaid income taxes for a 5-year period from the early to mid-1990s. The total amount of unpaid taxes owed by this individual is not known because of the individual’s failure to file income tax returns for a number of years. IRS placed this individual in the levy program in late 2000; however, DOD payments to this individual were not levied because DFAS payment information was not reported to TOP as required. See appendix II for details on the other 30 DOD contractor case studies. Federal law does not prohibit a contractor with unpaid federal taxes from receiving contracts from the federal government. Existing mechanisms for doing business only with responsible contractors do not prevent businesses and individuals that abuse the federal tax system from receiving contracts. 
Further, the government has no coordinated process for identifying and determining the businesses and individuals that should be prevented from receiving contracts and for conveying that information to contracting officers for use before awarding contracts. In previous work, we supported the concept of barring delinquent taxpayers from receiving federal contracts, loans and loan guarantees, and insurance. In March 1992, we testified on the difficulties involved in using tax compliance as a prerequisite for awarding federal contracts. In May 2000, we testified in support of H.R. 4181 (106th Congress), which would have amended DCIA to prohibit delinquent federal debtors, including delinquent taxpayers, from being eligible to contract with federal agencies. Safeguards in the bill would have enabled the federal government to procure goods or services it needed from delinquent taxpayers for designated disaster relief or national security. Our testimony also pointed out implementation issues, such as the need to first ensure that IRS systems provide timely and accurate data on the status of taxpayer accounts. However, this legislative proposal was not adopted and there is no existing statutory bar on delinquent taxpayers receiving federal contracts. Federal agencies are required by law to award contracts to responsible sources. This statutory requirement is implemented in the FAR, which requires that government purchases be made from, and government contracts awarded to, responsible contractors only. To effectuate this policy, the government has established a debarment and suspension process and established certain criteria for contracting officers to consider in determining a prospective contractor’s responsibility. 
Contractors debarred, suspended, or proposed for debarment are excluded from receiving contracts and agencies are prohibited from soliciting offers from, awarding contracts to, or consenting to subcontracts with these contractors, unless compelling reasons exist. Prior to award, contracting officers are required to check a governmentwide list of parties that have been debarred, suspended, or declared ineligible for government contracts, as well as to review a prospective contractor’s certification on debarment, suspension, and other responsibility matters. Among the causes for debarment and suspension is tax evasion. In determining whether a prospective contractor is responsible, contracting officers are also required to determine that the contractor meets several specified standards, including “a satisfactory record of integrity and business ethics.” Except for a brief period during 2000 through 2001, contracting officers have not been required to consider compliance with federal tax laws in making responsibility determinations. Neither the current debarment and suspension process nor the requirements for considering contractor responsibility effectively prevent the award of government contracts to businesses and individuals that abuse the tax system. Since most businesses and individuals with unpaid taxes are not charged with tax evasion, and fewer still convicted, these contractors would not necessarily be subject to the debarment and suspension process. None of the contractors described in this report were charged with tax evasion for the abuses of the tax system we identified. A prospective contractor’s tax noncompliance, other than tax evasion, is not considered by the contracting officer before deciding whether to award a contract. Further, no coordinated and independent mechanism exists for contracting officers to obtain accurate information on contractors that abuse the tax system. 
Such information is not obtainable from IRS because of a statutory restriction on disclosure of taxpayer information. As we found in November 2002, unless reported by prospective contractors themselves, contracting officers face significant difficulties obtaining or verifying tax compliance information on prospective contractors. Moreover, even if a contracting officer could obtain tax compliance information on prospective contractors, a determination of a prospective contractor’s responsibility under the FAR when a contractor abused the tax system is still subject to a contracting officer’s individual judgment. Thus, a business or individual with unpaid taxes could be determined to be responsible depending on the facts and circumstances of the case. Since the responsibility determination is largely committed to the contracting officer’s discretion and depends on the contracting situation involved, there is the risk that different determinations could be reached on the basis of the same tax compliance information. On the other hand, if a prospective contractor’s tax noncompliance results in mechanical determinations of nonresponsibility, de facto debarment could result. Further, a determination that a prospective contractor is not responsible under the FAR could be challenged. Because individual responsibility determinations can be affected by a number of variables, any implementation of a policy designed to consider tax compliance in the contract award process may be more suitably addressed on a governmentwide basis. The formulation and implementation of such a policy may most appropriately be the role of OMB’s Office of Federal Procurement Policy. The Administrator of Federal Procurement Policy provides overall direction for governmentwide procurement policies, regulations, and procedures. 
In this regard, OMB’s Office of Federal Procurement Policy is in the best position to develop and pursue policy options for prohibiting federal contract awards to businesses and individuals that abuse the tax system. Thousands of DOD contractors that failed in their responsibility to pay taxes continue to get federal contracts. Allowing these contractors to do business with the federal government while not paying their federal taxes creates an unfair competitive advantage for these businesses and individuals at the expense of the vast majority of DOD contractors that do pay their taxes. DOD’s failure to fully comply with DCIA and IRS’s continuing challenges in collecting unpaid taxes have contributed to this unacceptable situation, and have resulted in the federal government missing the opportunity to collect hundreds of millions of dollars in unpaid taxes from DOD contractors. Working closely with IRS and Treasury, DOD needs to take immediate action to comply with DCIA and thus assist in effectively implementing IRS’s legislative authority to levy contract payments for unpaid federal taxes. Also, IRS needs to better leverage its ability to levy DOD contractor payments, moving quickly to use this important collection tool. Beyond DOD, the federal government needs a coordinated process for dealing with contractors that abuse the federal tax system, including taking actions to prevent these businesses and individuals from receiving federal contracts. In view of congressional interest in both tax collection and government contracting, Congress may wish to consider the following two actions. 
Until such time as DOD is able to demonstrate that it is meeting its responsibilities under DCIA, including providing payment information to TOP for offsetting unpaid federal taxes, and to facilitate action by the department, Congress may wish to consider requiring that DOD report periodically to Congress on its progress in implementing DCIA for each of its contract and vendor payment systems. This report should include details of actual collections by system and in total for all contract and vendor payment systems during the reporting period. In addition, Congress may wish to consider requiring that OMB report to Congress on progress in developing and pursuing options for prohibiting federal government contract awards to businesses and individuals that abuse the federal tax system, including periodic reporting of actions taken. To improve collection of DOD contractor tax debt, we recommend that DOD take four corrective actions, IRS take four corrective actions, and OMB take one corrective action. To comply with the DCIA and support IRS efforts under the Taxpayer Relief Act of 1997 to collect unpaid federal taxes, we recommend that the Secretary of Defense direct the Under Secretary of Defense (Comptroller) to take four long- and short-term actions. For the long term, we recommend that the Under Secretary develop a formal plan to implement DCIA by providing payment information to TOP for all DFAS payment systems. At a minimum, the plan should designate officials responsible for implementing DCIA responsibilities for each payment system, including firm implementation dates for each payment system. For the short term, we recommend that the Under Secretary collaborate with Treasury's FMS to develop interim procedures for identifying active DOD contractors in TOP and develop manual procedures so that the levy of contractor payments can be started immediately for all DOD payment systems.
For both the long and short term, we recommend that the Under Secretary devote sufficient resources to implementing all aspects of TOP and the DOD plan. To help improve the effectiveness of IRS collection activities, we recommend that the Commissioner of Internal Revenue capitalize on the potential of the FPLP by taking the following three actions: using the levy program as one of the first steps in the IRS collection process; changing or eliminating policies that prevent businesses and individuals with federal contracts from entering the levy program; and evaluating the cost versus benefits of keeping businesses and individuals in the levy program, once placed in the program, until the taxes are fully paid. We further recommend that the Commissioner of Internal Revenue evaluate the 47 referred cases detailed in this report and consider whether additional collection action or criminal investigation is warranted. To help ensure that the federal government does not award contracts to businesses and individuals that have flagrantly disregarded their federal tax obligations (e.g., failed to remit payroll taxes for several tax periods or broken installment agreements), we recommend that the Director of OMB develop and pursue policy options for prohibiting federal contract awards to contractors in cases in which abuse of the federal tax system has occurred and the tax owed is not contested. Options could include designating such tax abuse as a cause for governmentwide debarment and suspension or, if allowed by statute, authorizing IRS to declare such businesses and individuals ineligible for government contracts.
We further recommend that any option OMB develops should consider whether additional legislation is needed; minimize administrative burdens on contracting officials, for example, by distributing the names of abusive contractors debarred, suspended, or declared ineligible on the governmentwide list of excluded parties that contracting officers are already required to check before awarding contracts; fully comply with the statutory restriction on disclosure of taxpayer information; and address any necessary exceptions, such as when the goods or services cannot be obtained from other sources or for national security. We received written comments on a draft of this report from the Under Secretary of Defense (Comptroller) (see app. III) and the Commissioner of Internal Revenue (see app. IV). DOD concurred with three of the four recommendations and partially concurred with the remaining recommendation. However, DOD disagreed with our matter for congressional consideration related to progress reporting. For the three recommendations with which it concurred, DOD stated that actions are under way to address our recommendations and provided a schedule of estimated implementation dates for all DFAS vendor payment systems. The schedule estimates completion of 17 vendor payment systems by March 2005. However, our report discusses 15 vendor payment systems because, during our review, DOD represented that there were only 15 vendor payment systems. We encourage DOD to continue to identify additional payment systems to be included in its implementation schedule. DOD added that it will devote the necessary resources to support the offset/levy program and will reevaluate the level of resources as the program progresses.
Although DOD concurred with our second recommendation regarding collaboration with Treasury for identifying active DOD contractors in TOP, the comments point out that for the one payment system that DOD has included in the levy program, the initial matches of contractors with the TOP database have been low. We did not review the methodology or process used by DFAS or by Treasury to make the matches. However, as stated in this report, we believe that an effective levy program at DOD would yield hundreds of millions of dollars in tax collections. DOD further noted that it has been and will continue to be proactive in working with Treasury to generate as many collections as possible. With the exception of actions taken with the MOCAS system, this statement is not accurate. DOD's comments in response to this report represent its initial schedule for reporting payment information to TOP for the 15 reported vendor payment systems through which it disbursed almost $97 billion to contractors in fiscal year 2002. Regarding the partial concurrence with our third recommendation dealing with development of manual procedures as a short-term corrective action, DOD stated that its implementation plan has been accelerated to 6 months for most payment systems, and that DOD's focus should remain on implementing a system-based process rather than temporary manual procedures. As previously mentioned, until the drafting of DOD's comments to this report, there were no formal plans for reporting payment information to TOP for any of DOD's vendor payment systems. Therefore, there was no plan for DOD to accelerate. In addition, we believe that given the magnitude of potential collections, it is unreasonable to wait for a systems solution, which may not be available for a long time. Manual procedures should be employed so that the offset of DOD payments can be started immediately.
Regarding the disagreement with the matters for congressional consideration, DOD stated that a requirement is not necessary for DOD to report to Congress on its progress in implementing the DCIA. We continue to believe that Congress may wish to consider such oversight since DOD has failed to fully implement the offset requirements of DCIA since its passage more than 7 years ago, and the federal government continues to miss opportunities to collect hundreds of millions of dollars in unpaid taxes owed by DOD contractors. IRS agreed with the issues raised in the report with respect to DOD contractors that abuse the federal tax system, and agreed that FPLP can become a more effective tool for collecting delinquent federal taxes owed by businesses and individuals that receive federal payments, including DOD contractors. Although IRS did not explicitly agree or disagree with the recommendations in our report, it noted a number of actions that it had taken or was taking to address the issues raised in this report, including steps to accelerate the collection of delinquent taxes. Specifically, IRS noted that it had made enhancements to its Inventory Delivery System to identify certain businesses with payroll taxes as high-priority work and that such cases would bypass the ACS phase of the collection process. IRS pointed out that it had made improvements to the cycle time of a number of its collection processes and cited recent improvements in expediting processing of offers in compromise. IRS stated that it had reviewed the systemic blocks on its FPLP procedures and information systems and, based on this review, will be making changes to its information systems to modify a number of blocks on cases in the queue and certain ACS business-related cases.
IRS will also work with DOD to ensure that contractor TINs in the CCR database are accurate and will work with both DOD and OMB in support of any changes they make with respect to how the federal government deals with contractors with unpaid taxes. Finally, IRS indicated that it would review the 47 case studies included in our report and take additional action as appropriate. While IRS agreed with the issues raised in the report, it pointed out that the statutory requirements under which IRS must operate, coupled with concerns for taxpayer rights, sometimes require IRS to remove a taxpayer from FPLP or prevent it from taking any enforcement action. IRS added that such requirements and considerations require IRS to take a more balanced approach to FPLP versus a cost-benefit approach. We recognize the statutory environment in which IRS operates in its efforts to collect outstanding taxes and that statutory requirements affect how the FPLP is used. We continue to believe, however, that FPLP provides an effective, reliable means of ensuring at least some collections on unpaid taxes and that IRS needs to consider a more aggressive and likely administratively efficient approach, subject to legal requirements, for government contractors that fail to pay their tax debt. On January 15, 2004, we received oral comments from representatives of OMB's Office of Federal Procurement Policy, Office of Federal Financial Management, and Office of the General Counsel. OMB questioned the need for developing or pursuing additional mechanisms to prohibit federal contract awards to "tax abusers." OMB said that defining "tax abuse" would not be a function of OMB and would be more appropriate for the Treasury Office of Tax Policy or Congress. In addition, officials said that current FAR guidance on responsibility (48 C.F.R. Subpart 9.1) as well as causes for suspension and debarment (48 C.F.R.
Subpart 9.4) and the Nonprocurement Common Rule on Suspension and Debarment, recently updated November 26, 2003 (68 Fed. Reg. 66533), provide contracting officers and grant officers with ample discretion to consider tax-related problems as a criterion for making awards. Specifically, they noted that FAR 9.104-1(d) requires prospective contractors to have, among other things, satisfactory records of integrity and business ethics. Accordingly, they said, failure to pay taxes or abuse of the tax system would be a factor in making this determination. OMB’s comments provide us no basis to change our recommendation that OMB develop and pursue policy options for prohibiting federal contract awards to contractors that abuse the tax system. While we agree with OMB that the definition of “tax abuse” should be developed in consultation with those government officials responsible for administering the nation’s tax laws, as the agency responsible for governmentwide procurement policy, we believe that OMB should assume a leadership role in ensuring that contractors that abuse the tax system are prohibited from receiving federal contracts. As we discussed in this report, contracting officers have the discretion to consider tax-related concerns in making determinations as to a contractor’s responsibility, specifically as to its record of integrity and business ethics. However, contracting officers are not required to consider a prospective contractor’s tax noncompliance, other than tax evasion, in deciding whether to award a contract and, as all 47 case studies in our report clearly illustrate, contracting officers are not doing so. There is no guidance for contracting officers on considering tax information, even if the information is legally available to them, nor is there any coordinated mechanism to help contracting officers obtain accurate information on contractors that abuse the tax system. 
As OMB pointed out, the existing suspension and debarment process includes an “other” category that provides for consideration of matters of “so serious or compelling a nature” that they affect a contractor’s present responsibility. However, OMB did not explain how this effectively prevents awards to contractors that abuse the federal tax system or provide examples of such debarred or suspended contractors. Because the debarment and suspension process does not appear to be preventing federal awards to contractors that abuse the tax system, we continue to suggest that tax abuse be specifically designated or authorized as a cause for debarment, suspension, or ineligibility. As agreed with your offices, unless you announce the contents of this report earlier, we will not distribute it until 30 days after its date. At that time, we will send copies to the Secretary of Defense; the Secretary of the Treasury; the Director, Office of Management and Budget; the Commissioner of the Financial Management Service; the Commissioner of Internal Revenue; the Under Secretary of Defense for Acquisition, Technology, and Logistics; the Under Secretary of Defense (Comptroller); the Director, Defense Finance and Accounting Service; the Director, Defense Logistics Agency; and interested congressional committees and members. We will make copies available to others upon request. In addition, this report will be available at no charge on the GAO web site at http://www.gao.gov. Please contact Gregory D. Kutz at (202) 512-9095 or kutzg@gao.gov, John J. Ryan at (202) 512-9587 or ryanj@gao.gov, or Steven J. Sebastian at (202) 512-3406 or sebastians@gao.gov if you or your staff have any questions concerning this report. To identify DOD contractors, we obtained a copy of Department of Defense’s (DOD) Central Contractor Registration (CCR) database as of February 2003 from the Defense Logistics Information Service (DLIS) in Battle Creek, Michigan. 
Because DOD does not have all contractor information in a single automated system, the CCR database provided the best available source of DOD contractor information. To identify DOD contractors with unpaid federal taxes, we matched contractor records from the CCR database to Internal Revenue Service (IRS) tax records using the tax identification number (TIN) fields, which resulted in about 27,100 matching records with nearly $3 billion in unpaid taxes. We used data mining software to select, match, summarize, and report on DOD and IRS records. We also identified over 5,000 contractors with potentially invalid TINs by matching the contractor employer identification number (EIN) and Social Security number (SSN) fields from CCR to IRS tax records, and by providing an electronic file of contractor SSNs from CCR to the Social Security Administration for matching against its records. To evaluate DOD and IRS processes and controls over the collection of unpaid federal taxes, we discussed this issue and reviewed current policies and procedures with the Defense Finance and Accounting Service (DFAS), IRS, and Financial Management Service (FMS) officials. We did not audit the effectiveness of the DFAS process for providing Mechanization of Contract Administration Services (MOCAS) payment information to Treasury Offset Program (TOP). In December 2003, we obtained information from IRS on FPLP collections from MOCAS payments through September 2003. We visited the IRS Processing Center in Kansas City, Missouri, to help determine the effectiveness of the continuous levy program. In addition, we reviewed related laws and regulations governing the levy program and TOP process. 
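The TIN-based match described above can be sketched in a few lines of code. The record layouts, field names, and dollar figures below are invented for illustration; they do not reflect the actual CCR or IRS file structures, and the real matching was performed with data mining software on millions of records.

```python
# Hypothetical sketch of a TIN-based match between contractor registration
# records (CCR-style) and tax-debt records (IRS-style). All data is invented.

ccr_records = [
    {"tin": "123456789", "name": "Alpha Services Inc."},
    {"tin": "987654321", "name": "Bravo Logistics LLC"},
    {"tin": "555000111", "name": "Charlie Supply Co."},
]

irs_records = [
    {"tin": "123456789", "unpaid_taxes": 250_000},  # dollars owed
    {"tin": "555000111", "unpaid_taxes": 40_000},
]

# Index the tax-debt records by TIN, then join contractor records against it.
debt_by_tin = {r["tin"]: r["unpaid_taxes"] for r in irs_records}

matches = [
    {**c, "unpaid_taxes": debt_by_tin[c["tin"]]}
    for c in ccr_records
    if c["tin"] in debt_by_tin
]

print(len(matches))                             # count of matching records
print(sum(m["unpaid_taxes"] for m in matches))  # total unpaid taxes matched
```

In the actual audit, the analogous join of the CCR database against IRS tax records produced about 27,100 matching records with nearly $3 billion in unpaid taxes.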
To determine the DOD business activity of the approximately 27,100 contractors, we obtained copies of fiscal year 2002 payment files for five of the largest DOD payment systems: MOCAS for Defense Contract Management Agency (DCMA) payments, One Bill Pay for Navy payments, Integrated Accounts Payable System (IAPS) for Air Force payments, and Computerized Accounts Payable System (CAPS) Clipper and CAPS Windows for Army and Marine Corps payments. These payment files represented about 72 percent of the $183 billion disbursed to DOD contractors in fiscal year 2002. The five payment files are used by the DFAS Internal Review group, in connection with the DOD Operation Mongoose program at the Defense Manpower Data Center in Seaside, California, to detect payment fraud and overpayments. Using TINs, we matched the approximately 27,100 contractors to the five fiscal year 2002 DOD payment files. We also estimated that potential fiscal year 2002 collections under an effective tax levy program would have been at least $100 million, using the assumptions that all unpaid federal taxes were referred by IRS to FMS for inclusion in the TOP database and that fiscal year 2002 payment information from the five DOD payment files was provided to FMS for matching against the TOP database. The estimated collection amount under an effective tax levy program was calculated as 15 percent of the DOD contractor payments, up to the amount of unpaid taxes. To identify indications of abuse or potential criminal activity, we selected a group of DOD contractors as case studies for a detailed audit and investigation. To select the case studies, we used the approximately 27,100 contractors described above and, using TINs, we matched the contractors to the five fiscal year 2002 DOD payment files. This matching yielded about 8,500 active DOD contractors, which we further reduced based on the amount of unpaid taxes, number of unpaid tax periods, and DOD contractor payments.
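The estimation rule described above (levy 15 percent of a contractor's payments, capped at that contractor's unpaid tax balance) reduces to a simple calculation. The sketch below uses invented integer dollar amounts; the function name and sample figures are illustrative, not drawn from the audit data.

```python
# Illustrative calculation of potential levy collections under the
# continuous levy authority: 15 percent of payments, capped at the tax owed.
# Integer dollar amounts keep the arithmetic exact; all figures are invented.

LEVY_RATE_PERCENT = 15

def potential_levy(payments: int, unpaid_taxes: int) -> int:
    """Levy up to 15 percent of payments, but never more than the tax owed."""
    return min(payments * LEVY_RATE_PERCENT // 100, unpaid_taxes)

contractors = [
    {"payments": 2_000_000, "unpaid_taxes": 500_000},  # capped by the 15% rate
    {"payments": 1_000_000, "unpaid_taxes": 90_000},   # capped by the tax debt
]

total = sum(potential_levy(c["payments"], c["unpaid_taxes"]) for c in contractors)
print(total)  # prints 390000
```

Applying this rule across the matched fiscal year 2002 payment files is what produced the estimate of at least $100 million in potential collections.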
We reviewed the IRS tax records and excluded contractors that had recently paid off their unpaid tax balances or were categorized by IRS as compliance assessments, and considered other factors before reducing the number of cases for study to 47. We selected 34 businesses and 13 individuals for further audit and investigation, and obtained copies of their automated tax transcripts from IRS as of May 2003. We reviewed the transcripts for any steps taken to resolve the unpaid taxes. We also obtained detailed tax records (e.g., tax returns, revenue officer notes, and collection and assessment files) and reviewed them at the IRS processing center in Kansas City, Missouri. We obtained additional information from IRS to determine what enforcement actions had been taken against these contractors. For the 47 case studies, we identified DOD contract awards using the DOD Electronic Document Access system, and had criminal, financial, and public record searches performed by our Office of Special Investigations (OSI). We provided the case study list to FMS to identify the tax and nontax debt in the TOP database. For some case studies, we contacted the responsible DOD contracting officers to inquire about the contractors’ goods or services, performance, and current DOD contracts. OSI investigators contacted some contractors and performed interviews in California, the District of Columbia, Maryland, Michigan, Pennsylvania, Texas, and Virginia. To determine whether DOD contractors with unpaid federal taxes are prohibited by law from receiving contracts from the federal government, we reviewed prior GAO work and relevant laws. We performed our work at DOD headquarters in Arlington, Virginia; the DFAS office in Columbus, Ohio; the DLIS in Battle Creek, Michigan; the Defense Manpower Data Center in Seaside, California; IRS and FMS headquarters in Washington, D.C.; and the IRS processing center in Kansas City, Missouri. Tables 2 and 3 provide data on 17 detailed case studies. 
Tables 4 and 5 show the 30 remaining business and individual case studies that we audited and investigated. As with the 17 cases discussed in the body of this report, we also found substantial abuse or potentially criminal activity related to the federal tax system during our review of these 30 case studies. The case studies involving businesses with employees primarily involved unpaid payroll taxes, some for as many as 62 tax periods. The case studies involving individuals primarily involved unpaid income taxes. In addition to the individuals named above, Tida Barakat, Gary Bianchi, Ray Bush, William Cordrey, Francine DelVecchio, K. Eric Essig, Kenneth Hill, Jeff Jacobson, Shirley Jones, Jason Kelly, Rich Larsen, Tram Le, Malissa Livingston, Christie Mackie, Julie Matta, Dave Shoemaker, Wayne Turowski, Jim Ungvarsky, and Adam Vodraska made key contributions to this report.
GAO was asked to determine (1) the magnitude of unpaid federal taxes owed by Department of Defense (DOD) contractors, (2) whether indications exist of abuse or criminal activity by DOD contractors related to the federal tax system, (3) whether DOD and the Internal Revenue Service (IRS) have effective processes and controls in place to use the Treasury Offset Program (TOP) in collecting unpaid federal taxes from DOD contractors, and (4) whether DOD contractors with unpaid federal taxes are prohibited by law from receiving contracts from the federal government. DOD and IRS records showed that over 27,000 contractors owed about $3 billion in unpaid taxes as of September 30, 2002. DOD has not fully implemented provisions of the Debt Collection Improvement Act of 1996 that would assist IRS in levying up to 15 percent of each contract payment to offset a DOD contractor's federal tax debt. We estimate that DOD could have collected at least $100 million in fiscal year 2002 had it and IRS fully utilized the levy process authorized by the Taxpayer Relief Act of 1997. As of September 2003, DOD had collected only about $687,000 in part because DOD provides contractor payment information from only 1 of its 16 payment systems to TOP. DOD had no formal plans at the completion of our work to provide payment information from its other 15 payment systems to TOP. Furthermore, we found abusive or potentially criminal activity related to the federal tax system through our audit and investigation of 47 DOD contractors. The 47 contractors provided a variety of goods and services, including parts or support for weapons and other sensitive military programs. The businesses in these case studies owed primarily payroll taxes with some dating back to the early 1990s. These payroll taxes included amounts withheld from employee wages for Social Security, Medicare, and individual income taxes. 
However, rather than fulfill their role as "trustees" and forward these amounts to IRS, these DOD contractors diverted the money for personal gain or to fund the business. For example, owners of two businesses each borrowed nearly $1 million from their companies and, at about the same time, did not remit millions of dollars in payroll taxes. One owner bought a boat, several cars, and a home outside the United States. The other paid over $1 million for a furnished home. Both contractors received DOD payments during fiscal year 2002, but one went out of business in 2003. The business, however, transferred its employees to a relative's company (also with unpaid taxes) and recently received DOD payments on a previous contract. IRS's continuing challenges in collecting unpaid federal taxes also contributed to the problem. In several case studies, IRS was not pursuing DOD contractors due to resource and workload management constraints. For other cases, control breakdowns resulted in IRS freezing collection activity for reasons that were no longer applicable. Federal law does not prohibit contractors with unpaid federal taxes from receiving federal contracts. OMB is responsible for providing overall direction to governmentwide procurement policies, regulations, and procedures, and is in the best position to develop policy options for prohibiting federal government contract awards to businesses and individuals that abuse the tax system.
Within DNN, two programs manage research and technology development projects: the DNN Research and Development (DNN R&D) program and the Nonproliferation and Arms Control (NPAC) program. These two programs have established program areas to focus their research and technology development activities. DNN R&D program areas. Within the DNN R&D program, two program areas pursue research and technology development projects: Nuclear Detonation Detection and Proliferation Detection. The Nuclear Detonation Detection program area develops and provides global monitoring capabilities for detecting foreign nuclear weapon detonations, in order to meet both treaty monitoring and military needs. The Proliferation Detection program area develops capabilities to detect special nuclear materials (which include plutonium and highly enriched uranium), to detect weapons production and movement, and to enhance transparent nuclear reductions and monitoring. These two program areas are each divided into three functional areas. The Nuclear Detonation Detection program area's three functional areas are: Space-based Nuclear Detonation Detection, which develops and builds space-based sensors for nuclear explosion detection; Ground-based Nuclear Detonation Detection, which produces and updates modeling and analysis capabilities—such as software codes and algorithms that help distinguish underground nuclear detonations from natural seismic events—and provides other technical support for the nation's ground-based nuclear explosion monitoring networks; and Forensics-based Nuclear Detonation Detection, which conducts R&D to advance capabilities in the field of nuclear forensics analysis.
The Proliferation Detection program area's three functional areas are: Nuclear Weaponization and Material Production Detection, which supports the development of technology to detect and characterize the production of nuclear weapons and related materials by foreign entities; Nuclear Weapons and Material Security, which, among other things, develops tools for nuclear security, treaty monitoring and verification, and operational interdiction and nuclear security efforts across NNSA; and Enabling Capabilities, which develops cross-cutting technologies applicable to multiple NNSA and interagency missions. NPAC program areas. Within the NPAC program, two program areas pursue technology development projects. International Nuclear Safeguards (Safeguards), among other things, develops technologies to detect and deter undeclared nuclear materials and activities. Specifically, Safeguards' activities include developing and transferring tools, technologies, and approaches to improve U.S., IAEA, and IAEA member states' capabilities in undertaking IAEA safeguards activities, such as monitoring uranium enrichment levels in nuclear facilities. Nuclear Verification (Verification), among other things, develops and deploys technologies to maintain the United States' ability to monitor and verify nuclear reduction agreements, detect treaty violations, and verify other nuclear nonproliferation commitments. The goals of the Verification program area include developing technologies to verifiably and irreversibly disable nuclear facilities in countries of concern. Figure 1 illustrates the structure of the DNN programs and functional areas that manage research and technology development projects. From fiscal year 2012 through fiscal year 2015, DNN R&D and NPAC obligated a total of more than $1.1 billion on 511 research and technology development projects. The DNN R&D program obligated more than $1 billion on 420 projects, and the NPAC program obligated about $73 million on 91 projects.
According to NNSA officials we interviewed, the Space-based Nuclear Detonation Detection functional area expenditures were mostly for production activities that resulted in technologies deployed to end users, rather than for basic and applied R&D, in contrast with the other DNN R&D program functional areas. DNN R&D and NPAC projects are undertaken at DOE and NNSA national laboratories and sites; these projects generally last about 3 years from beginning to end, according to program officials. Table 1 shows the number of projects undertaken and funding obligated in each program and functional area from fiscal years 2012 through 2015. According to NNSA officials, the end users of technologies resulting from DNN’s research and technology development programs include DOD, including its Defense Threat Reduction Agency, U.S. Strategic Command, and Air Force Technical Applications Center; DHS; State; IAEA and other international organizations, including the CTBTO; international governments; universities; and private industry, including small business. NNSA itself also uses some of the technologies that result from DNN’s research and technology development programs. The full extent to which research and technology development projects managed by NNSA’s DNN R&D and NPAC programs have resulted in advanced, transitioned, or deployed technologies is unclear because NNSA does not consistently track and document all of these project outcomes. However, by reviewing a range of information on a nongeneralizable sample of 91 DNN research and technology development projects, as well as interviewing end users of technologies that resulted from some of the projects, we were able to determine the outcomes of these projects. More specifically, we found that 88 of the 91 projects in the sample resulted in technologies being advanced—that is, the project progressed the technology itself or the scientific knowledge behind the technology. 
Additionally, we found that, among these 88 projects, 33 resulted in technologies being transitioned—that is, provided to users outside of the project team for further development or deployment. Finally, we found that 17 of these 33 projects resulted in deployed technologies—that is, we were able to confirm that a technology was being actively used in the field by a federal agency or foreign partner. The full extent to which NNSA's research and technology development projects result in advanced, transitioned, or deployed technologies is unclear because NNSA does not consistently track and document all of these project outcomes.

Both the DNN R&D and NPAC programs track and document projects' results in advancing technologies. For instance, the DNN R&D program uses an online system, WebPMIS, to track project costs and maintain an archive of project-related documentation. This documentation includes project reports that provide information on the scientific and technological advancements that result from a project. Similarly, the NPAC program provided documentation of technology advancement for our review. However, the DNN R&D program, which is by far the larger program, does not consistently track and document projects that result in technologies being transitioned or deployed. NNSA officials acknowledged that the DNN R&D program does not use the WebPMIS system, or otherwise collect and maintain information, to track and document whether its projects result in technologies that have been transitioned to or deployed by end users. As a result, the DNN R&D program was not able to readily provide us with information on the extent to which its projects result in transitioned or deployed technologies. In contrast, we found that the NPAC program tracks and documents the extent to which its safeguards- and verification-related projects have resulted in technology transition or deployment, as well as tracking scientific and technological advancements.
For example, NNSA officials provided us with information on all NPAC projects for the past 5 fiscal years that showed whether technologies that projects developed have been deployed; they also provided documentation of technologies that have been transitioned to end users. Both the DNN R&D and NPAC programs list technology transition—which includes providing technologies to end users, who may then deploy them—as a goal in strategic planning documents. In addition, NPAC program officials told us that their program evaluates its success in part based on the extent to which end users deploy technologies that the NPAC program develops. Similarly, DNN R&D program officials said that meeting end users' technology needs constitutes an important part of their program mission. Moreover, under federal internal control standards, management should design control activities to achieve objectives and respond to risks, and should record, communicate, and use quality information to enable managers to carry out internal control responsibilities and evaluate program performance in achieving key objectives.

Differing program emphases—both between the DNN R&D and NPAC programs and among the DNN R&D program's functional areas—may account for the inconsistencies we found in NNSA's tracking and documentation of transition- and deployment-related project outcomes. For instance, the NPAC program generally focuses on developing technologies at higher readiness levels to prepare them for transition and deployment in the field, according to NNSA officials. Conversely, most of the DNN R&D program's functional areas conduct R&D on technologies that are further from being deployment-ready because they require more basic work at lower technology readiness levels, with some exceptions, NNSA officials told us.
Notably, the DNN R&D program's Space-based Nuclear Detonation Detection functional area has a history, dating to the 1960s, of researching, developing, and producing successive upgrades to technologies used by the U.S. Atomic Energy Detection System and its space-based element, the U.S. Nuclear Detonation Detection System, to monitor and detect nuclear detonations, NNSA officials stated. Officials in the DNN R&D program office said that well-defined DOD technology requirements and delivery schedules drive these research, development, and production activities; as a result, the Space-based Nuclear Detonation Detection functional area has a stronger focus on technology production and deployment than the other DNN R&D program functional areas, which typically do not have similarly specific requirements and schedules.

NNSA officials also cited several reasons why the program does not consistently track and document these outcomes:

- It can be difficult to trace a link between a project and a given result—for example, when another agency builds on a project's work; when a discrete project does not lead directly to an outcome, but results in a "spin-off" project or is part of a related portfolio of projects that does lead to the desired outcome; or when there is a significant period of time between the end of a project and the deployment of any resulting technology.

- Project managers at DOE and NNSA national laboratories know the outcomes of projects and can provide information to NNSA officials when needed.
Information about some nonproliferation technologies is classified, with the result that NNSA is not informed about their uses and users. These factors notwithstanding, by not consistently tracking and documenting all relevant outcomes of projects across DNN’s research and technology development program areas, NNSA is unable to demonstrate to Congress and the public the full extent to which its projects result in technologies that are transitioned, deployed, or otherwise used toward fulfilling the nation’s nuclear nonproliferation goals. Moreover, not maintaining documentary information about project results may affect control activities by limiting the quality of information that NNSA has available to evaluate project performance; it also may reduce opportunities for sharing knowledge, and may make DNN vulnerable to the loss of institutional memory when personnel involved with a project retire or leave the agency. By gathering information from various sources about our sample of 91 selected DNN research and technology development projects, we were able to determine that some projects resulted in technologies being advanced, transitioned, or deployed, or that they resulted in more than one of these outcomes. More specifically, we found that nearly all of the projects we selected (88 of 91) advanced the technology itself or the scientific knowledge behind the technology. Of those 88 projects, we found that 33 also resulted in the advanced technology being transitioned to users outside of the project team for further development or deployment. Among the 33 projects that resulted in a transitioned technology, 17 also resulted in active technology deployment—that is, field use by a federal agency or foreign partner. Figure 2 below illustrates these results. Almost all of the selected projects in our sample (88 of 91) advanced technologies in some way. 
Examples of projects that advanced technologies include projects in which NNSA scientists built instrument hardware, made progress in developing new technology combinations or applications, developed an algorithm or model to further data analysis, or made scientific advances that improved nonproliferation analysis and operations. Some of the projects in our sample that have resulted in technologies being advanced have not resulted in technologies being transitioned to end users (55 of 88 such projects). According to NNSA officials and scientists we interviewed, technology transition has not occurred on these projects for a variety of reasons, including:

- The technology is not developed enough to be transitioned. For example, a DNN R&D Proliferation Detection project developed a technology to provide new capabilities in imaging gamma rays. The imager is a laboratory prototype and needs further development, according to a scientist we interviewed at Oak Ridge National Laboratory.

- The project has not connected with an end user who wants to receive the technology. For example, a DNN R&D Proliferation Detection project developed a hand-held fast-neutron generator, but NNSA has yet to find an end user interested in the technology, according to the manager of the Nuclear Threat Reduction Program at Lawrence Livermore National Laboratory.

- The project developed a technology to meet a future nonproliferation need rather than an existing requirement. For example, a project within the NPAC Verification program developed a fast-neutron imaging technology that could be used in stockpile warhead verification, should a future arms control agreement call for warhead inspections. The technology currently is not being used in a verification context, but NNSA officials said it is in use at NNSA sites to compile benchmark data on nuclear weapons; these data may later be used in verification and other national security missions.
We determined that 33 of the 91 projects in our sample resulted in technologies that were transitioned to end users. These 33 projects include 17 projects that resulted in deployed technologies, since a technology is transitioned to an end user before it is deployed. For the remaining 16 projects, we found that either the technologies had not been deployed by an end user (13 of the 16) or we could not confirm with end users that the technologies had been deployed (3 of the 16). Specifically:

- We counted 13 of the 16 projects and their resulting technologies as transitioned but not deployed because the end users need to take certain steps before the technologies can be deployed, according to NNSA and end user agency officials. For example, one DNN R&D Nuclear Detonation Detection project transitioned an instrument to a DOD end user. However, the instrument has not been deployed because DOD needs to find funds to further develop the technology, which, according to DOD officials, was at a low level of technology readiness when transitioned. In another case, a DNN R&D Ground-based Nuclear Detonation Detection project transitioned software to a DOD end user, but the user is still evaluating the software before integrating it into its systems.

- For the remaining 3 of the 16 projects, we could not confirm whether technologies that resulted from the projects and were transitioned to end users are being used.

We were able to confirm that 17 of the 91 projects in our sample resulted in actively deployed technologies. More specifically, we found that 13 of the 63 DNN R&D projects and 4 of the 28 NPAC projects in our sample resulted in deployed technologies. Examples of the projects that resulted in deployed technologies include:

- An NPAC Safeguards project that developed an online enrichment monitor (OLEM) for use in uranium enrichment monitoring.
As we reported in June 2016, the IAEA is using the OLEM in the Natanz Fuel Enrichment Plant in Iran to confirm that enrichment levels are at or below 3.67 percent, per Iran's commitment under the agreement known as the Joint Comprehensive Plan of Action (JCPOA). IAEA has previously used enrichment monitors, but the OLEM is a newer technology that improves upon older monitoring systems, as we found in our June 2016 report.

- DNN R&D Space-based Nuclear Detonation Detection projects that resulted in global burst detection sensors that are in use by the Air Force on satellites to detect, identify, and precisely locate nuclear explosions.

- DNN R&D Ground-based Nuclear Detonation Detection projects that resulted in enhanced understanding of waves produced by underground nuclear explosions. The Air Force, the CTBTO, and international scientists analyze these waves to locate underground nuclear explosions and establish their yield.

Of the 17 projects that resulted in deployed technology, 9 were active or ongoing projects and 8 were retired or in closeout, meaning that all of the research work was completed. Similarly, of the 33 projects that resulted in transitioned technology, 15 were active or ongoing projects and 18 were retired or in closeout. Projects resulting in deployed and transitioned technologies can continue to be active for the following reasons:

- In some cases, project officials provide maintenance and support to the technology after it has been deployed. For example, the OLEM unit has been deployed, but the project is still active because NNSA is continuing to address end user maintenance needs, according to agency officials.

- In other cases, the project has deployed or transitioned a technology but has yet to accomplish research goals. For example, a Nuclear Detonation Detection project has deployed a technology to a DOD end user, but research continues because the project has not met all of the research objectives.
See appendix II for a list of the projects in our sample that resulted in deployed technologies. Appendix III provides information on the number of advanced, transitioned, and deployed technologies that resulted from the 91 projects we selected for our sample, subdivided by DNN program and functional areas. NNSA’s DNN R&D and NPAC programs use publicly reported measures to assess program-level performance, and both programs review project documentation and communicate with project managers to assess project-level performance. Clarity limitations with the DNN R&D program’s publicly reported measures, however, make them difficult to interpret, and improvements in NNSA’s final project documentation could enhance assessment of project performance. NNSA reports publicly on the DNN R&D and NPAC programs’ performance using measures published in DOE’s annual budget requests, in response to requirements in GPRAMA; however, limitations with the clarity of the DNN R&D program’s measures may make it difficult for users, such as Congress, to understand the targets that NNSA has established for its research and technology development programs, as well as how NNSA has measured performance against these targets. The NNSA budget materials we reviewed present a total of six GPRAMA measures associated with the DNN R&D and NPAC programs. Specifically, the DNN R&D program reports on five GPRAMA measures covering both the Nuclear Detonation Detection and Proliferation Detection program areas. The NPAC program reports on one measure for its Safeguards program area, but does not have a GPRAMA measure for its Verification program area. The six GPRAMA measures identify specific performance timeframes and endpoint targets, and the publicly reported information summarizes NNSA’s assessment of the programs’ performance against the targets. 
Appendix IV shows the NPAC and DNN R&D performance measures as presented in NNSA's budget request for fiscal year 2017, including endpoint targets and NNSA's assessment of the programs' performance against these targets. We found that the DNN R&D program's performance measures are unclear because the program does not define targets or explain its assessments of performance against the targets in sufficient context to allow users to interpret the measures or performance assessments. For example, regarding three of the DNN R&D program's measures:

- The Nuclear Detonation Detection measure tracks percentage of progress against an annual index summarizing the status of all NNSA Nuclear Detonation Detection R&D deliveries that improve the nation's ability to detect nuclear detonations.

- The Nuclear Weapons and Material Security measure tracks cumulative percentage of progress toward demonstrating improvements in special nuclear material detection, warhead monitoring, chain-of-custody monitoring, safeguards, and characterization capabilities.

- The Nuclear Weaponization and Material Production Detection measure tracks cumulative percentage of progress toward demonstrating improvements in detection and characterization of nuclear weapons production activities.

For these three measures, the baseline criteria for assessing performance—the annual index and the cumulative percentages of progress—are not defined, and no justification is presented to clarify how NNSA concluded that it had met its performance targets. Without such context, the measures provide statements of NNSA's assessment of its own performance, but do not provide users with information they can use to evaluate whether NNSA's assessments are valid and justified.
Previous GAO reports have identified key attributes of successful performance measures, including that they be clearly stated to enable entities to assess their own performance and to enable stakeholders to determine whether a program is achieving its goals. In addition, our December 2011 report on NNSA program management and coordination challenges in the area of nuclear nonproliferation identified concerns with the clarity of performance measures used by the DNN programs we review in this report. Specifically, we found that several of the DNN R&D program’s performance measures were linked to secondary criteria that are classified, official use only, or otherwise not publicly stated, making it difficult for third-party users to interpret the measures and discern whether the criteria are appropriate, sufficient, and up to date for tracking purposes. We recommended in our 2011 report that NNSA clarify these measures. NNSA neither agreed nor disagreed with this recommendation. NNSA officials told us, when we interviewed them for this report, that they do not agree that the DNN R&D program’s performance measures are unclear, for two reasons. First, according to NNSA officials, NNSA provides supplemental information for Office of Management and Budget (OMB) users of the performance information that explains the targets and how performance is measured. Specifically, NNSA officials told us that the DNN R&D program provides additional documentation to OMB— some of which the officials also provided to us for review—that includes information explaining the progress against annual targets under each of the program’s performance measures. However, most of the information that NNSA provided on the DNN R&D program is classified or official use only. 
Second, an NNSA official told us that the DNN R&D program performance measures are not unclear because the measures are used primarily to inform internal program managers who are familiar with classified requirements against which progress is assessed, rather than to provide information to third-party users. However, there may be potential external users of the performance information—such as congressional decision makers and other external parties—who are not familiar with the baseline requirements and may have difficulty interpreting the level of performance achieved by the DNN R&D program. In the current review, the clarity issues we found with the DNN R&D program’s performance measures were similar to those we identified in December 2011. We continue to believe that NNSA should clarify these publicly reported measures, so that users of the performance information—including Congress and the public—have a sufficient basis to judge the programs’ performance in meeting the United States’ nuclear nonproliferation technology needs. NNSA assesses the performance of the DNN R&D and NPAC programs’ projects by reviewing project documentation and communicating with project managers, but improvements in NNSA’s final project documentation could enhance assessment of project performance. To assess project performance, NNSA officials said that both programs track progress during the project by communicating with project managers at the national laboratories and by reviewing documents, such as quarterly project reports, that provide information on project performance. According to NNSA documentation we reviewed and NNSA officials we interviewed, the DNN R&D and NPAC programs establish baseline targets for projects’ scope of work and completion date in project plans. In some cases, initial project plans are updated—for example, to incorporate new or modified project objectives. 
At the end of the project, project managers at the national laboratories produce final project reports for program managers to review. NNSA officials stated that these final project reports describe a project's technical results. However, the officials said that the final reports do not document their assessment of project performance against the baseline targets established in the initial project plans. They also said that there is no common template for final project reports. We confirmed this in our review of several final project reports. NNSA officials provided several reasons why they do not document their assessment of project performance against baseline targets or use a common template for final project reports. For example, according to NNSA officials:

- Program managers communicate with project managers at the laboratories to ensure that projects are proceeding as planned.

- Restating the baseline targets in final project reports would be duplicative of information contained in other sources, such as the WebPMIS database.

- The varying formats of the laboratories' final reports do not affect program managers' ability to understand the technical outcomes of the projects.

According to NNSA officials, initial baselines may change through "progressive baselining"—which entails changing the initial project plans when discoveries warrant or course corrections are needed, subject to a defined change control process and the approval of the federal program manager.

Federal internal control standards specify that managers should use quality information to make informed decisions and evaluate performance; that managers need to compare actual performance to planned or expected results; and that entities should record and communicate information to managers who need it to carry out their internal control responsibilities.
Moreover, the standards specify that program managers need data to determine whether their programs are meeting their goals for accountability for effective and efficient use of resources. Documenting assessments that compare final project performance results against baseline targets for scope of work and completion date—whether through a common template for final project reports or by other means—could enhance NNSA's ability to manage its programs in accordance with these standards.

Both the DNN R&D and NPAC programs use similar five-step processes to decide which research and technology development projects to pursue. According to NNSA officials and stakeholders we interviewed at technology end user agencies, NNSA collaborates with end users throughout the decision-making and project selection process. In our review of program documents and interviews with agency officials, we found that the DNN R&D and NPAC programs generally take the following steps in their project selection processes:

Step 1: DNN establishes project requirements. According to NNSA officials, DNN R&D and NPAC project requirements flow from national policy documents, such as the Nuclear Posture Review; additionally, NPAC project requirements flow from international sources such as the Treaty on the Non-Proliferation of Nuclear Weapons (NPT); Comprehensive Safeguards Agreements; Additional Protocols; and the IAEA Department of Safeguards Long Term R&D Plan. To identify the technical capabilities that are required to meet the requirements established in these documents and sources, NNSA officials develop more detailed plans, such as the DNN R&D program's goals, objectives, and requirements (GOR) documents. The GOR documents specify requirements for five of the DNN R&D program's six functional areas—all but the cross-cutting Enabling Capabilities functional area.
Additionally, each GOR document has a corresponding technology roadmap, which establishes detailed technical requirements to meet the needs identified in the GOR document. Similarly, the NPAC program develops needs documents that identify specific needs and corresponding capabilities to develop; NPAC uses these documents to guide the proposal selection process. NNSA consults with interagency stakeholders to inform the requirements specified in the GOR documents and needs documents.

Step 2: The DNN programs issue annual calls for proposals. Both programs issue annual calls for proposals, which specify the technical requirements proposals should address. The programs also require that those who submit proposals clearly explain how the proposed project relates to identified project requirements. According to NNSA officials, the DNN R&D program typically issues its calls for proposals in mid-October and NPAC issues its calls for proposals during the spring. The DNN R&D program's calls for proposals specify weighted criteria against which proposals are reviewed, including mission relevance, scientific and technical merit, and budget. The NPAC program's calls for proposals differ between its two program areas. The Safeguards program area provides explicit evaluation criteria including cost effectiveness, technical maturity, and deployment potential. Calls for proposals in the Verification program area do not provide explicit evaluation criteria, but they provide information on program goals and areas of technical consideration for the current budget year. They also require applicants to describe the proposed work, including activities, milestones, and deliverables.

Step 3: National laboratories submit informal project proposals. Scientists at the national laboratories respond to the call for proposals by submitting informal project proposals, known as white papers, to the NNSA program offices.
The white papers describe proposed projects to demonstrate new capabilities or technologies sought in the calls for proposals. As specified in the DNN R&D program's calls for proposals, the program uses a three-tiered system to rank proposals—specifically, by designating proposals as hot, warm, or cold—and provides comments to improve the proposals or future white paper submissions. According to an NNSA official, the DNN R&D program returns the informal project proposals to the original submitters. All hot and warm proposals may be amended in response to program officials' comments and then resubmitted in the next step of the process as formal project proposals. NNSA officials stated that cold proposals receive no further consideration. NNSA officials within the NPAC program stated that they use a ranking system that is similar to the DNN R&D program's system; the officials also provided documentation that shows their rationale for selecting or not selecting projects for further consideration. Both programs seek to provide feedback within a general time frame. The DNN R&D program aims to provide feedback within 2 weeks of receiving the white papers. According to NNSA officials, NPAC's Safeguards program area provides feedback on the white papers about 3 to 3 1/2 months from the date of submission, and the Verification program area provides feedback about 2 1/2 months from the date of submission. According to NNSA officials and interagency stakeholders we interviewed, interagency stakeholders are consulted during this step of the process, as appropriate.

Step 4: Laboratories submit formal project proposals. Scientists at the national laboratories amend hot and warm white papers and resubmit them as formal proposals for NNSA's consideration.

Step 5: DNN selects projects in consultation with laboratories and end users.
NNSA officials identify the formal proposals that best meet the criteria and requirements specified in the calls for proposals, considering input provided by laboratory officials, interagency stakeholders, and potential end users on each proposal. According to NNSA officials, after discussing budgetary considerations and current, prioritized needs with interagency stakeholders and potential end users, program officials develop a final, ranked list of projects selected to receive funding and support. Program officials then circulate this final list to interagency stakeholders and potential end users. Figure 3 shows the general five-step selection process used by DNN's programs.

NNSA's research and technology development programs make vital contributions to national security, and the projects they support address a range of important proliferation detection and monitoring needs, including improving abilities to detect covert nuclear material production, nuclear explosions, and potential violations of IAEA safeguards agreements. It is not clear, however, how often NNSA's research and technology development projects—especially those supported by the DNN R&D program—result in transitioned or deployed technologies because NNSA does not consistently track and document such information. We acknowledge that DNN's research and technology development programs pursue a broad range of scientific tools and capabilities, with the goal of ensuring that the United States remains prepared to deal with unanticipated nonproliferation challenges as they emerge. Therefore, we recognize that not all R&D projects, such as projects conducted at low levels of technology readiness, are intended to result in a deployed technology, and that a higher deployment rate does not necessarily indicate a more successful R&D program, or vice versa.
Moreover, we recognize that in cases where technology transition and deployment are project goals, certain challenges complicate DNN’s efforts to track the deployed technologies that result from its projects. Nevertheless, we were able, as part of our review of selected projects, to track projects’ end results, including end users’ deployment of technologies that result from the projects. Moreover, not tracking such end results may affect control activities by limiting the quality of information that NNSA has available to evaluate project performance. More consistently tracking and documenting the transitioned and deployed technologies that result from its projects would help ensure that NNSA maintains and uses quality information to evaluate its performance and achieve its objectives, in keeping with federal internal control standards. Doing so would also facilitate knowledge sharing within DNN, and it would provide a means by which to present valuable information to Congress and other decision makers about the programs’ results and their overall value. In addition, better tracking of project results by the DNN R&D program may have the salutary benefit of providing information to the program that could allow it to develop clearer program performance measures, as we recommended in December 2011. The DNN R&D program’s performance measures continue to have clarity limitations similar to the ones we identified in the December 2011 report, because understanding the publicly reported performance information depends on users of the information having access to or being familiar with criteria in official use only or classified documents. We continue to believe that it is important for NNSA to implement our December 2011 recommendation that NNSA clarify the DNN R&D program’s publicly reported performance measures, so that Congress and other users of these measures have a sufficient understanding of DNN programs’ status and progress. 
By documenting assessments that compare the final results of a project against the baseline targets for scope of work and completion date that are established in initial plans for each project, NNSA could improve the quality of the information it needs to evaluate performance at the end of the project. Specifically, taking these steps could enhance managers’ ability to use quality data to assess actual project performance against planned or expected results, consistent with federal internal control standards. Such assessments are essential to ensure that the DNN R&D and NPAC programs’ research and technology development projects are effectively and efficiently using resources, another key aspect of federal internal control standards. If changes to the initial baselines are necessary—which may be the case in some instances—we believe that they should be made in such a way that initial project targets are still documented and available for review in evaluating the full progression of a project. We recommend that the NNSA Administrator take these two actions: Direct the DNN R&D program to track and document the transitioned and deployed technologies that result from its research and technology development projects, to the extent practicable. Direct the DNN R&D and NPAC programs to document, using a common template or other means, their assessment that compares the final results of each project against the baseline targets established in each project’s initial project plan. We provided drafts of this report to NNSA, DOD, DHS, and State for review and comment. In emails, DOD, DHS, and State stated that they had no comments on the report. In NNSA’s written comments, which are summarized below and reproduced in appendix V, NNSA neither agreed nor disagreed with our first recommendation and partially agreed with our second recommendation; however, NNSA stated that it plans to take actions in response to both recommendations. 
NNSA also provided technical comments, which we incorporated as appropriate. In its written comments, NNSA did not state whether it agreed or disagreed with the recommendation that, to the extent practicable, the DNN R&D program track and document transitioned and deployed technologies resulting from the program’s projects, but it described planned actions consistent with the recommendation. NNSA stated that our report did not clearly or fully disclose reasons that the DNN R&D program does not track and document transition and deployment of technologies resulting from its projects. Specifically, NNSA commented that the DNN R&D program’s space-based activities—unlike the other DNN R&D program areas—are production and deployment oriented, and that it would therefore be misleading to track individual projects in this area that are transitioned and deployed. Regarding the other DNN R&D program areas, NNSA commented that it does not have ongoing insight into transition and deployment outcomes for various reasons—for example, due to classification barriers regarding the transition and deployment of certain technologies. NNSA also expressed concern that information on such outcomes may not be readily available or may not be cost-effective to obtain. Our report discusses a number of reasons why the agency may not consistently track the deployment- and transition-related outcomes of the DNN R&D program’s projects and notes the production and deployment orientation of the space-based project portfolio. However, we disagree that tracking the outcomes of these projects would be misleading. To the contrary, it is vital to track these outcomes, as the space-based program area receives the largest share of the DNN R&D program’s funding. 
Moreover, we believe that information on project outcomes should be readily available to NNSA program officials in the course of their regular interactions with the project managers at the national laboratories and the interagency end users of nonproliferation technologies—as was the case with projects in our sample. NNSA agreed that, where such information is readily available, reporting of transitioned and deployed technologies can provide information on program successes, and it identified actions it will take consistent with the recommendation. Specifically, NNSA stated it will complete an assessment by June 2017 of the DNN R&D program’s portfolio to determine to what extent transition and deployment data are readily available for specific projects and include additional information on those projects as part of its performance reporting. If implemented as planned, such an assessment would be a step toward ensuring that NNSA maintains and uses quality information to evaluate its performance and achieve its objectives, in keeping with federal internal control standards. NNSA stated that it agrees in part with our second recommendation—that NNSA document, using a common template or other means, its assessment that compares the final results of each DNN R&D and NPAC project against the baseline targets established in each project’s initial project plan. NNSA stated that it supports implementation of a common template for documenting project closeout to provide consistency in information reporting, but it noted that R&D projects are dissimilar to acquisition activities and that comparing final project results to an initial project baseline would be misleading. Specifically, for exploratory R&D projects, NNSA commented that project outcomes can be “significantly indeterminate” at the beginning of a project and stated that it uses an approach known as “progressive baselining” to adjust plans and benchmarks as research matures, where appropriate. 
NNSA stated that final project results should be compared to the final approved plans to ensure consistency in project documentation and to provide a meaningful assessment of project performance. NNSA agreed that information on the project’s initial baseline is important to understand the progression of the project. We acknowledge that adjustments to initial project plans may be appropriate in some cases, provided that the initial plans remain available and transparent and that decisions behind such revisions are well-documented, consistent with federal internal control standards. NNSA stated it will establish a common template by June 2017 and use it to compare projects’ final results to the final approved plans. It also stated that the template will require a brief description of significant changes between the initial and final approved project plans. If implemented as planned, the template could help NNSA to ensure that the DNN R&D and NPAC programs’ research and technology development projects are effectively and efficiently using resources, in keeping with federal internal control standards. We are sending copies of this report to the appropriate congressional committees; the Administrator of NNSA; the Secretaries of Defense, Energy, Homeland Security, and State; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have questions about this report, please contact me at (202) 512-3841 or oakleys@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. 
This report (1) evaluates the extent to which projects managed by the National Nuclear Security Administration’s (NNSA) research and technology development programs have resulted in advanced, transitioned, or deployed nonproliferation technologies, (2) evaluates how NNSA measures the performance of its research and technology development programs and projects, and (3) describes how programs in NNSA’s Office of Defense Nuclear Nonproliferation (DNN) decide which research and technology development projects to pursue. To determine the extent to which NNSA’s research and technology development programs have resulted in advanced, transitioned, or deployed nonproliferation technologies, we selected a nongeneralizable sample from research and technology development projects undertaken between fiscal years 2012 and 2015 by the two DNN programs responsible for such projects: the Office of DNN Research and Development (DNN R&D) and the Office of Nonproliferation and Arms Control (NPAC). As detailed below, we selected the projects in the sample based on funding level, technical focus area, and project status in order to cover a broad range of program activities. In total, the DNN R&D and NPAC programs provided information on 511 projects. Specifically, the DNN R&D program provided us with a list of 420 projects executed between fiscal years 2012 and 2015, and the two NPAC program areas that manage technology development projects—International Nuclear Safeguards (NPAC Safeguards) and Nuclear Verification (NPAC Verification)—provided us with a list of 91 projects that were executed between fiscal years 2012 and 2015. From this universe, we developed selection criteria to include projects from five of the major Department of Energy (DOE) and NNSA laboratories—Los Alamos National Laboratory, Oak Ridge National Laboratory, Sandia National Laboratories, Lawrence Livermore National Laboratory, and Pacific Northwest National Laboratory. 
Our selection criteria ensured variations in program areas and project outcomes. We generally limited our selection to the top 25 percent of projects in terms of total funding between fiscal years 2012 and 2015, and excluded all projects that were not led by one of the five national laboratories listed above. We also excluded projects in certain categories, such as projects under the program directors and university accounts. After these exclusions, 134 projects remained in the sample. The next step in our selection process was to group together similar projects, based on project status, focus area or type, age, and cost. We generally selected 2 projects from each group, to ensure that each combination of these characteristics was represented in our final sample. In cases in which a group contained only one project, we included only that project. For the DNN R&D program, we treated projects with the same laboratory, focus area, and status as a single group, resulting in 40 groups. For each group, we generally chose the 2 projects with the lowest project number (signifying the earliest start date), resulting in the inclusion of 63 projects. For the NPAC Safeguards program area, we treated projects with the same technology category (comparable to focus area) and project status as a single group, resulting in 13 groups. Because we did not have data to determine earliest start date, we generally chose the 2 projects with the highest total expenditures from each group, resulting in the inclusion of 17 projects. For the NPAC Verification program area, we treated projects with the same R&D category (comparable to focus area) and project status as a single group, resulting in 8 groups. Because we did not have data to determine earliest start date, we chose the 2 projects with the highest total expenditures from each group, resulting in the inclusion of 11 projects. 
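The grouping-and-selection steps described above can be sketched in a few lines of code. This is a minimal illustration only: the field names ("lab", "focus", "status", "num") and the helper `select_sample` are invented stand-ins for this review's project attributes, not GAO's or NNSA's actual tooling or data.

```python
# A minimal sketch of the stratified selection described above.
# Field names ("lab", "focus", "status", "num") are invented for
# illustration; they stand in for laboratory, focus area, project
# status, and project number.
from itertools import groupby
from operator import itemgetter

def select_sample(projects, group_keys, sort_key, per_group=2, reverse=False):
    """Group projects sharing the same values for group_keys, then keep
    up to per_group projects from each group, ordered by sort_key.
    Set reverse=True to pick the highest values (e.g., total
    expenditures) instead of the lowest (e.g., earliest project number)."""
    by_group = itemgetter(*group_keys)
    # groupby requires the input to be sorted by the same key.
    ordered = sorted(projects, key=by_group)
    sample = []
    for _, group in groupby(ordered, key=by_group):
        chosen = sorted(group, key=itemgetter(sort_key), reverse=reverse)
        sample.extend(chosen[:per_group])
    return sample
```

Under this sketch, the DNN R&D selection corresponds to calling `select_sample` with the project number as the sort key (earliest start first), while the NPAC selections correspond to sorting on total expenditures with `reverse=True`.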
This resulted in a final sample of 91 projects covering the DNN R&D and NPAC programs and all of their program and functional areas. Table 2, below, shows the number of projects we selected from each program and functional area. To determine the extent to which the 91 projects in our sample resulted in advanced, transitioned, or deployed technologies, we gathered project information from interviews and documentary sources. Specifically, we interviewed project officials, including the project manager, at the national laboratory that was the lead on the project, as well as interagency officials that received technologies transitioned from the DNN R&D and NPAC programs. Our interviews included officials from the Departments of Defense (DOD), Homeland Security, and State; the International Atomic Energy Agency (IAEA); and the Preparatory Commission for the Comprehensive Test Ban Treaty Organization. We also reviewed program and project data and documentation, including basic project data—such as project names and funding information—stored in DNN R&D’s WebPMIS database; program funding information; project plans; annual, quarterly, and final reports; and research presentations and publications. We reviewed the evidence to determine whether each project in our sample resulted in advanced, transitioned, or deployed technology or other outcomes. The outcomes that we identified are not mutually exclusive for any project. For example, one project could transition a technology to an end user and also result in a publication, or a project may have resulted in a research outcome that we did not identify because the outcome was not indicated in the evidence we collected. Finally, we reviewed documents that establish program goals, including strategic planning documents, as well as Standards for Internal Control in the Federal Government, which establishes federal standards for the use of information in achieving an entity’s objectives. 
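Because the outcome categories described above are not mutually exclusive, a natural way to represent them is as a set of flags per project with a tally across projects. The sketch below uses invented project names and outcome sets purely for illustration; it is not actual NNSA project data.

```python
# Illustrative tally of overlapping project outcomes; the project names
# and outcome sets below are invented, not actual NNSA project data.
from collections import Counter

projects = {
    "Project A": {"advanced", "transitioned", "deployed"},
    "Project B": {"advanced", "transitioned"},  # transitioned, not yet deployed
    "Project C": {"advanced"},                  # advancement only
    "Project D": {"assessment"},                # assessed an existing technology
}

# Count how many projects produced each outcome.
tally = Counter(outcome for outcomes in projects.values() for outcome in outcomes)

# Because the categories overlap, the per-outcome counts summed together
# exceed the number of projects.
```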
To evaluate how NNSA measures the performance of its research and technology development programs and projects, we reviewed externally reported and internal program- and project-level measures that NNSA uses to assess performance in DNN’s research and technology development efforts. For program-level measures, we focused our review on the publicly reported measures developed and reported on in NNSA’s budget materials in response to the GPRA Modernization Act of 2010 (GPRAMA). Specifically, we reviewed the GPRAMA measures for the DNN R&D and NPAC programs that NNSA presented in its budget materials for fiscal years 2015 through 2017. We also reviewed classified and official use only materials that, according to NNSA officials, NNSA shares with the Office of Management and Budget (OMB) in their discussions of the GPRAMA measures. The materials we reviewed included technology roadmaps, which define the technology pathways that NNSA follows to address portfolio requirements and develop funding priorities, and final reports that NNSA presents to OMB on the completion of activities it undertakes in connection with the GPRAMA measures. We also reviewed DOE’s Annual Performance Reports for fiscal years 2014 and 2015. In addition, we reviewed documents and interviewed NNSA officials regarding internal measures that DNN R&D and NPAC use to monitor program performance, including independent project performance reviews. To further evaluate DNN’s project-level performance measures, we reviewed information on the 91 selected projects in our nongeneralizable sample. Specifically, we reviewed documents that establish baseline targets for projects’ scope, cost, and completion date and documents that contain information on project results. We also interviewed DNN officials to obtain their views on program and project management and performance. 
Finally, we reviewed documents that establish criteria for successful performance measures, including our past reports on the key attributes of such measures, as well as federal internal control standards. To describe how DNN decides which projects to pursue, we reviewed documents that establish the nonproliferation mission needs of DOE and interagency and IAEA stakeholders in nonproliferation-related research and technology development. The documents we reviewed include DOD’s 2010 Nuclear Posture Review Report; the Executive Office of the President’s National Science and Technology Council’s Nuclear Defense Research and Development Roadmap, Fiscal Years 2013-2017; the White House’s National Security Strategy of February 2015; DOE and NNSA strategic plans; the IAEA Department of Safeguards’ Long-Term R&D Plan, 2012-2023; and IAEA’s Development and Implementation Support Programme for Nuclear Verification 2016-2017. In addition, we reviewed documents that establish program requirements, such as the DNN R&D program’s Goals, Objectives, and Requirements documents; the DNN R&D and NPAC programs’ calls for project proposals, including the project selection criteria provided in the proposal calls; and project proposals. We also interviewed NNSA and other agency officials and project managers at the national laboratories to obtain their views on program priorities and the decision-making processes used to define broad program areas and select projects for funding. To assess the reliability of the data we analyzed on the projects in our nongeneralizable sample, we interviewed NNSA officials and reviewed documentation, and we determined that the data were sufficiently reliable for presenting the information contained in this report on NNSA project characteristics and funding. We conducted this performance audit from November 2015 to February 2017 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Nuclear Detonation Detection (NDD) program area/Forensics-based Nuclear Detonation Detection functional area: 5 projects
NDD program area/Ground-based Nuclear Detonation Detection functional area: 8 projects
NDD program area/Space-based Nuclear Detonation Detection functional area: 10 projects
DNN R&D NDD program area subtotal: 23 projects
Proliferation Detection program area/Nuclear Weaponization and Material Production Detection functional area: 17 projects
Proliferation Detection program area/Enabling Capabilities functional area: 10 projects
Proliferation Detection program area/Nuclear Weapons and Material Security functional area: 13 projects
DNN R&D Proliferation Detection program area subtotal: 40 projects
DNN R&D subtotal: 63 projects
Total: 91 projects

In addition to the contact named above, William Hoehn (Assistant Director), Danny Baez, Antoinette C. Capaccio, Penney Harwell Caramia, Tara Congdon, John Delicath, Rob Grace, Ben Licht, Alexandria Palmer, Timothy M. Persons, Steven Putansu, Ron Schwenn, Kiki Theodoropoulos, Jack Wang, and Tonya Woodbury made key contributions to this report.
Nuclear proliferation is a national security threat. DNN's two research and technology development programs develop technical solutions to prevent nuclear proliferation. From fiscal years 2012 through 2015, these programs obligated over $1.1 billion for 511 projects. The DNN R&D program obligated over $1 billion of this amount. A House report included a provision for GAO to review the programs. This report evaluates, among other objectives, (1) the extent to which projects managed by NNSA's research and technology development programs resulted in advanced, transitioned, or deployed nonproliferation technologies and (2) how NNSA measures the performance of its research and technology development programs and projects. GAO reviewed documents on a nongeneralizable sample of 91 research and technology development projects, selected from all program areas to include projects with the largest amount of funding and to capture a broad range of the programs' technical functions. GAO also reviewed DNN's program objectives and publicly reported performance measures and interviewed NNSA officials and technology users at other agencies. The full extent to which research and technology development projects managed by two programs in the National Nuclear Security Administration's (NNSA) Office of Defense Nuclear Nonproliferation (DNN) have resulted in advanced (progressed technologies or science supporting them), transitioned (provided to users for further development or deployment), or deployed (used in the field) technologies is unclear because NNSA does not consistently track and document all of these outcomes. Specifically, the DNN Research and Development (DNN R&D) and Nonproliferation and Arms Control (NPAC) programs track and document technology advancements resulting from their projects, such as in project reports. 
However, NNSA officials acknowledged that DNN R&D—by far the larger program—does not consistently track and document whether its projects result in technologies transitioned to or deployed by end users. In contrast, the NPAC program tracks and documents these project outcomes. By not consistently tracking and documenting technology transition and deployment outcomes, NNSA is unable to demonstrate the full results of its projects. GAO gathered information from various sources on a sample of 91 projects. Of these, 88 projects advanced technologies by, for example, building instrument hardware or developing models for data analysis; the other 3 did not advance technologies but assessed potential applications of existing technologies. Among the 88 projects that advanced technologies, 33 also resulted in technologies being transitioned, including software used to analyze nuclear detonations. Finally, of these 33 projects, 17 also resulted in deployed technologies, including an enrichment monitoring tool used in Iran and space-based nuclear detonation sensors. Reasons that some technologies in the sample did not move beyond the advancement or transition stage included the need for further development or evaluation before the technology could be used. Limitations with the clarity of the DNN R&D program's publicly reported performance measures make them difficult to interpret, potentially hindering users' ability to determine the program's progress; better documentation of project performance against baseline targets may enhance NNSA's performance assessments. The DNN R&D program's performance measures have clarity limitations because they do not, for example, define measurement criteria or provide context justifying how the program determined that it met its performance targets; this may cause external users of the measures, including Congress, to have difficulty interpreting NNSA's assessment of performance. 
In a December 2011 report, GAO recommended that NNSA clarify its publicly reported measures for DNN's R&D program. GAO continues to believe that NNSA should do so. Regarding project-level performance, NNSA uses project plans to establish baseline targets for projects' scope and completion date, and tracks progress by communicating with project managers and reviewing documents such as quarterly and final project reports. However, NNSA officials said that final project reports do not document their assessment of performance against baseline targets and that there is no common template for final project reports, which GAO confirmed in a review of several final project reports. Documenting such assessments could enhance NNSA's ability to assess project performance against goals, consistent with federal internal control standards. GAO recommends that NNSA consistently track and document results of DNN R&D projects and document assessments of final project results against baseline performance targets. NNSA neither agreed nor disagreed with the first recommendation and partially agreed with the second, but agreed to take actions in response to both recommendations.
In Mexico, where, according to U.S. officials, the commercial sale or purchase of a firearm is prohibited and strict controls limit citizens’ access to firearms, illicitly trafficked firearms fuel drug trafficking violence, as we reported in 2009. According to data from ATF’s web-based firearms tracing system, eTrace, the majority of the guns seized and traced in Mexico—about 70 percent from 2010 through 2014—have origins in the United States. As we also reported in 2009, the Mexican government initiated a new national security strategy in 2006 to combat the growing power of criminal organizations and curb their ability to operate with impunity in certain areas of Mexico. The organizations countered government pressure with increased violence against law enforcement entities, and the government’s efforts also appeared to result in increasing conflicts among criminal organizations over lucrative drug trafficking routes. The United Nations Office on Drugs and Crime has noted that the Mexican government’s shift in strategy also affected trafficking routes in Central America. As it became more hazardous for traffickers to ship drugs, particularly cocaine, directly to Mexico via air and waterways, an increasing share of the drug trade began to move overland through Central America. Those routes entered Mexico through its shared southern land border with Guatemala and, to a lesser extent, Belize. According to the United Nations Office on Drugs and Crime, this change in trafficking patterns resulted in increased competition for territorial control among local organized crime groups. Over time, Mexican criminal organizations also increasingly moved south into Central America to gain control of trafficking routes. As a result, violence increased substantially in countries throughout Central America. In recent years, the people of Central America and Mexico have cited violent crime as one of the most important issues facing their countries. 
Many of the criminal organizations involved in the drug trade also traffic firearms across the region. Although most of the firearms seized and traced in Mexico transited across its northern border with the United States, firearms also travel in both directions across Mexico’s southern border. It is difficult to ascertain the volume of firearms trafficked across Mexico’s border with Guatemala and Belize; however, according to the Mexican government, firearm seizure rates in Mexico’s southern border states are low in comparison to those of northern border states and the rest of the country. According to a binational assessment conducted by U.S. and Mexican officials, many Central American countries lack the capability to trace firearms independently, which makes it difficult to determine the percentage of weapons seized and traced in Mexico that have an origin in Central America. Also, according to ATF officials, unlike Mexico, both Guatemala and Belize allow for the commercial sale and purchase of firearms, so the availability of legal firearms differs considerably across the three countries. Guatemala and Belize are much smaller countries than Mexico, and ATF data on seized and traced firearms indicate that the volume of firearms seized and traced in these countries is also much smaller than that in Mexico. For example, from 2010 to 2014, Mexico seized and traced about 83,000 firearms, while Guatemala seized and traced about 7,000 firearms and Belize seized and traced about 300 firearms. A number of U.S. agencies provide capacity-building assistance in the three countries to help address concerns associated with firearms trafficking, among other things. State: State’s Bureau of International Narcotics and Law Enforcement Affairs (INL) manages most of the funding for the Merida Initiative and CARSI, the two primary initiatives through which the U.S. 
government funds and manages activities to help address the problem of increasing crime and violence in Mexico and Central America, respectively. In cooperation with several other U.S. agencies, State is also responsible for the overall implementation of these two initiatives. State’s Bureau of Political-Military Affairs, Office of Weapons Removal and Abatement (PM/WRA) works to reduce the harmful, worldwide effects of at-risk, illicitly proliferated, and indiscriminately used conventional weapons of war, including small arms and light weapons. PM/WRA supports programs around the world that assist governments in securing or destroying abandoned or stockpiled munitions, with a goal of curbing illicit trafficking. In most cases, State does not directly implement counter-firearms activities but instead provides funding for other U.S. agencies or other implementers, such as international organizations or nongovernmental organizations, to implement the activities. As the funding organization, State maintains responsibility for oversight of these efforts. Department of Justice: For over 45 years, ATF has implemented efforts to combat arms trafficking within the United States and from the United States to other countries as part of its mission under the Gun Control Act. ATF is responsible for investigating criminal and regulatory violations of federal firearms laws, among other responsibilities. ATF traces U.S. and foreign manufactured firearms for international, federal, state, and local law enforcement agencies, to link a firearm recovered in a criminal investigation to its first retail purchaser. It is the only entity within the U.S. government able to trace firearms recovered from crimes in Mexico. ATF has four offices in Mexico and an office in El Salvador that provides assistance throughout Central America, including in Guatemala and Belize. 
Through these offices, ATF provides an international liaison to support ATF’s mission to interdict and prevent illegal firearms trafficking and combat violent criminal gangs. The Department of Justice’s International Criminal Investigative Training Assistance Program (ICITAP) works with foreign governments to develop professional and transparent law enforcement institutions that protect human rights, combat corruption, and reduce the threat of transnational crime and terrorism. ICITAP provides a wide range of public safety development expertise, including assistance in areas such as organizational development, criminal investigations, and forensics. Department of Homeland Security: CBP coordinates and supports foreign initiatives, programs, and activities with its external partners around the world. CBP strives to protect U.S. borders by implementing programs and initiatives that promote antiterrorism, global border security, nonproliferation, export controls, immigration, and capacity building. For over 30 years, ICE—and previously the U.S. Customs Service—has implemented efforts to enforce U.S. export laws. ICE agents and other staff address a range of issues, including combating the illicit smuggling of money, people, drugs, and firearms. ICE has offices in Mexico and Guatemala whose missions are to support domestic operations by coordinating investigations with foreign counterparts, disrupt criminal efforts to smuggle people and materials into the United States, and build international partnerships through outreach and training. 
Some of these activities relate directly to firearms trafficking, such as ATF firearms identification training, while others broadly support antitrafficking or border security efforts that include efforts to stem the trafficking of firearms as one of many goals. For example, State has provided nonintrusive inspection equipment to Mexico that can be used to scan vehicles and containers for drugs or other contraband, including firearms. These activities are largely funded by State and implemented by other U.S. agencies or other implementing partners, including international organizations or nongovernmental organizations. The activities include the following: Firearms training: ATF has an attaché in Mexico and a regional attaché in El Salvador who supports activities throughout Central America, including in Guatemala and Belize. Among their responsibilities is managing a series of training courses for local officials. In fiscal year 2014, ATF’s Mexico office managed 27 courses that trained over 1,200 students. These training courses covered a number of topics related to firearms and explosives and included 10 firearms identification courses. In the same year, ATF’s Central America office managed 3 courses that trained nearly 200 students. All 3 courses covered eTrace and 1 also covered investigative techniques. Provision and support of eTrace: ATF helps partner country governments improve their use of the eTrace system. In 2009, ATF launched a Spanish-language version of the system, which is used in Guatemala and Mexico. Belizean officials use the English version of eTrace. ATF officials work with host country officials to ensure that they understand how to use the system and to encourage them to input all crime weapons seized within the country into the system for tracing. Stockpile management: From 2010 through 2012, PM/WRA provided a grant to the OAS to assist in the destruction of excess firearms and ammunition in Guatemala. 
In 2012, PM/WRA also provided funding for enhanced physical security of stockpile storage facilities in Belize. Firearms marking equipment and training: Under a PM/WRA grant, from 2009 through 2014, the OAS implemented a program to help facilitate the tracing process by distributing firearms marking machines throughout Latin America and the Caribbean, including in Guatemala and Belize. The program also included training on how to use the equipment. The program provided five machines to Guatemala and one machine to Belize, based on an assessment of the needs of each country. Forensic training and assistance: The United States initiated several efforts to enhance the capacity of forensics labs, including their ability to conduct ballistics work, in each of the three countries. In Mexico, INL and ICITAP have provided support at the national and state levels, through two separate programs, to help forensics labs meet international standards. In Central America, INL supported a regional program in forensics training. Additionally, INL officials noted that INL has provided bilateral support to the forensics labs in both Guatemala and Belize, including bringing a ballistics expert to Belize for a yearlong detail in its forensics lab. According to INL officials, INL intends to extend this detail by another year. Additionally, the United States has supported the use of the Integrated Ballistics Identification System (IBIS) in forensics labs in each country. IBIS is designed to capture, file, and compare images of bullets and cartridge casings. Investigators can use the system in examining crime-related guns. INL has provided equipment, training, or both in all three countries to initiate or enhance the use of IBIS and has encouraged the countries to link their systems to those of neighboring countries to enhance law enforcement capabilities throughout the region. Support for specialized units: In all three countries, INL and other U.S. 
agencies have supported antitrafficking law enforcement units. In Guatemala, ICE has supported the creation and sustainment of a Transnational Criminal Investigative Unit—a unit of local law enforcement officers who receive training and work closely with ICE agents in investigating transnational crimes, including firearms trafficking. With support from INL, ICE's Mexico office also established two Transnational Criminal Investigative Units in 2015, according to U.S. officials. In Belize, INL and CBP have supported the creation and sustainment of a mobile interdiction team that combats trafficking of all illegal substances and materials, including firearms. In Guatemala, INL and ATF have also provided support to the Attorney General's office, including to a firearms-specific group within its organized crime unit. For example, Guatemalan officials noted that ATF has provided technical guidance and expertise on firearms-related investigations. According to U.S. and Guatemalan officials, INL and ATF have also provided training and equipment to support the creation of a firearms and explosives unit within the Guatemalan National Police. Nonintrusive inspection equipment: In Mexico, INL has provided nonintrusive inspection equipment and training at border ports of entry and other strategic locations to allow the Mexican government to scan and inspect passenger vehicles, cargo containers, and freight rail for firearms, among other things. CBP has also provided training on the use of this equipment. Specialized canines: In Mexico, INL, in coordination with CBP, has provided specialized canines with the ability to detect smuggled firearms, among other things, and CBP has also provided training in their use. In Guatemala, an INL official noted that INL has provided support to a local canine training school that has outfitted government units with canines capable of detecting firearms and drugs, among other things. A number of other U.S. 
efforts may touch on combating firearms trafficking without having this as an explicitly stated goal. For example, Belizean officials noted that support provided by the U.S. Coast Guard has helped them interdict firearms trafficked along the Caribbean coast. Additionally, the United States has provided justice sector support in Mexico that is intended to improve the judicial system's ability to prosecute crimes of all types, including those related to firearms trafficking. In addition, U.S. agencies, including CBP and ICE, provide assistance to each of the countries to strengthen border security. These efforts do not have a specific goal to counter firearms trafficking but complement antitrafficking efforts, according to Department of Homeland Security and State officials. In total, U.S. agencies obligated about $191 million in fiscal years 2010 through 2014 to support these efforts, most of which went to activities that were not specifically focused on countering firearms trafficking but included it as one of many goals (see fig. 2). State's INL provided the majority—over 90 percent of total obligations reported by U.S. agencies—of this funding through the International Narcotics Control and Law Enforcement appropriations account. Additionally, about 93 percent of the total funding, or about $177 million, went to activities in Mexico. Most of this—about $149 million—is attributable to two activities in Mexico that are not specifically focused on countering firearms trafficking: (1) the provision of nonintrusive inspection equipment and training and (2) assistance to the Mexican federal and state forensic laboratories. In total, U.S. agencies obligated about $23 million for activities that specifically focused on countering firearms trafficking. 
This includes about $8 million from INL to support activities in all three countries, nearly $14 million that ATF provided in support of its activities in Mexico, and about $2 million that PM/WRA provided for some of the efforts in Guatemala and Belize through the Nonproliferation, Antiterrorism, Demining, and Related Programs (NADR) appropriations account. About $7.5 million of the overall total went to regional activities in Central America that included other countries in addition to Guatemala and Belize. Consistent with PPD 23, U.S. agencies considered key factors in selecting counter-firearms trafficking activities. In PPD 23—a directive governing U.S. security sector assistance, including efforts to counter firearms trafficking—the administration laid out policy guidelines for planning, implementing, and monitoring security sector assistance. PPD 23 states that U.S. agencies should consider several key factors in planning security sector assistance, including partner country needs, absorptive capacity, sustainability, and other donor and other U.S. efforts. U.S. counter-firearms trafficking capacity-building efforts in Belize, Guatemala, and Mexico have largely focused on identified partner country needs. We found that the issues of concern to the governments of Central American countries, such as Guatemala and Belize, differ in some ways from those of the government of Mexico, while other concerns exist in all three countries. Guatemala, for example, like some other Central American countries, has leftover stockpiles of weapons and ammunition from past conflicts, resulting in more arms than are needed for military and law enforcement purposes. Additionally, in both Guatemala and Belize, some stockpiles are aging—and are, therefore, more volatile—or are poorly secured. Firearms originating in or imported into Central America have often lacked markings that allow law enforcement to trace their origins. In all three countries, the ability to trace firearms has been a concern. 
Countries did not have systems for tracking the source of firearms and either lacked access to or did not regularly use ATF's eTrace system. Additionally, capacity gaps in investigating firearms-related crimes and the porous borders between Belize, Guatemala, and Mexico have been concerns in all three countries. U.S. agency activities have largely focused on these needs. For example, agencies undertook stockpile management and firearms marking activities in both Guatemala and Belize and have initiated efforts to improve firearms tracing and investigations, as well as border security, in all three countries. U.S. and foreign government officials noted that they regularly meet to discuss how U.S. assistance can meet partner country needs. In Mexico, U.S. and Mexican officials instituted a practice under the current Mexican administration, through which all proposals for Merida Initiative assistance are discussed in regular meetings with representatives of a single office within the Mexican Interior Ministry. This provides a single contact through which all Merida Initiative assistance is coordinated and approved. In Guatemala and Belize, U.S. and host country officials described a process in which they meet to discuss needs on a regular basis. Our analysis of State and implementing partner documents shows that State and implementers considered absorptive capacity in selecting and designing programs. For example, in choosing not to recommend one proposal for a forensics program in Central America, INL noted that the proposal focused too heavily on advanced technological approaches, which may not be appropriately tailored to host country capabilities. In the interagency agreement between INL and ICITAP for a forensics program in Mexico, ICITAP said that it met with Mexican officials to discuss needs and assess the existing capacity of Mexican state labs in designing the program. 
Additionally, CBP noted that trainees for its nonintrusive inspection equipment training program would be selected based on their proficiency in using the equipment and on their work performance, and CBP developed a series of courses to build this expertise over time. Our analysis of documents and interviews with agency officials shows that agencies considered sustainability for counter-firearms trafficking activities, with some officials noting that ensuring sustainability can be challenging. Most of the interagency agreements we reviewed had a section on sustainability, indicating that U.S. agencies and other implementers considered how to create a sustainable program. In Mexico and Guatemala, U.S. officials noted that it can be difficult to sustain enhanced capacity because of regular turnover among host country officials. However, in some cases, U.S. agencies incorporated efforts to address this concern into their programs' design. For example, in Mexico, agencies have instituted train-the-trainer programs in which they train host country officials who then train others within their organizations. The goal of such programs is to establish a level of expertise among some officials to help ensure long-term sustainability of U.S. training efforts. The long-term sustainability of equipment has been a concern in the firearms marking program in Central America. In particular, an OAS official stated that the marking equipment provided in Central America began requiring maintenance shortly into the program, but State had not allocated funding for that purpose. PM/WRA officials noted, however, that the memorandums of understanding established with recipient countries specified that maintenance was the countries' responsibility. Foreign officials said that the United States is often the only donor supporting these types of counter-firearms trafficking efforts. 
Nonetheless, State officials said that they meet with other donors to ensure that efforts are not duplicative or conflicting. State also includes an assessment of other donors’ efforts in some strategic planning documents to ensure that planned U.S. efforts do not overlap with ongoing or planned programs by other donors. U.S. agencies indicated that they have generally considered other U.S. efforts in their planning. According to U.S. officials, coordination among U.S. government officials within each of the three countries is generally good. They noted that U.S. agencies within each country meet regularly and discuss ongoing activities to ensure that they are sharing information and coordinating efforts. In one case, a lack of communication between PM/WRA officials in Washington, D.C., and embassy officials in Belize resulted in delays to a program’s implementation. In 2012, PM/WRA provided $300,000 in obligated NADR funds to the U.S. Embassy in Belize for stockpile management, but less than one-third of it was expended by the embassy. State officials noted that embassy officials in Belize were unaware of the remaining obligated but unexpended funds until relevant officials informed them in spring 2015 that the funding needed to be spent or deobligated. PM/WRA officials noted that it is unusual for them to transfer funding directly to an embassy. They said they typically would provide the funding directly to a contractor or other implementing partner but provided it to the embassy in this case for the sake of expedience. Once they were aware of the unexpended NADR funds, embassy officials identified unmet needs for which these funds could be used and submitted a proposal to PM/WRA to provide additional security equipment to the Belizean government. U.S. 
agencies and other implementers established performance measures and targets for five of eight key activities we reviewed that assist in building capacity to combat firearms trafficking in Belize, Guatemala, and Mexico. According to Standards for Internal Control in the Federal Government, managers should compare actual performance against planned or expected results and analyze significant differences. In PPD 23 the administration also highlights the importance of monitoring and evaluating security sector assistance efforts to make resource allocation decisions. As we have previously reported, performance measurement allows organizations to track progress in achieving their goals and gives managers crucial information to identify gaps in program performance and plan any needed improvements. Table 1 outlines eight key counter-firearms trafficking activities in Belize, Guatemala, and Mexico—within the broader areas of effort presented in figure 1—and notes whether performance targets were established for each activity. Agencies and other implementers that established performance metrics and targets for these key activities did so as part of a broader performance management framework. They articulated an overall objective or goal for the activity and, to track progress toward these objectives and goals, developed performance metrics and targets that monitored specific actions that supported the objectives and goals. All of the performance measures and targets agencies and other implementers established for the key activities we reviewed were assessable. State guidance on establishing performance metrics outlines a variety of options for measuring success in various activities, such as training courses and advising or mentoring. The types of performance measures used for these activities varied in their degree of specificity. 
For example, one of the performance measures for CBP's activity to train Mexican agency officials on the use of nonintrusive inspection equipment is that trainees will demonstrate proficiency through an evaluation of practical exercises. The firearms destruction activity in Guatemala, managed by the OAS, specified the quantities of expired or unstable ammunition and weapons the program intended to destroy. Some agencies established performance measures for some of the key activities we reviewed but did not establish targets for them. As a result, we were unable to assess whether these activities met their goals. For example, although ATF tracks data on firearms-related training and outcomes associated with eTrace in Belize, Guatemala, and Mexico, the agency has not established performance targets for these activities, making it difficult for ATF managers and other decision makers to determine whether its counter-firearms trafficking efforts are successful. ATF officials noted that measuring the overall success of capacity-building activities is difficult because the efforts rely on actions taken by other governments. Additionally, they noted that the activities they undertake may differ from year to year, depending on partner country needs. State also did not initially establish measures and targets for its physical security and stockpile management program in Belize, but has since done so for the use of the program's remaining funding. Additional information about these activities follows. Firearms training (ATF): For firearms-related training, ATF tracks the number of training courses it manages in a fiscal year and the number of students who participate. ATF produces weekly activity reports for its efforts in Central America that recount law enforcement activities, referrals, and training or proposed training, among other things. ATF also collects student course evaluations and feedback provided by the students to the instructors. 
However, ATF has not established targets for these activities, such as the number or type of courses it plans to hold in a year. Doing so could help ATF leverage the existing feedback it collects, such as requests for longer courses or for a second, more in-depth course on a given topic. Targets could also help to ensure that ATF is providing enough of the specific types of courses that it and partner countries determine to be most important for countering firearms trafficking activities. eTrace support and training (ATF): ATF tracks the number of seized weapons traced through eTrace in each country, as well as the number of eTrace training courses conducted. ATF also tracks referrals that result from traces performed with eTrace. According to ATF officials, referrals are firearms traces that lead to investigations in the United States. In the past, ATF and State established targets for expanding the use of Spanish-language eTrace in Mexico in countrywide strategic planning documents. However, ATF officials noted that these targets were specific to the years following the rollout of Spanish eTrace and are no longer used. ATF currently does not have any targets related to the use of eTrace in Mexico or Central America. State officials noted that without information against which it can measure progress, it can be difficult to determine whether the investment in promoting the use of eTrace in Central America is worthwhile. ATF officials stated that it can be difficult to set targets for eTrace referrals because referrals depend on host country officials' inputting these data. However, this does not preclude ATF from establishing other targets related to eTrace, such as targets for expanding or consistently using eTrace. Tracking data against performance targets could potentially help ATF and State better understand the value of these efforts. 
Physical security and stockpile management (State): In 2012, in response to a weapons pilferage incident, the Department of Defense's Defense Threat Reduction Agency reviewed the condition of Belize's munitions storage facilities and made a number of equipment recommendations to bolster security. PM/WRA provided $300,000 in obligated NADR funds to the U.S. Embassy in Belize to implement these recommendations. As of September 2015, less than one-third of the $300,000 award had been expended and not all recommended equipment had been purchased. PM/WRA officials stated that they typically require implementers to develop performance measures and targets but could not provide evidence of whether targets were developed in this case. Officials stated that some of the funds were obligated but not expended because staff at the embassy were unaware that funds remained available following the initial procurement in 2012. The embassy developed a plan in summer 2015 for expending the remainder of the funds to avoid their deobligation. The plan includes a clear target of procuring and installing a new storage container at one Belizean military site. As of the second quarter of fiscal year 2015, agency reports show that the implementing agencies and organizations for the five activities with established performance targets had met or partially met their targets for all measures. Two of these activities are ongoing and three have been completed. Through interagency agreements or grant agreements, State requires the implementers it funds to submit quarterly reports on the progress of their activities. Our review of these assessments, as well as assessments completed by State, indicates that the two ongoing activities were meeting established goals for the firearms-related component of their training as of the second quarter of fiscal year 2015, as shown in table 2. These activities are part of the U.S. 
government’s efforts to increase the capacity of federal and state-level Mexican forensics laboratories, as described below. Forensic assistance to the Attorney General (ICITAP): ICITAP's 2014 Interagency Agreement with INL outlines performance measures for its assistance to the Forensic Laboratory of Mexico's Procuraduría General de la República, or Office of Attorney General (PGR). Each of the components that make up the project has a number of performance measures; however, we evaluated the firearms and tool marks component because of its direct relationship with firearms trafficking. As of the second quarter of fiscal year 2015, State assessed that this component was on target. The overall objective of the project is to achieve international accreditation for the PGR forensic laboratory in five core disciplines, including ballistics. One of the main goals for the firearms and tool marks component is compliance with international standards for testing laboratories. As of December 2015, according to ICITAP officials, ICITAP was preparing for an accreditation audit and projected that this component would be in compliance with international standards early in 2016. Initiated in 2010, the project made significant progress until the end of 2012, according to ICITAP. The project was slowed from December 2012 until May 2014, following the change in the Mexican presidential administration. Forensic assistance to Mexican states (ICITAP): In June 2013, ICITAP initiated this project to provide assistance to Mexican states' forensics laboratories. ICITAP's 2014 Interagency Agreement with INL outlines performance measures for this assistance. Each of the components that make up the project has a number of performance measures; however, we evaluated the firearms and tool marks component because of its direct relationship with firearms trafficking. 
State’s fiscal year 2015 second quarter and ICITAP's fiscal year 2015 third quarter progress reports do not explicitly provide an assessment of the firearms and tool marks component. However, State officials noted that in their judgment, this project is meeting its goals for the firearms and tool marks component. The overall objective of the project is to enhance the capabilities of the forensic laboratory system at the state level to adhere to international standards for testing laboratories and use forensic-specific standards, where appropriate, as the basis for development, resulting in up to 10 fully accredited forensic laboratories—1 or 2 in each political region of the country. The project is planned to run for 4 years, by the end of which ICITAP intends for all 32 state labs to have initiated the accreditation process. According to State's and implementers' progress reports, one of three completed activities for which performance measures and targets had been established fully met its goals. As shown in table 3, the OAS-managed firearms marking effort fully met its goals, and implementers partially met established goals for the other two activities. The progress made by agencies and implementers in achieving their established goals for these activities is described below. Nonintrusive inspection equipment training (CBP): CBP's goals for this activity were partially met. As of September 2014, State assessed that this program was performing slightly below target. Although the project was meeting most of its benchmarks and timeline, it was unclear whether one of the project's four performance targets would be met—incorporating successfully trained personnel into Mexico's cadre of instructors. CBP's 2013 Interagency Agreement with INL outlines performance measures for technical assistance and training on nonintrusive inspection equipment designed to interdict contraband, including firearms. 
One initial target was to train a cadre of 24 Mexican instructors; by the end of fiscal year 2014, CBP certified 13 instructors who planned to train 400 Mexican officials in the 2015 calendar year. State noted that the 13 instructors were well qualified with extensive experience in border management. However, as of the end of fiscal year 2014, it was unclear whether Mexican agencies would incorporate the successfully trained participants into their cadre of instructors. This project was initiated in April 2013 and ended in December 2014. Firearms marking (OAS): This OAS-managed regional project to promote firearms marking in Latin America and the Caribbean achieved all of its goals. The program was initiated in 2009 and completed in 2014 and included the participation of Guatemala and Belize. The goals of the project included creating a regional study of firearms marking laws and practices, distributing at least one firearm marking machine to every participating country, organizing a regional workshop on firearms marking, and organizing a roundtable on firearms marking. Guatemala was provided five marking machines—more than any other country in Central America—and Belize was provided one machine. According to State officials, Guatemala was provided more machines because of the high number of weapons in that country. In June 2014, the OAS reported that it had met all of the project's goals. Stockpile destruction (OAS): An OAS-implemented stockpile destruction program in Guatemala achieved one of its two goals. Initiated in September 2010, the project's goals were to destroy 250 tons of expired or unstable ammunition and 12,000 small arms/light weapons belonging to the Ministry of Defense. By December 2011, about 269 tons of ammunition had been destroyed, exceeding the goal. 
In February 2012, the Guatemalan government announced plans to create two new military brigades to counter illegal drug cartels, shifting the Ministry of Defense's stockpile destruction plans and rendering the second objective unattainable. In total, 2,091 weapons were destroyed—representing 17 percent of the goal of destroying 12,000 weapons. State has a process for identifying and addressing challenges to achieving program goals, but these efforts were not consistently documented in implementing agency assessments of the key activities we reviewed, as required. State's agreements with implementing agencies and organizations require that quarterly reports include a summary of any critical issues or challenges and a plan of action in response to them. According to State officials, an implementer may also communicate significant challenges to the State official overseeing its activity via conversations, meetings, and correspondence. For example, in Central America, agencies meet weekly to discuss program activities, which may include a discussion of any challenges. As a result, State officials noted that implementers may not include challenges in quarterly reports if they have been discussed in another venue. However, challenges and potential solutions should also be articulated in quarterly reports, according to State officials. We found that implementers' progress reports on the key counter-firearms trafficking activities we reviewed were inconsistent in identifying key challenges or presenting strategies to address these challenges. For example, a quarterly report for the OAS-managed firearms destruction effort in Guatemala discussed a shift in host government priorities that resulted in the delay of the destruction of firearms belonging to one government agency. In response, implementing officials modified their focus to instead destroy weapons managed by another branch of the Guatemalan government. 
Although the goal was not fully satisfied, this quarterly report comprehensively documented the challenge, the strategy for resolving the challenge, and the final outcome. In other cases, quarterly reports we reviewed did not identify challenges or plans for addressing them. For example, ICITAP’s fiscal year 2015 third quarter report for its assistance to the PGR laboratories did not discuss any challenges in delivering the assistance. Although State assessed the firearms and tool marks component of this program as meeting its targets, it also noted that other components of the program were below target. However, the quarterly report did not discuss any challenges or reasons why these components were not on target. Additionally, a State assessment of the nonintrusive inspection equipment training effort indicated that the biggest challenge to the program’s success was securing support from host government agencies to continue to allow the program’s trainees to train host country officials as a group and to assess whether Mexican agencies would incorporate the training into their internal curricula. CBP’s fiscal year 2014 fourth quarter report mentioned a challenge associated with interagency communication; however, it did not identify concerns about the host country integrating the training into its curricula as a challenge. According to State officials, agency quarterly reports are important for understanding the progress of an activity toward meeting its goals. They noted that including a discussion of challenges in quarterly reports helps to ensure that all relevant stakeholders are aware of potential issues. State officials noted that balancing resources across competing priorities can be difficult and also said they use implementers’ quarterly progress reports to inform resource allocation decisions. For example, if an activity is not meeting its goals, State may look to end the program and allocate resources to a different priority. 
Alternatively, if an activity is particularly successful, State may look for opportunities to expand it. Without information about challenges and plans for their resolution, State is missing an opportunity to gain valuable knowledge that could help facilitate future decision making about efforts to counter firearms trafficking. Building capacity to counter firearms trafficking is a priority for the U.S. government and the governments of Belize, Guatemala, and Mexico. The United States has worked well with partner countries to implement a variety of capacity-building activities focused on partner country needs. U.S. agencies have established performance metrics and targets for most of the eight key counter-firearms trafficking efforts in Belize, Guatemala, and Mexico that we reviewed, but ATF has not set targets by which it can measure progress for its activities to provide firearms training and support the use of eTrace. Without such targets, ATF management and other decision makers may have difficulty evaluating the success of these efforts and ensuring that they are focusing on the most pressing needs. According to State's and implementers' reports, other U.S. agencies funded by State have made progress on key activities to counter firearms trafficking, but they have not consistently reported on key challenges to meeting their goals or on the strategies they intended to use to address these challenges. Consistently identifying challenges and mitigation strategies in quarterly progress reports helps ensure that all relevant parties are aware of program risks and are able to make decisions with full information. Without this information, implementers may not fully meet their goals. Additionally, without this information, State, as the primary funder of these activities, is limited in its ability to maximize the use of U.S. resources. We recommend the following two actions to enhance U.S. 
agencies’ performance monitoring of counter-firearms trafficking activities: The Director of the Bureau of Alcohol, Tobacco, Firearms and Explosives should establish and document performance targets for the bureau’s key counter-firearms trafficking activities in Belize, Guatemala, and Mexico, as appropriate. The Secretary of State should work with other U.S. agencies and implementers to help ensure that quarterly progress reports identify key challenges and plans to address them. We provided a draft of this report to the Departments of State, Justice, and Homeland Security and OAS for comment. In its written comments, reproduced in appendix II, State generally concurred with our recommendation and stated that it will work with other agencies to implement it. The Department of Justice’s ATF provided its comments via email and stated that it concurred with our recommendation and plans to implement it. The Departments of Homeland Security and Justice also provided technical comments that we incorporated, as appropriate. OAS did not provide comments. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of State, the Secretary of Homeland Security, the Attorney General of the United States, the Director of the Bureau of Alcohol, Tobacco, Firearms and Explosives, and the OAS Secretary General. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6991 or farbj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. 
In this report, we examined (1) the activities undertaken by U.S. agencies to build partner capacity to combat firearms trafficking in Belize, Guatemala, and Mexico and the extent to which they considered key factors in selecting the activities and (2) the progress the United States has made in building partner capacity in Belize, Guatemala, and Mexico to combat firearms trafficking. To determine what activities U.S. agencies have undertaken and the extent to which U.S. agencies considered key factors in determining what activities to undertake to build partner capacity to combat firearms trafficking in Belize, Guatemala, and Mexico, we first identified U.S. activities that either directly or indirectly addressed firearms trafficking. We interviewed officials from the Departments of State (State), Justice, and Homeland Security—and their component agencies, as appropriate—and collected documentation on U.S. activities. We compiled a list of relevant activities based on these interviews, interviews with host country officials, and our review of documentation. We included an activity in the scope of our review if it specifically included firearms among the issues it was intended to address. Because other activities may also indirectly touch on firearms trafficking, the list may not be comprehensive. To understand broader strategies underlying these activities, we reviewed strategic planning documents specific to Belize, Guatemala, and Mexico, including State’s Integrated Country Strategies, as well as several interagency strategic and planning documents regarding U.S. engagement with Central America, including the U.S. Strategy for Engagement in Central America, the U.S. Strategy to Combat Transnational Organized Crime, and the National Southwest Border Counternarcotics Strategy. 
We collected funding data from State and the Department of Justice’s Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) for these activities for fiscal years 2010 through 2014 and, based on interviews with knowledgeable agency officials and a review of State’s relevant internal controls, determined that these data were sufficiently reliable for our purposes. We identified key factors for agencies to consider in Presidential Policy Directive 23, a presidential directive covering security sector assistance, including firearms trafficking. To determine whether agencies considered these factors, we reviewed agency documentation and interviewed U.S. and foreign officials. We interviewed U.S. officials and officials from the Organization of American States in Washington, D.C., and U.S. and Mexican officials in Mexico. Additionally, we conducted interviews with U.S. and host country officials in Guatemala and Belize and with the ATF Regional Attaché in San Salvador, who supports counter-firearms trafficking efforts in both Guatemala and Belize. To determine what progress the United States has made in building partner capacity in Belize, Guatemala, and Mexico to combat firearms trafficking, we identified eight key activities from the activities presented in the first objective. Our review of these eight key activities is not generalizable to all activities conducted by U.S. agencies and their implementing partners. We defined an activity as a key activity if (1) the activity had a specific component or objective to address firearms trafficking, (2) a substantial portion of funding was directed to the activity, or (3) U.S. or foreign officials identified the activity as a key effort to build partner capacity to combat firearms trafficking.
For each key activity, we reviewed program documentation—including grant agreements, interagency agreements (where applicable), and progress reports—and interviewed agency officials to determine whether the funding and implementing agencies had developed performance measures and targets for the activity. If an agency had developed performance measures and targets, we reviewed State’s and implementers’ progress reports to determine whether the agency had met its goals for the program. We reviewed ongoing efforts and completed efforts separately since ongoing efforts would not be expected to have met all of their goals prior to the program’s completion. For both ongoing and completed efforts, we based our assessment of whether the activity was on track for meeting its goals on agency and State progress reports. We did not independently verify the data included in these reports. We reviewed State’s interagency agreements and grant agreements with implementing partners to determine what reporting requirements existed for State-funded programs. We reviewed agency and other implementers’ quarterly reports to determine whether the reporting requirements were met. We conducted this performance audit from February 2015 to January 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Juan Gobel (Assistant Director), Kara Marshall, Qahira El’Amin, Julia Jebo Grant, Lynn Cothern, Charles Michael Johnson, Jr., Ashley Alley, Oziel Trevino, and Justin Fisher made key contributions to this report.
Trafficking of illicit materials, including firearms, is widespread across Mexico's more than 700-mile southern border with Guatemala and Belize. Such trafficking presents a challenge for law enforcement in all three countries and for U.S. security interests. State and other U.S. agencies, such as ATF, have provided support to build the capacity of their counterparts in these three countries to address problems related to firearms trafficking. GAO was asked to review U.S. support to the governments of Belize, Guatemala, and Mexico to stem firearms trafficking across their shared border. This report examines, for these three countries, (1) the activities undertaken by U.S. agencies to build partner capacity to combat firearms trafficking and the extent to which they considered key factors in selecting the activities and (2) progress the United States has made in building such capacity. GAO analyzed program documentation and conducted interviews with U.S., Belizean, Guatemalan, and Mexican officials. To examine progress, GAO selected a nongeneralizable sample of eight key activities based on a number of factors, including whether the activity addressed firearms trafficking. U.S. agencies and their implementing partners have undertaken a number of capacity-building activities that support counter-firearms trafficking efforts in Belize, Guatemala, and Mexico. The figure below outlines examples of the areas of effort under which these activities fall. Selected, in part, based on partner country needs, these activities include efforts to provide support in using the Bureau of Alcohol, Tobacco, Firearms and Explosives' (ATF) web-based firearms tracing system (eTrace) and providing forensics training, including on ballistics. Some of these activities, such as firearms identification training, relate directly to firearms trafficking, while others broadly support antitrafficking or border security efforts for which stemming the trafficking of firearms is one of many goals. 
Presidential Policy Directive 23 highlights key factors, including partner country needs, absorptive capacity, sustainability, and other U.S. and other donor efforts, as important in planning security sector assistance. Agencies considered these factors in determining what activities to fund. U.S. agencies and implementing partners have achieved many of their goals for eight key counter-firearms trafficking activities GAO reviewed, but could enhance their efforts to measure and report on progress. Agencies and implementers established performance measures and targets for five of these eight activities. Standards for Internal Control in the Federal Government states that managers should compare actual performance against expected results, highlighting the importance of such measures and targets. ATF tracks its activities but has not established performance targets for them, without which it is difficult to measure the success of its efforts. Two of the five activities GAO reviewed that established metrics and targets are ongoing and are meeting their goals, while three other activities were completed and met or partially met their goals, according to Department of State (State) and implementer reports. For activities it funds, State requires implementers to include a discussion of key challenges and strategies to address those challenges in quarterly reports. However, implementers' reports for activities GAO reviewed were inconsistent and did not always identify challenges or strategies for addressing them. Without this information, agencies risk not fully meeting their goals and may be unable to maximize the use of U.S. resources. GAO recommends that (1) ATF establish and document performance targets for its key counter-firearms trafficking activities in Belize, Guatemala, and Mexico, as appropriate, and (2) State work with other U.S. agencies and implementers to help ensure that progress reports identify key challenges and plans to address them. 
ATF and State agreed with these recommendations.
There can be little doubt that we can—and must—get better outcomes from our weapon system investments. As seen in table 1, the value of these investments in recent years has been on the order of $1.5 trillion or more, making them a significant part of the federal discretionary budget. Large programs have an outsized impact on the aggregate portfolio. For example, Joint Strike Fighter costs have now consumed nearly a quarter of the entire portfolio. Yet, as indicated in table 1, 39 percent of programs have had unit cost growth of 25 percent or more. Recently, we have seen some modest improvements. For example, cost growth has declined between 2011 and 2012. We have also observed that a number of programs have improved their buying power by finding efficiencies in development or production, and requirements changes. On the other hand, cost and schedule growth remain significant when measured against programs’ first full estimates. The performance of some very large programs is no longer reflected in the latest data as they are no longer acquisition programs. For example, the Future Combat Systems program was canceled in 2009 after an investment of about $18 billion and the F-22 Raptor program has completed aircraft procurement. In addition, the Ballistic Missile Defense System is not included in any of the analysis as those investments have proceeded without a baseline of original estimates, so the many difficulties experienced in the roughly $130 billion program are not quantifiable. The enormity of the investment in acquisitions of weapon systems and its role in making U.S. fighting forces capable warrant continued attention and reform. The potential for savings and for better serving the warfighter argue against complacency. When one thinks of the weapon systems acquisition process, the image that comes to mind is that of the methodical procedure depicted on paper and in flow charts.
DOD’s acquisition policy takes the perspective that the goal of acquisition is to obtain quality products that satisfy user needs at a fair and reasonable price. The sequence of events that comprise the process defined in policy reflects principles from disciplines such as systems engineering, as well as lessons learned and past reforms. The body of work we have done on benchmarking best practices has also been reflected in acquisition policy. Recent, significant changes to the policy include those introduced by the Weapon Systems Acquisition Reform Act of 2009 and the department’s own “Better Buying Power” initiatives which, when fully implemented, should further strengthen practices that can lead to successful acquisitions. The policy provides a framework for developers of new weapons to gather knowledge that confirms that their technologies are mature, their designs are stable, and their production processes are in control. These steps are intended to ensure that a program will deliver the capabilities required utilizing the resources—cost, schedule, technology, and personnel—available. Successful product developers ensure a high level of knowledge is achieved at key junctures in development. We characterize these junctures as knowledge points. While there can be differences of opinion over some of the specifics of the process, I do not believe there is much debate about the soundness of the basic steps. It is a clear picture of “what to do.” Table 2 summarizes these steps and best practices, organized around three key knowledge points in a weapon system acquisition. Our work over the last year shows that, to the extent reforms like the Weapon Systems Acquisition Reform Act and DOD’s Better Buying Power initiatives are being implemented, they are having a positive effect on individual programs.
For example, several programs we have reviewed are: making early trade-offs among cost, schedule, and technical performance; developing more realistic cost and schedule estimates; increasing the amount of testing during development; and placing greater emphasis on reliability. These improvements do not yet signify a trend or suggest that a corner has been turned. The reforms themselves still face implementation challenges such as staffing and clarity of guidance and will doubtless need refining as experience is gained. We have made a number of recommendations on how DOD can improve implementation of the Weapon Systems Acquisition Reform Act. To a large extent, the improvements we have seen tend to result from external pressure exerted by higher level offices within DOD on individual programs. In other words, the reforms have not yet been institutionalized within the services. We still see employment of other practices—that are not prescribed in policy—such as concurrent testing and production, optimistic assumptions, and delayed testing. These are the same kinds of practices that perpetuate the unsatisfactory results that have persisted in acquisitions through the decades, such as significant cost growth and schedule delays. They share a common dynamic: moving forward with programs before the knowledge needed to make decisions is sufficient. We have reported that most programs still proceed through the critical design review without having a stable design, even though we have made a number of recommendations on the importance of this review and how to prepare for it. Also, programs proceed with operational testing before they are ready. Other programs are significantly at odds with the acquisition process. Among these I would number the Ballistic Missile Defense System, Future Combat Systems (since canceled), Littoral Combat Ship, and airships.
We recently reported on the Unmanned Carrier-Launched Airborne Surveillance and Strike program which proposes to complete the main acquisition steps of design, development, testing, manufacturing, and initial fielding before it formally enters the acquisition process. The fact that programs adopt practices that run counter to what policy and reform call for is evidence of the other pressures and incentives that significantly influence program practices and outcomes. I will turn to these next. An oft-cited quote of David Packard, former Deputy Secretary of Defense, is: “We all know what needs to be done. The question is why aren’t we doing it?” To that point, reforms have been aimed mainly at the “what” versus the “why.” They have championed sound management practices, such as realistic estimating, thorough testing, and accurate reporting. Today, these practices are well known. We need to consider that they mainly address the mechanisms of weapon acquisitions. Seen this way, the practices prescribed in policy are only partial remedies. The acquisition of weapons is much more complex than policy describes and involves very basic and strongly reinforced incentives to field weapons. Accordingly, rival practices, not normally viewed as good management techniques, comprise an effective stratagem for fielding a weapon because they reduce the risk that the program will be interrupted or called into question. I will now discuss several factors that illustrate the pressures that create incentives to deviate from sound acquisition management practices. The process of acquiring new weapons is (1) shaped by its different participants and (2) far more complex than the seemingly straightforward purchase of equipment to defeat an enemy threat. Collectively, as participants’ needs are translated into actions on weapon programs, the purpose of such programs transcends efficiently filling voids in military capability. 
Weapons have become integral to policy decisions, definitions of roles and functions, justifications of budget levels and shares, service reputations, influence of oversight organizations, defense spending in localities, the industrial base, and individual careers. Thus, the reasons “why” a weapon acquisition program is started are manifold and acquisitions do not merely provide technical solutions. While individual participants see their needs as rational and aligned with the national interest, collectively, these needs create incentives for pushing programs and encouraging undue optimism, parochialism, and other compromises of good judgment. Under these circumstances, persistent performance problems, cost growth, schedule slippage, and difficulties with production and field support cannot all be attributed to errors, lack of expertise, or unforeseeable events. Rather, a level of these problems is embedded as the undesirable, but apparently acceptable, consequence of the process. These problems persist not because they are overlooked or under-regulated, but because they enable more programs to survive and thus more needs to be met. The problems are not the fault of any single participant; they are the collective responsibility of all participants. Thus, the various pressures that accompany the reasons why a program is started can also affect and compromise the practices employed in its acquisition. I would like to highlight three characteristics about program funding that create incentives in decision making that can run counter to sound acquisition practices. First, there is an important difference between what investments in new products represent for a private firm and for DOD. In a private firm, a decision to invest in a new product, like a new car design, represents an expense. Company funds must be expended that will not provide a revenue return until the product is developed, produced, and sold. 
In DOD, new products, in the form of budget line items, can represent revenue. An agency may be able to justify a larger budget if it can win approval for more programs. Thus, weapon system programs can be viewed both as expenditures and revenue generators. Second, budgets to support major program commitments must be approved well ahead of when the information needed to support the decision to commit is available. Take, for example, a decision to start a new program scheduled for August 2016. Funding for that decision would have to be included in the fiscal year 2016 budget. This budget would be submitted to Congress in February 2015—18 months before the program decision review is actually held. DOD would have committed to the funding before the budget request went to Congress. It is likely that the requirements, technologies, and cost estimates for the new program— essential to successful execution—may not be very solid at the time of funding approval. Once the hard-fought budget debates put money on the table for a program, it is very hard to take it away later, when the actual program decision point is reached. Third, to the extent a program wins funding, the principles and practices it embodies are thus endorsed. So, if a program is funded despite having an unrealistic schedule or requirements, that decision reinforces those characteristics, not sound acquisition processes. Pressure to make exceptions for programs that do not measure up are rationalized in a number of ways: an urgent threat needs to be met; a production capability needs to be preserved; despite shortfalls, the new system is more capable than the one it is replacing; or the new system’s problems will be fixed in the future. It is the funding approvals that ultimately define acquisition policy. DOD has a unique relationship with the defense industry that differs from the commercial marketplace. 
The combination of a single buyer (DOD), a few very large prime contractors in each segment of the industry, and a limited number of weapon programs constitutes a structure for doing business that is altogether different from a classic free market. For instance, there is less competition, more regulation, and once a contract is awarded, the contractor has considerable power. Moreover, in the defense marketplace, the firm and the customer have jointly developed the product and, as we have reported previously, the closer the product comes to production the more the customer becomes invested and the less likely they are to walk away from that investment. While a defense firm and a military customer may share some of the same goals, important goals are different. Defense firms are accountable to their shareholders and can also build constituencies outside the direct business relationship between them and their customers. This relationship does not fit easily into a contract. J. Ronald Fox, author of Defense Acquisition Reform 1960-2009: An Elusive Goal, sums up the situation as follows. “Many defense acquisition problems are rooted in the mistaken belief that the defense industry and the government-industry relationship in defense acquisition fit naturally into the free enterprise model. Most Americans believe that the defense industry, as a part of private industry, is equipped to handle any kind of development or production program. They also by and large distrust government ‘interference’ in private enterprise. Government and industry defense managers often go to great lengths to preserve the myth that large defense programs are developed and produced through the free enterprise system.” But neither the defense industry nor defense programs are governed by the free market; “major defense acquisition programs rarely offer incentives resembling those of the commercial marketplace.” Dr. 
Fox also points out that in private industry, the program manager concept works well because the managers have genuine decision-making authority, years of training and experience, and understand the roles and tactics within government and industry. In contrast, Dr. Fox concludes that DOD program managers lack the training, experience, and stature of their private sector counterparts, and are influenced by others in their service, DOD, and Congress. Program managers indicated to us that the acquisition process does not enable them to succeed because it does not empower them to make decisions on whether the program is ready to proceed forward or even to make relatively small trade-offs between resources and requirements as unexpected problems are encountered. Program managers said that they are also not able to shift personnel resources to respond to changes affecting the program. Program managers may also be unprepared for their position or forced into the near-term perspective of their tenures. In this environment, the effectiveness of management can rise and fall on the strength of individuals; accountability for long-term results is, at best, elusive. In my more than 30 years in the area, I do not know of a time or era when weapon system programs did not exhibit the same symptoms that they do today. Similarly, I do not subscribe to the view that the acquisition process is too rigid and cumbersome. Clearly, this could be the case if every acquisition followed the same process and strategy without exception. But they do not. We repeatedly report on programs approved to modify policy and follow their own process. DOD refers to this as tailoring, and we see plenty of it. At this point, we should build on existing reforms—not necessarily by revisiting the process itself but by augmenting it by tackling incentives.
To do this, we need to look differently at the familiar outcomes of weapon systems acquisition—such as cost growth, schedule delays, large support burdens, and reduced buying power. Some of these undesirable outcomes are clearly due to honest mistakes and unforeseen obstacles. However, they also occur not because they are inadvertent but because they are encouraged by the incentive structure. I do not think it is sufficient to define the problem as an objective process that is broken. Rather, it is more accurate to view the problem as a sophisticated process whose consistent results are indicative of its being in equilibrium. The rules and policies are clear about what to do, but other incentives force compromises. The persistence of undesirable outcomes such as cost growth and schedule delays suggests that these are consequences that participants in the process have been willing to accept. Drawing on our extensive body of work in weapon systems acquisition, I have four areas of focus regarding where to go from here. These are not intended to be all-encompassing, but rather, practical places to start the hard work of realigning incentives with desired results. Reinforce desirable principles at the start of new programs: The principles and practices programs embrace are determined not by policy, but by decisions. These decisions involve more than the program at hand: they send signals on what is acceptable. If programs that do not abide by sound acquisition principles win funding, then seeds of poor outcomes are planted. The highest point of leverage is at the start of a new program. Decision makers must ensure that new programs exhibit desirable principles before they are approved and funded. Programs that present well-informed acquisition strategies with reasonable and incremental requirements and reasonable assumptions about available funding should be given credit for a good business case. 
As an example, the Presidential Helicopter, the Armored Multi Purpose Vehicle, the Enhanced Polar System, and the Ground Combat Vehicle are all acquisitions estimated to cost at least a billion dollars, in some cases several billions of dollars, and slated to start in 2014. These could be viewed as a “freshman” class of acquisitions. There is such a class every year, and it would be beneficial for DOD and Congress to assess them as a group to ensure that they embody the right principles and practices. Identify significant program risks upfront and resource them: Weapon acquisition programs by their nature involve risks, some much more than others. The desired state is not zero risk or elimination of all cost growth. But we can do better than we do now. The primary consequences of risk are often the need for additional time and money. Yet, when significant risks are taken, they are often taken under the guise that they are manageable and that risk mitigation plans are in place. In my experience, such plans do not set aside time and money to account for the risks taken. Yet in today’s climate, it is understandable—any sign of weakness in a program can doom its funding. This needs to change. If programs are to take significant risks, whether they are technical in nature or related to an accelerated schedule, these risks should be declared and the resource consequences acknowledged. Less risky options and potential off-ramps should be presented as alternatives. Decisions can then be made with full information, including decisions to accept the risks identified. If the risks are acknowledged and accepted by DOD and Congress, the program should be supported. More closely align budget decisions and program decisions: Because budget decisions are often made years ahead of program decisions, they depend on the promises and projections of program sponsors. 
Contentious budget battles create incentives for sponsors to be optimistic and make it hard to change course as projections fade in the face of information. This is not about bad actors; rather, optimism is a rational response to the way money flows to programs. Aside from these consequences, planning ahead to make sure money is available in the future is a sound practice. I am not sure there is an obvious remedy for this. But I believe ways to have budget decisions follow program decisions should be explored, without sacrificing the discipline of establishing long-term affordability. Attract, train, and retain acquisition staff and management: Dr. Fox’s book does an excellent job of laying out the flaws in the current ways DOD selects, trains, and provides a career path for program managers. I refer you to these, as they are sound criticisms. We must also think about supporting people below the program manager who are also instrumental to program outcomes, including engineers, contracting officers, cost analysts, testers, and logisticians. There have been initiatives to support these people, but they have not been consistent over time. The tenure for acquisition executives is a more challenging prospect in that they arguably are at the top of their profession and already expert. What can be done to keep good people in these jobs longer? I am not sure of the answer, but I believe part of the problem is that the contentious environment of acquisition grinds good people down at all levels. In top commercial firms, a new product development is launched with a strong team, corporate funding support, and a time frame of 5 to 6 years or less. In DOD, new weapon system developments can take twice as long, have turnover in key positions, and every year must contend for funding. This does not necessarily make for an attractive career. Mr. Chairman, this concludes my statement and I would be happy to answer any questions.
DOD's acquisition of major weapon systems has been on GAO's high-risk list since 1990. Over the past 50 years, Congress and DOD have continually explored ways to improve acquisition outcomes, including reforms that have championed sound management practices, such as realistic cost estimating, prototyping, and systems engineering. Too often, GAO reports on the same kinds of problems today that it did over 20 years ago. The topic of today's hearing is: "25 Years of Acquisition Reform: Where Do We Go From Here?" To that end, this testimony discusses (1) the performance of DOD's major defense acquisition program portfolio; (2) the management policies and processes currently in place to guide those acquisitions; (3) the incentives to deviate from otherwise sound acquisition practices; and (4) suggestions to temper these incentives. This statement draws from GAO's extensive body of work on DOD's acquisition of weapon systems. The Department of Defense (DOD) must get better outcomes from its weapon system investments, which in recent years have totaled $1.5 trillion or more. Recently, there have been some improvements, owing in part to reforms. For example, cost growth declined between 2011 and 2012, and a number of programs also improved their buying power by finding efficiencies in development or production and through requirements changes. Still, cost and schedule growth remain significant; 39 percent of fiscal year 2012 programs have had unit cost growth of 25 percent or more. DOD's acquisition policy provides a methodological framework for developers to gather knowledge that confirms that their technologies are mature, their designs are stable, and their production processes are in control. The Weapon Systems Acquisition Reform Act of 2009 and DOD's recent "Better Buying Power" initiatives introduced significant changes that, when fully implemented, should further strengthen practices that can lead to successful acquisitions.
GAO has also made numerous recommendations to improve the acquisition process, based on its extensive work in the area. While recent reforms have benefited individual programs, it is premature to say there is a trend or a corner has been turned. The reforms still face implementation challenges and have not yet been institutionalized within the services. Reforms that focus on the methodological procedures of the acquisition process are only partial remedies because they do not address incentives to deviate from sound practices. Weapons acquisition is a complicated enterprise, complete with unintended incentives that encourage moving programs forward by delaying testing and employing other problematic practices. These incentives stem from several factors. For example, the different participants in the acquisition process impose conflicting demands on weapon programs so that their purpose transcends just filling voids in military capability. Also, the budget process forces funding decisions to be made well in advance of program decisions, which encourages undue optimism about program risks and costs. Finally, DOD program managers' short tenures and limitations in experience and training can foster a short-term focus and put them at a disadvantage with their industry counterparts. Drawing on its extensive body of work in weapon systems acquisition, GAO sees several areas of focus regarding where to go from here: at the start of new programs, using funding decisions to reinforce desirable principles such as well-informed acquisition strategies; identifying significant risks up front and resourcing them; exploring ways to align budget decisions and program decisions more closely; and attracting, training, and retaining acquisition staff and managers so that they are both empowered and accountable for program outcomes. These areas are not intended to be all-encompassing, but rather, practical places to start the hard work of realigning incentives with desired results.
Several offices within FAA’s Air Traffic Organization and Office of Regulation and Certification have responsibility for approving ground systems and certifying aircraft equipment, as shown in figure 1. Before the creation of the Air Traffic Organization in November 2003, FAA’s Research and Acquisitions (acquisitions office) and Air Traffic Services were the primary offices responsible for approving ground systems for safe use in the national airspace system. The 5 systems that we reviewed began the approval process under that structure. Currently, these offices, although renamed, form the core of the Air Traffic Organization. The responsibilities of Air Traffic Services are now distributed among several offices, including System Operations Services and Terminal Services. The responsibilities of Research and Acquisitions are distributed among several offices, including Technical Operations Services and En Route and Oceanic Services. In addition, the Air Traffic Organization includes Safety Services, which is its focal point for safety, quality assurance, and quality control and is the primary interface with FAA’s Office of Regulation and Certification. FAA’s Office of Regulation and Certification has responsibility for certifying and regulating aircraft and their equipment. The following 3 offices within the Office of Regulation and Certification are involved in the certification of aircraft equipment: Aircraft Certification Service (aircraft certification office) is responsible for administering safety standards for aircraft and aircraft equipment that are manufactured in the United States. Flight Standards Service is responsible for granting operational approval to air carriers that plan to use equipment on their aircraft. Air Traffic Safety Oversight Service is responsible for monitoring the safety of air traffic operations through the establishment, approval, and acceptance of safety standards and the monitoring of safety performance and trends.
It will also improve coordination between the Office of Regulation and Certification and the Air Traffic Organization. In addition to the internal FAA stakeholders, the approval of air traffic control (ATC) systems can also involve a number of external stakeholders. FAA generally decides which other stakeholders will be involved in approving ATC systems for safe use in the national airspace system. For example, stakeholders involved in approving ATC systems may include manufacturers of aircraft equipment and users, such as controllers and maintenance technicians. FAA also regularly asks RTCA, a private, not-for-profit corporation, to develop consensus-based performance standards for the aircraft equipment component of ATC systems. RTCA functions as a federal advisory committee that provides recommendations used by FAA as the basis for policy, program, and regulatory decisions and by the private sector as the basis for development, investment, and other business decisions. In this report, we focus on the approval of the 5 ATC systems described in table 1 and further discussed in appendixes II through VI. FAA has separate processes for approving ground systems and certifying aircraft equipment for safe use in the national airspace system. FAA’s process for approving ground systems, such as radar systems, is done in accordance with policies and procedures in FAA’s Acquisition Management System. This process involves a determination by FAA’s Air Traffic Organization regarding whether a vendor is in compliance with contract requirements and/or FAA operational requirements, followed by a rigorous test-and-evaluation process to ensure that the new system will operate safely in the national airspace system. In contrast, the process for certifying aircraft equipment, which is usually developed by private companies, is done in accordance with Federal Aviation Regulations, with FAA serving as the regulator.
If an ATC system has both a ground system and aircraft equipment, as was the case for 3 of the 5 systems we reviewed, then the system must go through both processes before it is approved for safe use in the national airspace system. The approval of a ground system focuses on safety and is done in accordance with FAA contract documents and policies and procedures that are part of the agency’s Acquisition Management System. Most ground systems that provide air traffic services and air navigation services are developed, owned, and operated by FAA. Prior to November 2003, FAA’s Research and Acquisitions and Air Traffic Services offices were responsible for the approval of ground systems. Currently, FAA’s Air Traffic Organization has primary responsibility for the approval of ground systems. FAA’s ground system approval process includes the following six phases—concept of operations, requirements setting, design and development, test and evaluation, operational readiness, and commissioning—and involves various stakeholders, which are also noted below. Concept of operations: The ground system approval process begins with the concept of operations phase. If the system being developed has both a ground system and aircraft equipment, FAA’s Office of Regulation and Certification, Air Traffic Services Office, and Acquisitions Office may work together to develop the concept of operations. During this phase, FAA generally identifies and defines a service or capability to meet a particular need in the national airspace system and may involve other stakeholders, such as air traffic controllers. FAA also defines the roles and responsibilities of key participants, such as controllers and maintenance technicians, and the key elements of the required capability. The concept of operations phase is not a static process.
As FAA obtains more information about the system it develops, the concept is revised to reflect the new information even though the next phase of the process may have already begun. Potential stakeholders in this phase include FAA’s Office of Regulation and Certification, FAA’s Air Traffic Organization, aircraft manufacturers, aviation industry associations, airlines, air traffic controllers, maintenance technicians, manufacturers of aircraft equipment, ground system developers, and representatives of general aviation. Requirements setting: During the requirements-setting phase, FAA establishes a minimum set of requirements, including safety objectives, and specifies how well the new system must perform its intended functions. For example, it was during this phase that FAA established WAAS’ and LAAS’ integrity requirement—which is that the system cannot fail to warn pilots of misleading information that could potentially create hazardous situations more than once in 10 million approaches. After analyzing the initial requirements and comparing the cost, benefits, schedule, and risk of various solutions, FAA sets final requirements and presents them to the Joint Resources Council as part of the investment plan. After the council has approved the requirements for the new system, FAA will issue a request for proposals, evaluate the offers received, and select a contractor to design a system based on the requirements set by FAA. Potential stakeholders in this phase include FAA’s Office of Regulation and Certification, FAA’s Air Traffic Organization, aircraft manufacturers, aviation industry associations, airlines, air traffic controllers, maintenance technicians, manufacturers of aircraft equipment, ground system developers, and representatives of general aviation. Design and development: The design and development of ground systems is generally completed by a contractor and monitored by FAA. 
During this phase, the contractor conducts preliminary and critical design reviews, which include plans for how it will conduct the testing phase. FAA must approve these plans before the contractor can proceed to the next phase. Potential stakeholders in this phase include FAA, ground system developers, air traffic controllers, and maintenance technicians. Test and evaluation: After FAA has approved the design and development of the system, it is ready to be tested and evaluated. The testing and evaluation of ground systems typically includes three major tests: development tests, operational tests, and an independent operational test and evaluation. Development testing is performed by the contractor to verify compliance with contractual requirements and is overseen by FAA. Operational testing is performed by FAA and is designed to demonstrate that a new system is operationally effective and suitable for use in the national airspace system. An independent operational test and evaluation is a full system-level evaluation conducted by FAA in an operational environment to confirm the operational readiness of a system to be part of the national airspace system. Potential stakeholders in this phase include FAA, ground system developers, air traffic controllers, and maintenance technicians. Operational readiness: During the operational readiness phase, FAA personnel are trained to operate and maintain the new system, usually in conjunction with its predecessor system. Following operational readiness approval, the system is ready to be commissioned. Potential stakeholders in this phase include FAA, ground system developers, air traffic controllers, and maintenance technicians. Commissioning: The commissioning phase ensures that the new ground system as installed meets the intended mission and operational requirements and is fully supported by the national airspace system infrastructure. 
Potential stakeholders in this phase include FAA, ground system developers, air traffic controllers, and maintenance technicians. In contrast to the ground system approval process, certification of aircraft equipment is done in accordance with procedures outlined in the Federal Aviation Regulations, Title 14, Code of Federal Regulations, Part 21. Under Title 49, Section 44704, of the U.S. Code, FAA has the authority to issue type certificates, supplemental type certificates, and production certificates, among others, for aircraft and equipment that will be used in the national airspace system. Unlike the approval of ground systems, which FAA accomplishes with the help of a contractor, FAA is the regulator of aircraft equipment and is not typically involved in the development of the equipment. An applicant, such as a manufacturer of aircraft equipment, generally brings fully developed aircraft equipment to FAA for certification. The aircraft equipment certification process includes the following five phases—concept of operations, requirements setting, design and production approval, installation approval, and operational approval—and involves several stakeholders, which are also noted below: Concept of operations: Like the ground system approval process, the aircraft equipment certification process generally begins with the concept of operations phase, when the aircraft equipment is part of an ATC system. If the aircraft equipment certification process is not associated with the approval of a new ground system, then the certification process may begin with an idea for better equipment. During this phase, FAA, sometimes with the help of industry, identifies and defines a service or capability to meet a particular need in the national airspace system. 
Potential stakeholders in this phase include FAA’s Office of Regulation and Certification, FAA’s Air Traffic Organization, aircraft manufacturers, aviation industry associations, airlines, air traffic controllers, maintenance technicians, manufacturers of aircraft equipment, ground system developers, and representatives of general aviation. Requirements setting: Once FAA has identified the need for a new system with aircraft equipment, FAA determines the requirements for the aircraft equipment. In some cases, the requirements for aircraft equipment may already exist in the Federal Aviation Regulations. In other cases, FAA may ask RTCA to develop the requirements, including safety requirements, which are referred to as minimum operating performance standards. RTCA typically takes 1 to 5 years to develop the standards because of the need to reach consensus between FAA and the industry and the increasing complexity of systems being developed today. According to an RTCA official, the time required to develop recommended standards is a function of many variables, including urgency of the situation and the commitment and availability of government and industry volunteers to collaboratively develop the standards. For example, in the case of WAAS, RTCA began setting performance standards in 1994, completed the original version of the standards in January 1996, and completed the most recent version of WAAS performance standards in November 2001. Potential stakeholders in this phase include FAA’s Office of Regulation and Certification, FAA’s Air Traffic Organization, aircraft manufacturers, aviation industry associations, airlines, air traffic controllers, maintenance technicians, manufacturers of aircraft equipment, ground system developers, and representatives of general aviation.
Design and production approval: The requirements/performance standards, most often developed by RTCA, typically form the basis for a technical standard order, which FAA uses to grant design and production approval for most new aircraft equipment developed in support of national airspace system modernization efforts. Technical standard orders are FAA’s requirements for materials, parts, processes, and appliances used on civil aircraft. Most aircraft manufacturers want technical standard orders because they make installation approval simpler and less costly and allow for operation in any type of aircraft. Technical standard orders are issued for items ranging from safety belts to navigation equipment. If the applicant successfully completes the design and production approval phase, FAA provides the applicant with a technical standard order authorization letter, which states that the applicant has met a specific technical standard order and the product is now ready for the installation approval phase. Potential stakeholders in this phase include FAA’s Aircraft Certification Service, manufacturers of aircraft equipment, and aircraft manufacturers. Installation approval: After receiving a technical standard order authorization for new aircraft equipment, the initial applicant must receive installation approval from FAA before the aircraft equipment may be used in the national airspace system. To receive installation approval, the applicant submits a certification plan and test plan to one of FAA’s aircraft certification offices for review and approval. In addition, the applicant conducts ground and flight tests under FAA’s supervision to ensure that the new equipment operates properly upon installation. Once the tests are completed to FAA’s satisfaction, FAA issues a supplemental type certificate, which is evidence of FAA’s approval to modify an aircraft from its original design. 
Potential stakeholders in this phase include FAA’s Aircraft Certification Service, manufacturers of aircraft equipment, and aircraft manufacturers. Operational approval: Finally, for the aircraft equipment to become certified for use in the national airspace system by air carrier operators, operational approval is also needed from FAA. To obtain operational approval, the applicant must successfully demonstrate, among other things, that the pilots are properly trained to use the aircraft equipment and that maintenance personnel are properly trained to maintain the equipment. Potential stakeholders in this phase include FAA’s Flight Standards Service, airlines, and representatives of general aviation. FAA faced challenges in approving systems for safe use in the national airspace system that contributed to cost growth, delays, and performance shortfalls in deploying these systems. We identified three specific challenges through the review of 5 ATC systems and our past work. These challenges are the need to involve appropriate stakeholders, such as users and technical experts, throughout the approval process; ensure that the FAA offices that have responsibility for approving ground systems and certifying aircraft equipment effectively coordinate their efforts for integrated systems; and accurately estimate the amount of time needed to meet complex technical requirements at the beginning of the design and development phase. Although most of the challenges we found relate to the ground system approval process, RTCA and the Aerospace Commission have identified challenges with FAA’s aircraft equipment certification process. For example, RTCA found that there was a need for better internal FAA communication and coordination, including the establishment of an organizational focal point to provide coordinated responses to all matters related to ground systems and aircraft equipment. 
In addition, the Aerospace Commission found that FAA’s regulatory process needs to be streamlined to enable the timely development of regulations needed to address new technologies. FAA failed to adequately involve appropriate stakeholders, such as air traffic controllers and maintenance technicians, for 3 of the 5 systems we reviewed. For example, FAA did not adequately involve controllers and maintenance technicians throughout the approval process of STARS, which will replace controller workstations with new color displays, processors, and computer software. Although controllers and technicians were involved in developing requirements for STARS in 1994 prior to the 1996 contract award to Raytheon, the original approved acquisition plan provided for only limited human factors evaluation by controllers and technicians during STARS’ design and development because the aggressive development schedule limited the amount of time available to involve them. Consequently, FAA and Raytheon had to restructure the contract to address controllers’ concerns that were identified later, such as the inconsistency of visual warning alarms and color codes with the new system. According to FAA officials, not involving controllers and maintenance technicians in the design phase caused the agency to revise its strategy for acquiring and approving STARS, which contributed to STARS’ overall cost growth of $500 million and added 3 years to the schedule. FAA also did not always sufficiently involve technical experts early in its approval process for 2 additional systems that we reviewed. For example, FAA did not obtain technical expertise on how to resolve the integrity requirement of WAAS, a navigation system for aviation that augments the Global Positioning System (GPS), until late in the design and development phase. FAA acknowledges that the agency’s in-house technical expertise was not sufficient to address the technical challenges of WAAS. 
Initially, FAA and the contractor believed they could meet the WAAS integrity requirement to alert the pilot in a timely manner when the system should not be used. However, although WAAS was being developed by an integrated product team that included representatives from several FAA offices, the team did not function effectively in resolving issues related to meeting that requirement. According to FAA officials, the reason coordination did not occur was that the two offices had competing priorities that were not associated with WAAS’ development. Consequently, in 2000, FAA convened the WAAS Integrity Performance Panel to help it meet the integrity requirement. The panel worked for about 2-1/2 years before it came up with a solution to the integrity requirement. In addition, in August 2000, the agency established an Independent Review Board, which was independent of the panel and included experts in satellite navigation and safety certification, to oversee the panel and evaluate the soundness of its efforts. According to a member of the WAAS Integrity Performance Panel, if FAA had involved these technical groups immediately after the contract was awarded to Raytheon in 1996, these groups could have started devising a solution in 1996, rather than in 2000. This lack of technical expertise contributed to a 6-year delay in WAAS’ commissioning and a $1.5 billion increase in its development costs from the 1994 baseline. FAA also did not fully engage technical experts early in the approval process of LAAS, a precision approach and landing system that will augment GPS. According to FAA officials, meeting the LAAS integrity requirement to alert the pilot in a timely manner when the system should not be used is perhaps the most difficult part of approving this system for safe use in the national airspace system.
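To put the "no more than once in 10 million approaches" integrity figure in perspective, the following is a short, hypothetical calculation; it is not drawn from FAA's analyses, and the independence assumption and function name below are illustrative only:

```python
# Hypothetical illustration of the WAAS/LAAS integrity requirement:
# the probability of undetected misleading information must not
# exceed 1 in 10 million (1e-7) per approach.

def prob_at_least_one_failure(per_approach_risk: float, approaches: int) -> float:
    """Chance of at least one undetected integrity failure over a
    number of approaches, treating each approach as independent
    (a simplifying assumption made here for illustration only)."""
    return 1.0 - (1.0 - per_approach_risk) ** approaches

RISK = 1e-7  # the "1 in 10 million approaches" requirement

for n in (100_000, 1_000_000, 10_000_000):
    print(f"{n:>10,} approaches -> {prob_at_least_one_failure(RISK, n):.4f}")
```

Under these assumptions, roughly 1 percent of 100,000-approach blocks, 9.5 percent of million-approach blocks, and 63 percent of ten-million-approach blocks would contain at least one undetected failure, which helps convey how stringent a 1e-7 per-approach level is and why validating it proved so difficult.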
According to the Department of Transportation’s Inspector General, although FAA had a LAAS Integrity Panel in place since 1996 to assist with its research and development activities, the panel was not formally tasked with resolving LAAS’ integrity issues. According to one satellite navigation expert and the Department of Transportation’s Inspector General, focusing the LAAS Integrity Panel on resolving the integrity requirement early in the approval process may have enabled FAA to develop a quicker solution. In 2003, FAA focused the LAAS Integrity Panel on developing a solution to meet the integrity requirement. However, FAA and another satellite expert maintain that the technical complexity of this problem is the main reason that LAAS has not been commissioned. According to FAA officials, the need to validate integrity requirements and complete further software development has resulted in FAA placing LAAS in its research and development program and suspending funding for fiscal year 2005. In contrast, FAA faced fewer schedule and cost problems in approving ASDE-X for use in the national airspace system. This was, in part, because FAA included stakeholders early and throughout the approval process and because program managers had strong technical expertise. The ASDE-X program office brought in stakeholders, including maintenance technicians and air traffic controllers, during the concept of operations phase and continued to involve them during requirements setting, design and development, and test and evaluation. FAA also brought ASDE-X stakeholders together at technical meetings to provide input on ASDE-X design and development, which allowed the ASDE-X program office to design a system that met requirements and incorporated stakeholders’ needs. By obtaining the input of controllers and technicians at the beginning of the approval process, FAA was able to ensure that ASDE-X requirements were set at appropriate levels and not overspecified or underspecified.
Some stakeholders commented that the program managers’ strong technical expertise was one reason that ASDE-X’s requirements were set appropriately. As a result, this system was initially commissioned only 5 months behind schedule and its cost increased moderately from $424 million to $510 million. FAA did not always effectively coordinate its certification and approval processes for CPDLC, WAAS, and LAAS. Coordination between FAA’s offices responsible for approval of ground systems and certification of aircraft equipment is becoming increasingly important given that more and more ATC systems have both ground systems and aircraft equipment. However, we found that coordination was not effective on CPDLC Build 1A, which allows pilots and controllers to transmit digital data messages directly between FAA ground automation systems and suitably equipped aircraft. In the interest of meeting the original cost and schedule estimates, FAA awarded the contract before it had a full understanding of system requirements. Requirements that specify how the ground system and aircraft equipment would operate together were not yet completed prior to award of the Build 1A contract. Consequently, changes needed to be made after the contract was awarded. New hardware requirements, software requirements, and other system requirement changes were added, which increased CPDLC’s costs by $41 million, almost 61 percent of the total cost increases associated with CPDLC. The lack of effective coordination among FAA offices responsible for approving WAAS also contributed to delays and increased costs in commissioning WAAS. Although WAAS was being developed by an integrated product team that included representatives from various FAA offices, the team did not function effectively in resolving issues related to meeting an important functional requirement to alert the pilot in a timely manner when the system should not be used because of a possible error. 
According to FAA officials, the reason coordination was not effective was that the two offices had competing priorities that were not associated with development of WAAS. Consequently, it was not until September 1999, when the aircraft certification office became fully involved, that FAA recognized that its solution to meet WAAS’ integrity requirement was not sufficient and that it did not have the technical expertise needed to develop a solution. This lack of coordination contributed to a 6-year delay in WAAS’ commissioning and a $1.5 billion increase in its development costs. LAAS is another example of how FAA did not effectively coordinate its efforts. For example, FAA’s Office of Regulation and Certification completed the design and production approval of LAAS aircraft equipment without effectively coordinating with the offices responsible for acquisition to determine the consequences of certifying aircraft equipment before approval of the associated ground system. According to an FAA official, once the Office of Regulation and Certification has given design and production approval to the LAAS aircraft equipment, it is not possible to make a change to the requirements for the aircraft equipment so that they are better integrated with the associated LAAS ground system. Consequently, LAAS ground system developers may have to make more costly and time-consuming changes to the ground system than would have been necessary if the Office of Regulation and Certification and acquisitions offices had coordinated their efforts. We have reported in the past that when FAA attempts to combine different phases of system development in an effort to more quickly implement the systems to meet milestones, it repeatedly experiences major performance shortfalls and rework, which leads to schedule delays and cost increases.
We found that WAAS, STARS, and LAAS all experienced delays and cost increases in part because FAA did not prepare accurate estimates of the amount of time needed to meet complex technical requirements, leading to an accelerated schedule that sometimes failed to include activities such as human factors evaluations and technical expert consultations. For example, in 1994, in response to the concerns of government and aviation groups, FAA accelerated implementation of WAAS milestones from 2000 to 1997. FAA planned to develop, test, and deploy WAAS within 28 months, an unrealistic goal given that software development alone was expected to take 24 to 28 months. It was not until July 2003, over 6 years later, that FAA was able to commission WAAS for initial operating capability. The accelerated schedule contributed to the 6-year delay in the commissioning of the system because the schedule itself was unrealistic and additional design work needed to be completed. During that time, the cost to develop the system increased about $1.5 billion, and the system has yet to meet its original performance goal of providing pilots with the ability to navigate down to 200 feet during their approach to the runway. FAA also accelerated the schedule for STARS in 1995. FAA’s approach to commissioning STARS was oriented toward rapid deployment to meet critical needs for new equipment. To meet these needs, FAA compressed its original development and testing schedule from 32 to 25 months. Consequently, this acceleration in schedule left only limited time for human factors evaluations and, according to FAA officials, contributed to STARS’ overall cost growth of $500 million and added 3 years to the first deployment because the agency had to revise its strategy for acquiring and approving STARS. Although it had not yet developed a solution for meeting the integrity requirement, FAA also accelerated the LAAS schedule in 1999 by setting system milestones before completely designing the system.
FAA originally planned to deploy LAAS in 2002 but has since moved it to fiscal year 2009 because the system’s software development is not complete and a solution for meeting LAAS’ integrity requirements has yet to be developed. RTCA and the Aerospace Commission also identified challenges with FAA’s process for approving ground systems and certifying aircraft equipment. In 1998, at the request of the FAA Administrator, RTCA reviewed FAA’s certification/approval process to determine if it could be made more responsive to the changing state of aviation, including its more integrated technologies. RTCA found that FAA’s ground system approval process and aircraft equipment certification process took too long and cost too much, and RTCA made several recommendations to improve the processes. For example, in 2001, RTCA recommended that FAA implement a coordinated approval process that, among other things, would ensure that all stakeholders, including those outside FAA’s program offices, participate in all phases of the approval process. Specifically, similar to our finding that the FAA offices that had responsibility for approving ground systems and certifying aircraft equipment did not always effectively coordinate their efforts, RTCA found that there was a need for better internal FAA communication and coordination, including the establishment of an organizational focal point to provide coordinated responses to all matters related to ground systems and aircraft equipment. RTCA also found that there was a need for an earlier and better exchange of information between FAA and those involved in the approval and certification processes from outside FAA, such as manufacturers of aircraft equipment. In 2000, Congress asked the Commission on the Future of the U.S. Aerospace Industry to study the health of the aerospace industry and identify actions that the United States needs to take to ensure the industry’s health. 
As part of this study, the Aerospace Commission reviewed FAA's certification process for aircraft equipment and made recommendations. The Aerospace Commission found that FAA's certification of new aircraft technologies has become uncertain in terms of time and cost and recommended that FAA's regulatory process be streamlined to enable the timely development of regulations needed to address new technologies. According to the Aerospace Commission, instead of focusing on rules and regulations that dictate the design and approval of equipment, FAA should focus on certifying that manufacturing organizations have safety built into their processes for designing, testing, and ensuring the performance of an overall system. The commission believed that such an approach would allow FAA personnel to better keep up with technological progress by becoming less design-specific and more safety-focused. FAA has taken action to address two of the three management challenges that we identified. However, FAA has not taken action to ensure that all stakeholders, such as air traffic controllers, maintenance technicians, technical experts, and industry representatives, are involved throughout the ground system approval process. FAA has also taken some action to address recommendations made by RTCA and the Aerospace Commission. Examples of the actions FAA has taken to address the management challenges we found, as well as RTCA and Aerospace Commission recommendations, are discussed below: Coordinating FAA's acquisitions offices and Office of Regulation and Certification efforts for approving systems with ground and aircraft components: FAA officials believe that the agency's new Safety Management System, which is designed to formalize the agency's safety process, will also improve coordination among FAA internal stakeholders once it is implemented.
FAA stated that coordination would improve because as part of the new Safety Management System the agency plans to realign its organizational structure to create a formal link between the Air Traffic Organization and the Office of Regulation and Certification. Within the Office of Regulation and Certification, there is the newly created Air Traffic Safety Oversight Service, which oversees the safety operations of the Air Traffic Organization and collaborates with the Air Traffic Organization's Safety Services. In addition, according to FAA officials, both ground systems and aircraft equipment will be more consistently assessed for their effect on safety as safety terminology is standardized. FAA expects full implementation to take 3 to 5 years. We are reserving judgment on whether this change will fully address the challenge because of the early state of this effort and because FAA's problems with internal coordination when approving ATC systems are long-standing. Moreover, because FAA has historically faced internal and external coordination challenges in approving ATC systems for safe use in the national airspace, we believe that as FAA moves forward with the agency's new Safety Management System, it should, in the interim, develop plans that describe how both internal and external coordination will occur on a system-specific basis. Plans to include external stakeholders are particularly important because the Safety Management System is not intended to address this challenge. Estimating the amount of time needed to meet complex technical requirements: During the development of WAAS and STARS, FAA adopted an incremental approach to developing and testing these systems to get them back on track, which is referred to as the "build a little, test a little" or spiral development approach.
For example, to get WAAS back on track, FAA decided to take a more incremental approach to implementing the new navigation system—focusing more on the successful completion of research and development before starting system approval. In particular, FAA allowed time for collecting and evaluating data on key system performance requirements like the WAAS integrity requirement before moving forward. FAA officials acknowledged that the agency's approach to WAAS development before it adopted this incremental approach was high risk and a primary source of the system's problems. Some aviation stakeholders believe this approach is advantageous because, although it can increase costs initially, money can be saved in the long run because the approach may help to avoid mistakes that are very costly to fix once a system has been developed. This approach also helps to ensure that the necessary building blocks of a system are tested along the way through the early and ongoing involvement of key stakeholders, those who will use and maintain the system. These stakeholders are key to identifying critical omissions and issues that could prevent a system from operating as intended. As previously discussed, RTCA and the Aerospace Commission reviewed FAA's approval process and made a number of recommendations to improve it. FAA has taken some action to address these recommendations. For example: In response to RTCA's recommendation to implement a process in which the regulators and applicants come to an early and clear agreement on their respective roles, responsibilities, expectations, schedules, and standards to be used in certification projects, FAA issued The FAA and Industry Guide to Avionics Approval in 2001, which is intended to help FAA reduce the time and cost for the certification of aircraft equipment.
This guide describes how to plan, manage, and document an effective, efficient aircraft equipment certification process and how to develop a working relationship between FAA and the applicant. In addition, as part of the 1999 FAA and Industry Guide to Product Certification, FAA encourages the manufacturers of aircraft equipment to develop a Partnership for Safety Plan that defines roles and responsibilities, describes how the certification process will be conducted, and identifies the milestones for completing the certification. A WAAS aircraft equipment manufacturer said that the certification of the WAAS aircraft equipment it developed went smoothly, primarily because of this up-front agreement with FAA. Although FAA’s actions address the aircraft equipment certification process, it does not have a similar process for its ground system approval process. In response to RTCA’s recommendation to establish an organizational focal point to provide one-stop service to users, industry, and other governments in all matters related to advanced ground electronics and aircraft equipment, FAA has completed a Web site that provides a broad range of information on the certification process for aircraft equipment. However, there is still no focal point to which industry can address questions about the approval process and be assured of getting a fully coordinated FAA answer. In response to the Aerospace Commission’s recommendation to streamline its aircraft equipment certification process to ensure timely development of regulations needed to address new technologies and to focus on certifying that manufacturing organizations have built safety into their processes for designing, testing, and ensuring the performance of an overall system, FAA proposed creating an Organizational Designation Authorization program in January 2004. 
The program would expand the approval functions of FAA organizational designees, standardize these functions to increase efficiency, and expand eligibility for organizational designees. FAA did not always include stakeholders throughout the process for approving ATC systems for safe use in the national airspace system. Including stakeholders is particularly important because the new ATC systems are more integrated today than in the past and thus require more coordination among all the stakeholders, particularly FAA’s Office of Regulation and Certification and the recently created Air Traffic Organization, but also between FAA and other stakeholders, such as technical experts, controllers, and maintenance technicians. When decisions regarding integrated ATC systems are made in isolation, they may contribute to the ineffective use of resources and time. We found that 3 of the 5 ATC systems we reviewed experienced cost growth and schedule delays, in part, because FAA did not always involve all necessary stakeholders, such as controllers and technical experts, throughout the approval process. In 2001, RTCA recommended that FAA implement a coordinated approval process that, among other things, would ensure that all stakeholders, including those outside FAA’s program offices, participate in all phases of the approval process. We agree with RTCA’s recommendation, which FAA has not fully implemented, and believe that fully implementing it would help address some of the challenges we found with FAA’s approval and certification processes. In addition, although FAA’s new Safety Management System and the planned alignment between FAA’s Air Traffic Organization and Office of Regulation and Certification have the potential to improve FAA’s internal coordination, FAA has just begun implementing these initiatives with full implementation 3 to 5 years away. 
FAA also has historically faced internal coordination challenges in approving ATC systems for safe use in the national airspace system, as we found for each of the 3 integrated systems that we reviewed. We believe that the implementation of the Safety Management System, coupled with the new formal link between FAA's Air Traffic Organization and Office of Regulation and Certification, will give FAA the opportunity to improve its internal coordination among its offices that are responsible for ground system approval and aircraft equipment certification. However, the system will not be fully implemented for 3 to 5 years. Therefore, because of FAA's history of internal and external coordination challenges, such as the lack of effective coordination between FAA offices responsible for approving WAAS, which contributed to WAAS' cost increase of about $1.5 billion and schedule delays of 6 years, we believe that specific plans for improving coordination both internally and externally on a system-specific basis are needed now. To ensure that key stakeholders, such as air traffic controllers, maintenance technicians, and technical experts, outside FAA's acquisitions offices and Office of Regulation and Certification, are involved early and throughout FAA's ground system approval process and to ensure better internal coordination between FAA's offices responsible for approving ground systems and certifying aircraft equipment, we recommend that the Secretary of Transportation direct the Administrator of FAA to develop ATC system-specific plans early in the approval process that specify how and when the approving and certifying offices within FAA and other stakeholders, including controllers, maintenance technicians, technical experts, and industry representatives, will meet to ensure coordination. We provided a draft of this report to the Secretary of Transportation for review and comment.
FAA generally agreed with our findings and recommendation and provided technical corrections, which we incorporated as appropriate. FAA also commented that it has started to take actions to improve its coordination efforts for integrated ATC systems. We are sending copies of this report to interested congressional committees, the Secretary of Transportation, and the FAA Administrator. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your staff have questions on matters discussed in this report, please contact me on (202) 512-2834 or at siggerudk@gao.gov. GAO contacts and key contributors to this report are listed in appendix VII. To complete our first objective, to describe FAA’s process for approving air traffic control (ATC) systems for safe use in the national airspace system, we obtained and analyzed documents from the Federal Aviation Administration (FAA) and RTCA’s 1999 report that discussed FAA’s process for certifying aircraft equipment and approving ground systems. We also interviewed FAA officials, contractors, industry experts, and unions representing air traffic controllers and maintenance technicians that are involved in approving ATC systems. To complete our second objective, to describe the challenges FAA has faced approving ATC systems and how those challenges affected the cost, schedule, and performance estimates of the systems, we conducted case illustrations on 5 of FAA’s 25 air traffic control systems that are currently receiving funding: Airport Surface Detection Equipment - Model X (ASDE-X), Controller-Pilot Data Link Communications (CPDLC), Local Area Augmentation System (LAAS), Standard Terminal Automation Replacement System (STARS), and Wide Area Augmentation System (WAAS). 
We selected these 5 systems because collectively they accounted for about 46 percent of FAA's ATC modernization costs in fiscal year 2002 and 3 of the 5 systems are integrated—that is, they require the approval of the ground systems as well as aircraft equipment. To select the 5 case illustration systems, we used FAA's capital investment project data file. We met with knowledgeable FAA officials to discuss issues related to the accuracy and completeness of the data file, which was deemed adequate for the purpose of our work. We also met with knowledgeable FAA officials to determine the number of ATC systems from the data file that needed to be approved before entry into the national airspace system. For each of the case illustrations, we reviewed FAA documents, including acquisition program baseline reports, Joint Resource Council decisions, and briefing documents. We also reviewed GAO and Department of Transportation's Inspector General reports and testimonies. In addition, we interviewed officials from FAA program offices; RTCA; the General Aviation Manufacturers Association; the Air Transport Association; the Aircraft Owners and Pilots Association; NavCanada; Transport Canada; the MITRE Corporation; Boeing; Garmin; Rockwell Collins; contractors, including Honeywell, Raytheon, and the Sensis Corporation; industry experts; the WAAS Integrity Performance Panel; the LAAS Integrity Panel members; and unions representing air traffic controllers and maintenance technicians. To complete our third objective, to describe actions FAA has taken to improve its processes for approving ATC systems, we interviewed representatives from FAA; RTCA; the Commission on the Future of the U.S.
Aerospace Industry; aviation industry groups, including the General Aviation Manufacturers Association, the Air Transport Association, and the Aircraft Owners and Pilots Association; manufacturers of aircraft equipment, including Garmin and Rockwell Collins; Boeing; and contractors, including Honeywell, Raytheon, and the Sensis Corporation; industry experts; and unions representing air traffic controllers and maintenance technicians. We conducted our review in Washington, D.C., from October 2003 through September 2004 in accordance with generally accepted government auditing standards. ASDE-X is an airport surface surveillance system that air traffic controllers use to track aircraft and vehicle surface movements. (See fig. 2.) ASDE-X uses a combination of surface movement primary radar and multilateration sensors to display aircraft position and vehicle position on an ATC tower display. According to FAA, the integration of these sensors provides accurate, up-to-date, and reliable data for improving airport safety in all weather conditions. ASDE-X was developed to prevent accidents resulting from runway incursions, which have increased since 1993. The number of reported runway incursions rose from 186 in 1993 to 383 in 2001. According to FAA, because air traffic in the United States is expected to double by 2010, runway incursions may pose a significant safety threat to U.S. aviation. FAA expects that ASDE-X will increase the level of safety at airports and provide air traffic controllers with detailed information about aircraft locations and movement at night and in bad weather due to the (1) association of flight plan information with aircraft position on controller displays; (2) continuous surveillance coverage of the airport from arrival through departure; (3) elimination of blind spots and coverage gaps; and (4) availability of surveillance data with an accuracy and update rate suitable for, among other things, awareness in all weather conditions. 
In October 2003, FAA commissioned ASDE-X at Mitchell International Airport in Milwaukee, Wisconsin, for use in the national airspace system. ASDE-X came in close to its original schedule and cost baselines. The ASDE-X system was approximately 5 months over its original schedule baseline, but maintained its original performance baselines. In June 2002, FAA approved $80.9 million in additional funding to add ASDE-X at 7 additional sites. (See table 2.) FAA is currently scheduled to deploy ASDE-X at 25 U.S. airports over the next 4 years and to update existing surface detection systems (i.e., ASDE-3) at 9 other facilities. FAA plans to introduce an upgraded ASDE-X system at T.F. Green Airport in Providence, Rhode Island, with deployment tentatively slated for the 4th quarter of 2004. FAA is also investigating whether to add ASDE-X at 25 airports that use ASDE-3 and Airport Movement Area Safety Systems. Of the five systems we reviewed, FAA faced fewer schedule and cost challenges in approving ASDE-X for safe use in the national airspace system. This is partly because FAA included stakeholders early and throughout the approval process and because of the strong technical expertise of its managers. The ASDE-X program office brought in stakeholders, including maintenance technicians and air traffic controllers, beginning with the concept of operations phase and continued their stakeholder involvement through the requirements-setting, design-and-development, and test-and-evaluation phases and then continued involvement throughout the deployment phase. For example, FAA obtained the input of controllers and technicians at the beginning of the approval process, which helped to ensure that ASDE-X requirements were set at appropriate levels and not overspecified or underspecified. Stakeholders pointed toward the strong technical expertise of the program's managers as a reason for the appropriate specification of ASDE-X's requirements.
In addition, FAA brought ASDE-X stakeholders together at technical meetings to provide input on ASDE-X design and development, which allowed the ASDE-X program office to design a system that met requirements and incorporated stakeholders' needs. However, FAA did experience some challenges in approving ASDE-X. In response to Congress' desire to deploy the system quickly, FAA attempted to accelerate ASDE-X's approval. However, FAA experienced problems in accelerating the approval when it awarded the contract before all requirements had been finalized. Table 3 shows the major phases and time frames associated with the ASDE-X approval process. CPDLC will allow pilots and controllers to transmit digital data messages directly between FAA ground automation computers and suitably equipped aircraft. (See fig. 3.) CPDLC is a new way for controllers and pilots to communicate that is analogous to e-mail. The pilot can read the message displayed on a screen in the cockpit and respond to the message with the push of a key. In the future, this will alleviate frequency congestion problems and increase controller efficiency. One of the most important aspects of this technology is its intended reduction of operational errors from misunderstood instructions and readback errors. The initial phase (Build 1) consisted of four services: initial contact, altimeter setting, transfer of communication, and predefined instructions via menu text. The CPDLC program will ultimately develop additional capabilities in an incremental manner through further development stages. Originally, Build 1 was to be followed by Build 1A, which was designed to increase the CPDLC message set and include assignment of speeds, headings, and altitudes as well as a route clearance function. CPDLC was commissioned for initial daily use by controllers at Miami on October 7, 2002. This completed the stage called Build 1, which included four services.
American Airlines is the CPDLC launch airline with about 25 aircraft operating in the Miami Center airspace. Further deployment of CPDLC has been deferred until about 2009 after the Joint Resources Council did not approve the program in April 2003. The council made this decision because it believed that the benefits of CPDLC did not outweigh the costs. A number of factors contributed to this decision. First, FAA had concerns about how quickly aircraft operators would install the new airborne equipment. Second, the approved program baseline was no longer valid as Build 1A investment costs had increased from $114.5 million to $181.7 million, while the number of locations decreased from 20 to 8 as shown in table 4. Third, CPDLC would add $83 million to the operations account. For fiscal year 2005, program officials requested $3 million for CPDLC. According to FAA, this amount would be suitable for shutdown of CPDLC at Miami, closeout of Build 1, and alternatives analysis for a follow-on program. The contractor, ARINC, had been providing messaging service for Miami at no cost. However, the contract for this free service expired on June 30, 2004. A lack of full coordination between FAA's aircraft certification and acquisition offices compromised the schedule and cost of CPDLC; such coordination would have given both offices a full understanding of all requirements. FAA's acquisitions office, in the interest of meeting the original cost and schedule estimates, awarded the contract before FAA had a full understanding of system requirements, including those of FAA's aircraft certification office. Requirements that specified in detail how the air and ground equipment would operate together were not yet completed prior to award of the Build 1A contract. The addition of CPDLC hardware and software requirements increased costs by $26 million, 39 percent of CPDLC's Build 1A development cost growth.
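As a rough cross-check of the CPDLC figures above, the 39 percent share can be reproduced with simple arithmetic. This is a sketch only, and it assumes that the Build 1A development cost growth equals the reported increase in Build 1A investment costs from $114.5 million to $181.7 million:

```python
# CPDLC Build 1A figures as reported above, in millions of dollars.
baseline_investment = 114.5
revised_investment = 181.7
cost_growth = revised_investment - baseline_investment  # about $67 million

# The $26 million in hardware and software requirement additions
# works out to roughly the 39 percent share cited in the text.
share = 26 / cost_growth * 100
print(round(share))  # 39
```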
In addition, other system requirement changes after contract award increased CPDLC’s baseline development cost estimate by another $15 million. In total, these requirement additions increased costs by $41 million, almost 61 percent of the total cost increases associated with CPDLC Build 1A. (See tables 5, 6, and 7 for timelines of CPDLC’s ground system approval and aircraft equipment certification.) LAAS is a precision approach and landing system that will augment the Global Positioning System (GPS) to broadcast highly accurate information to aircraft on the final phases of a flight. LAAS is being developed specifically to provide augmentation to GPS satellites to support Category I, II, and III precision approach and landing capability to aircraft operating within a 20- to 30-mile radius of an airport. LAAS approaches are to be designed to avoid obstacles, restricted airspace, noise-sensitive areas, or congested airspace. In addition, a single LAAS ground station is to be capable of providing precision approach capability to multiple runways. LAAS has both ground and air components. LAAS ground components include four or more GPS reference receivers, which monitor and track GPS signals; very high frequency transmitters for broadcasting the LAAS signal to aircraft; and ground station equipment, which generates precision approach data and is housed at or near an airport. (See fig. 4.) LAAS users will have to purchase aircraft equipment to take advantage of the system’s benefits. FAA’s fiscal year 2005 budget request eliminated funding for LAAS, which is being moved from the acquisition program into a research and development effort. LAAS was slated for a 2006 rollout, but the target has now been deferred until at least 2009. FAA officials said they will reconsider national deployment when more research results are completed. 
Before FAA decided to suspend funding for LAAS in fiscal year 2005, the LAAS program office was negotiating with Honeywell to develop a plan for determining how to meet the integrity requirements for the LAAS Category I system. According to FAA officials, the LAAS program office will use the $18 million remaining in fiscal year 2004 to continue the LAAS Integrity Panel for developing the LAAS Category I system, to validate LAAS Category II/III requirements, and to solve radio frequency interference issues. The $18 million will last through 2005, and FAA's goal is to meet the LAAS integrity requirement by September 2005. Because of the budget cuts in fiscal year 2005, the LAAS program office will not be developing a Category II/III prototype. As shown in table 8, the LAAS Category I system was initially expected to be operational in 2002. However, FAA was unable to meet the milestone, primarily due to development and integrity requirement issues. According to FAA officials, the research needed to validate the integrity requirement of LAAS Category I is scheduled to be completed by September 2005. If funds are fully restored in fiscal year 2005, FAA officials said that a LAAS Category I system can be developed and deployed by fiscal year 2009. FAA faced a number of challenges in approving LAAS for safe use in the national airspace system, including (1) its inability to meet LAAS' integrity requirement, (2) not always communicating with the contractor about what was required to satisfy LAAS ground system requirements, and (3) accelerating the LAAS schedule by setting milestones before designing the system. According to Honeywell officials, meeting the integrity requirement has been perhaps the most difficult part of approving LAAS for safe use in the national airspace system. Under FAA's integrity requirement for LAAS, the system must alert the pilot with timely warnings when it should not be used.
However, FAA has not been able to develop a solution to meet this requirement because it has not been able to prove that the system is safe during solar storms. According to FAA officials, one of the reasons that FAA has not been able to develop a solution to meet this requirement is that a solar storm's effect on the ionosphere has not been modeled. The modeling is scheduled for completion in September 2004, and it will be used to design a monitor for ionosphere anomalies that could be developed and deployed by fiscal year 2009. FAA also did not always communicate with the contractor about what was required to satisfy LAAS ground system requirements. Initially, FAA was in a partnership with industry, including Honeywell and others, to develop a LAAS Category I precision approach and landing system, which has a 200-foot ceiling height and one-half mile visibility. FAA partnered with industry to develop LAAS because FAA would have to pay industry only if industry achieved preset milestones, such as an analysis of the LAAS system integrity requirement. However, the partnership was not able to develop a system that FAA believed would operate safely in the national airspace system. Consequently, FAA decided to acquire LAAS on its own. In April 2003, FAA awarded a contract to Honeywell to develop a LAAS Category I precision approach and landing system. Based on a review of documents at the time the contract was awarded, FAA believed that 80 percent of LAAS was developed and met its ground system requirements. However, 5 months later, after further review, FAA discovered that only about 20 percent of development was complete. Nevertheless, Honeywell believes it met 80 percent of the LAAS requirements. Both parties attribute the disagreement to lack of communication about what was needed to satisfy the LAAS ground system requirements.
In fiscal year 2005, FAA decided to suspend funding and placed LAAS into its research and development program because software development was incomplete and the system could not meet its integrity requirement. According to FAA officials, the research needed to validate the integrity requirement of LAAS Category I is scheduled to be completed by September 2005. If funds are fully restored in fiscal year 2005, FAA believes that a LAAS Category I system can be developed and deployed by fiscal year 2009. FAA also experienced challenges in approving LAAS because it accelerated the schedule in 1998 to meet system milestones before completely designing the system and developing a solution for meeting the LAAS integrity requirement. FAA originally planned to deploy LAAS in 2002 but subsequently had to delay deployment to 2006 because of additional development work, evolving requirements, and unresolved issues regarding how the system would be approved. Lack of a solution for verifying that its integrity requirement had been met and incomplete software development were significant approval issues facing the LAAS program. Table 9 shows the major phases and time frames for approving the LAAS ground system. LAAS aircraft equipment received design and production approval in August 2004. It still awaits installation approval. (See table 10.) Because LAAS' aircraft and ground components are linked, certification of LAAS aircraft equipment has been affected by delays occurring during ground system approval. For example, according to aviation industry officials, requirement additions on LAAS' ground system led to requirement additions on LAAS' aircraft equipment.
According to aviation industry officials, the addition of requirements to the ground system increased the cost and time to develop aircraft equipment, which changed industry's calculation about whether developing LAAS aircraft equipment was a worthwhile investment and may discourage future investment in aircraft equipment that will modernize the national airspace system. FAA's aircraft certification office completed the design and production approval of LAAS aircraft equipment without coordinating with the offices responsible for acquisition to determine the consequences of certifying aircraft equipment before approval of the associated ground system. According to an FAA official, once the aircraft certification office has given design and production approval to the LAAS aircraft equipment, it is not possible to make a change to the requirements for the aircraft equipment so that they are better integrated with the associated LAAS ground system. Consequently, LAAS ground system developers may have to make more costly and time-consuming changes to the ground system than would have been necessary if the aircraft certification and acquisitions offices had coordinated their efforts. STARS is a joint Department of Transportation, FAA, and Department of Defense (DOD) program established under 31 U.S.C. 1535, the Economy Act, as amended, to replace aging FAA and DOD legacy terminal automation systems with state-of-the-art terminal ATC systems. The joint program is intended to avoid duplication of development and logistic costs while providing easier transition of controllers between the civil and military sectors. Civil and military air traffic controllers across the nation are using STARS to direct aircraft near major airports.
FAA’s goal for STARS is to provide an open, expandable terminal automation platform that can accommodate future air traffic growth and allow for the introduction of new hardware- and software-based tools to promote safety, maximize operational efficiency, and improve controllers’ productivity. FAA believes that STARS will facilitate efforts to optimally configure the terminal airspace around the country, exchange digital information between pilots and controllers, and introduce new position and surveillance capabilities for pilots. (See fig. 5.)

In June 2003, FAA first commissioned STARS for use at the Philadelphia International Airport in Pennsylvania. Currently, STARS is fully operational at 25 FAA terminal radar control facilities and 17 DOD facilities. Under the Air Traffic Organization’s new business model of breaking large and complex programs into smaller phases to control cost and schedule, STARS is a candidate for further deployment to about 120 FAA terminal radar control facilities.

As shown in table 11, in April 2004, FAA changed STARS’ cost and schedule estimates for the third time and now estimates that it will cost $1.46 billion to deploy STARS at the 50 most important terminal radar control facilities, which provide air traffic control services to 20 of the nation’s top 35 airports. The original baseline in February 1996 was $940 million for 172 systems. The April 2004 estimate is thus an increase of about $500 million for 122 fewer systems (i.e., over 70 percent fewer) than originally planned.

FAA faced challenges in approving STARS. Although controllers and technicians were involved in developing requirements for STARS prior to the 1996 contract award to Raytheon, the original approved acquisition plan provided only limited human factors evaluation from controllers and technicians during STARS’ design and development phase.
The acquisition approach was to employ a commercial off-the-shelf system with limited modifications, and the competition was limited to companies with already operational ATC systems. In 1997, FAA controllers, who were accustomed to using the older equipment, began to voice concerns about computer-human interface issues that could hamper their ability to monitor air traffic. For example, the controllers noted that many features of the old equipment could be operated with knobs, allowing controllers to focus on the screen. By contrast, the STARS commercial system was menu-driven and required the controllers to make several keystrokes and use a trackball, diverting their attention from the screen. The maintenance technicians also identified differences between STARS and its backup system that made monitoring the system less efficient. For example, the visual warning alarms and color codes identifying problems were not consistent between the two systems.

In 1997, FAA, the National Air Traffic Controllers Association, the Professional Airways System Specialists, and Raytheon formed a team to deal with these computer-human interface issues. The team identified 98 air traffic and 52 airway facilities computer-human interface enhancements to address these issues. FAA and Raytheon restructured the contract to address the technicians’ and controllers’ concerns. According to FAA, not involving controllers and maintenance technicians caused FAA to revise its strategy for approving STARS, which FAA estimates added $500 million and 3 years to the schedule.

The original STARS cost estimate of $940 million included limited human factors evaluations and the use of a basic commercial off-the-shelf configuration. This acquisition strategy was replaced by an incremental development strategy that incorporated up front the majority of human factors considerations and additional functionality that were not included in the original cost estimate.
This new acquisition strategy added years to the development schedule and significantly increased the system’s requirements specifications. These additional requirements resulted in both cost and schedule growth. FAA’s own guidance showed that limiting human factors evaluations would result in higher costs and schedule delays. Initially, it is more expensive (in terms of time and funding) to deal with human factors considerations than to ignore them. However, an initial human factors investment pays high dividends, in terms of costs and schedule, in later stages of acquisition, when changes are more costly and difficult to make.

FAA also experienced challenges in approving STARS partly because of aggressive scheduling. FAA’s approach to approving STARS was oriented toward rapid deployment to meet critical needs. To meet these needs, FAA compressed its original development and testing schedule from 32 months to 25 months. This acceleration left only limited time for human factors evaluations and not enough time for involvement of controllers and maintenance technicians. Table 12 shows the major phases and time frames associated with the STARS approval process.

WAAS is a GPS-based navigation and landing system. According to FAA, WAAS is to improve safety by providing precision guidance to aircraft in all phases of flight at thousands of airports and landing strips, including runways where there is no ground-based landing capability. To use WAAS for navigation, an aircraft must be equipped with a certified WAAS receiver that is able to process the information carried by GPS and WAAS geostationary satellite signals. Pilots are able to use this information to determine their aircraft’s time and speed, as well as its latitude, longitude, and altitude. WAAS currently consists of a network of 25 ground reference stations, 2 leased geostationary satellites, 2 master stations, and 4 uplink (ground earth) stations.
The ground reference stations are strategically positioned across the United States to collect GPS satellite data. (See fig. 6.) WAAS is designed to improve the accuracy, integrity, and availability of information coming from GPS satellites and to correct signal errors caused by solar storms, among other things. FAA expects that WAAS will improve the national airspace system by (1) increasing runway capability; (2) reducing separation standards to allow increased capacity in a given airspace without increased risk; (3) providing more direct en route flight paths; (4) providing new precision approach services; (5) reducing the amount of, and simplifying, equipment on board aircraft; (6) saving the government money by eliminating maintenance costs associated with older, more expensive ground-based navigation aids; and (7) providing vertical guidance in all phases of flight to improve safety.

In July 2003, FAA commissioned WAAS to provide initial operating capability for 95 percent of the United States. That same month, the first LPV approaches were provided, whereby pilots could safely descend to a 250-foot decision height. As of August 2004, about 20 LPV landing procedures had been published for WAAS. With over 4,000 runways needing them, much work still needs to be done to fully utilize the WAAS capability. FAA expects to have WAAS available in the rest of the country, with the exception of a few parts of Alaska, by the end of 2008, when it completes the addition of 13 ground reference stations and 2 leased geostationary satellites. WAAS is not scheduled to achieve full (Category I) operating capability, the final phase of WAAS when pilots will be able to use it to navigate as low as 200 feet above the runway, until the 2013-2019 time frame. As shown in table 13, FAA changed WAAS’ cost and schedule estimates for the third time in May 2004.
According to FAA, the reasons for the May 2004 rebaselining were the system’s inability to achieve full Category I capability and FAA internal and congressional budget cuts. Under the May 2004 baseline, FAA estimates that WAAS development costs will be about $2.0 billion, which is $1.5 billion higher than the 1994 estimated development costs. Also, FAA has not yet met some of its original performance goals, such as providing pilots with the ability to navigate as low as 200 feet above the runway. According to FAA, WAAS cannot easily achieve Category I as a single-frequency system because the error sources caused by solar storms are difficult to correct without the use of a second civil aviation frequency in space, which is the responsibility of the Department of Defense. FAA, realizing the difficulty and risk associated with developing a single-frequency Category I system, decided to wait and leverage the benefits of the White House policy to include the second civil frequency on the GPS satellite network. According to FAA, budget cuts and the decision to wait until the second civil frequency is placed on the GPS constellation have caused it to extend the timeline for reaching WAAS’ full Category I operating capability to between 2013 and 2019.

FAA faced challenges in approving WAAS ground and satellite components for use in the national airspace system, partly because of FAA’s accelerated scheduling, the lack of effective coordination between its aircraft certification office and acquisitions office, and technical challenges that delayed meeting the integrity requirement. FAA’s challenges in approving WAAS began in 1994, when FAA accelerated the implementation of milestones, including moving up the commissioning of WAAS by 3 years. FAA originally planned to commission WAAS in 2000; however, at the urging of government and aviation industry groups in the 1990s, it decided to change WAAS’ commissioning date to 1997.
FAA tried to develop, test, and deploy WAAS within 28 months, even though software development alone was expected to take 24 to 28 months. FAA also set system milestones before completing the research and development required to prove the system’s capability. Despite FAA’s attempts to accelerate implementation, it was not until July 2003, 6 years after the planned 1997 date, that FAA was able to commission WAAS with initial operating capability.

Lack of full involvement by FAA’s aircraft certification members with the rest of the integrated product team contributed to delays in approving WAAS. For example, although an integrated product team, which included representatives from the aircraft certification and acquisition offices, was developing WAAS, it was not until September 1999, when the aircraft certification office became fully involved, that FAA recognized (1) the difficulty of meeting the integrity requirement—that WAAS must alert the pilot in a timely manner when the system should not be used—and (2) that it did not have the technical expertise needed. According to FAA officials, coordination did not occur because the two offices had competing priorities, such as the day-to-day aircraft equipment certification activities not associated with the development of a new ATC system. This situation may have developed because FAA’s aircraft certification organization is more accustomed to being involved after a project is developed, rather than actively participating throughout project development.

The need to meet WAAS’ integrity requirement also hampered FAA’s ability to approve WAAS for safe use in the national airspace system. In December 1999, FAA found that WAAS did not meet the agency’s integrity requirement for precision approaches, and FAA recognized that it did not have the technical expertise required to resolve the issue.
Therefore, in 2000, FAA established a team of satellite navigation experts, referred to as the WAAS Integrity Performance Panel, which included representatives from the MITRE Corporation, Stanford University, Ohio University, and the Jet Propulsion Laboratory. Developing a solution to prove that the WAAS design met the integrity requirement added about 2 years and 4 months to the approval process and contributed to WAAS’ cost growth. All of these challenges contributed to a 6-year delay in WAAS’ commissioning and a $1.5 billion increase in its estimated total development costs through 2028, exclusive of the costs of operating and maintaining geostationary satellites, which were not part of WAAS’ original 1994 baseline. Table 14 shows the major phases and time frames associated with approving WAAS’ ground system.

In contrast to the challenges that it encountered during the approval of the WAAS ground system, FAA did not encounter major challenges with the certification of WAAS aircraft equipment, primarily because FAA had an up-front approval agreement with one of the first applicants, United Parcel Service Aviation Technology, through the creation and approval of a safety plan and a project-specific certification plan. Table 15 shows the major phases and time frames associated with certifying the aircraft equipment of WAAS. Currently, WAAS GPS receivers have been certified and are available for use.

In addition to the individuals named above, other key contributors to this report were Geraldine Beard, Gerald Dillingham, Seth Dykes, David Hooper, Kevin Jackson, Gregg Justice III, Donna Leiss, and Kieran McCarthy.
The Federal Aviation Administration's (FAA) process for ensuring that air traffic control (ATC) systems will operate safely in the national airspace system is an integral part of the agency's multibillion-dollar ATC modernization and safety effort. GAO was asked to review (1) FAA's process for approving ATC systems for safe use in the national airspace system; (2) challenges FAA has faced approving ATC systems and how these challenges affected the cost, schedule, and performance estimates of the systems; and (3) actions FAA has taken to improve its process for approving ATC systems.

FAA has separate processes for approving ground systems and certifying aircraft equipment for safe use in the national airspace system. FAA's approval of ground systems, such as radar systems, is done in accordance with policies and procedures in FAA's Acquisition Management System. Approving ground systems, which are usually developed, owned, and operated by FAA, typically involves FAA's Air Traffic Organization determining whether a vendor is in compliance with contract requirements, followed by a rigorous test-and-evaluation process to ensure that the new system will operate safely in the national airspace system. Aircraft equipment, which is usually developed by private companies, is certified in accordance with Federal Aviation Regulations, with FAA serving as the regulator. If a system has both ground components and aircraft equipment components, then the system must go through both processes before it is approved for safe use in the national airspace system.

FAA has faced challenges approving systems for safe use in the national airspace system that contributed to cost growth, delays, and performance shortfalls in deploying these systems. We identified three specific challenges through our review of five ATC systems and our past work.
These challenges are the need to (1) involve appropriate stakeholders, such as users and technical experts, throughout the approval process; (2) ensure that the FAA offices that have responsibility for approving ground systems and certifying aircraft equipment effectively coordinate their efforts for integrated systems; and (3) accurately estimate the amount of time needed to meet complex technical requirements at the beginning of the design and development phase.

FAA has taken some actions to address two of the three challenges we identified. However, FAA has not taken action to fully involve all stakeholders, such as air traffic controllers and technical experts, throughout the approval process. FAA officials believe that the agency's new Safety Management System will help ensure that the ground system approval and aircraft certification processes are better coordinated. FAA stated that coordination would improve because, as part of the new Safety Management System, the agency plans to realign its organizational structure to create a formal link between the Air Traffic Organization and the Office of Regulation and Certification. FAA expects full implementation of this system to take 3 to 5 years. We are reserving judgment on whether this change will fully address the challenge because of the early state of this effort and FAA's long-standing problems with internal coordination when approving ATC systems. As such, we believe that FAA should, in the interim, develop specific plans that describe how both internal and external coordination will occur on a system-specific basis.
Program evaluations are systematic studies that use research methods to address specific questions about program performance. Evaluation is closely related to performance measurement and reporting. Whereas performance measurement entails the ongoing monitoring and reporting of program progress toward preestablished goals, program evaluation typically assesses the achievement of a program’s objectives and other aspects of performance in the context in which the program operates. In particular, evaluations can be designed to isolate the causal impacts of programs from other external economic or environmental conditions in order to assess a program’s effectiveness. Thus, an evaluation study can provide a valuable supplement to ongoing performance reporting by measuring results that are too difficult or expensive to assess annually, explaining the reasons why performance goals were not met, or assessing whether one approach is more effective than another.

Evaluation can play a key role in program planning, management, and oversight by providing feedback on both program design and execution to program managers, legislative and executive branch policy officials, and the public. In our 2013 survey of federal managers, we found that while only about a third had recent evaluations of their programs or projects, the majority of those who had evaluations reported that they contributed to understanding program performance, sharing what works with others, and making changes to improve program management or performance.

GPRAMA made changes to agency performance management roles, planning and review processes, and reporting intended to ensure that agencies used performance information in decision making and were held accountable for achieving results and improving government performance.
The act required the 24 CFO Act agencies and OMB to establish agency and governmentwide cross-agency priority goals, review progress on those goals quarterly, and report publicly on their progress and strategies to improve performance, as needed, on a governmentwide performance website. It also encouraged a more detailed and comprehensive understanding of those strategies by requiring agencies to identify and coordinate the program activities, organizations, regulations, policies, and other activities—both internal and external—that contribute to each agency priority goal.

GPRAMA, along with related OMB guidance, established and defined performance management responsibilities for agency officials in key management roles: the Chief Operating Officer (COO), the PIO, and a goal leader responsible for coordinating efforts to achieve the cross-agency and agency priority goals. The PIO role was created in 2007 by executive order. GPRAMA established the role in law and specified that it be given to a “senior executive” at each agency who reports directly to the agency’s COO or to its deputy agency head. The PIO is to advise the head of the agency and the COO on goal setting, measurement, and reviewing progress on the agency priority goals.

OMB guidance gave PIOs a central role in promoting agency use of evaluation and other evidence to improve program performance, describing their roles as “. . . driving performance improvement efforts across the organization, by using goal-setting, measurement, analysis, evaluation and other research, data-driven performance reviews on progress, cross-agency collaboration, and personnel performance appraisals aligned with organizational priorities.” The guidance also directed PIOs to “Help components, program office leaders and goal leaders to identify and promote adoption of effective practices to improve outcomes, responsiveness and efficiency, by supporting them in . . . securing evaluations and other research as needed . . . 
and creating a network for learning and knowledge sharing about successful outcome-focused, data-driven performance improvement methods across all levels of the organization and with delivery partners.”

The act also charged the Performance Improvement Council (PIC), the Office of Personnel Management (OPM), and OMB with responsibilities to improve agency performance management capacity. The PIC is an interagency council that was created by executive order, but GPRAMA established it in law and specified that it would be chaired by the OMB Deputy Director for Management and that membership would include the PIOs from all 24 CFO Act agencies, as well as any others. The PIC’s duties include facilitating agencies’ exchange of successful practices and the development of tips and tools to strengthen agency performance management, and assisting OMB in implementing certain GPRAMA requirements. The PIC holds “principals only” and broader meetings open to other agency staff, has formed several working groups that focus on issues relating to implementing GPRAMA and related guidance, and provides a networking forum for staff from different agencies who are working on similar issues. From 2012 through 2014, OMB and the PIC supported several interagency forums on evaluation and evidence that were open to all federal agency staff.

The act charged OPM with (1) identifying key skills and competencies needed by federal employees for developing goals, evaluating programs, and analyzing and using performance information for improving governmental efficiency and effectiveness; (2) incorporating those skills and competencies into relevant position classifications; and (3) working with agencies to incorporate these skills and competencies into agency training. OPM identified core competencies for performance management staff, PIOs, and goal leaders and published them in a January 2012 memorandum.
OPM identified relevant existing position classifications that are related to the competencies and worked with the PIC Capacity Building working group to develop related guidance and tools for agencies. In December 2012, the PIC released a draft Performance Analyst position design, recruitment, and selection toolkit. OPM worked with the Chief Learning Officers Council and the PIC Capacity Building working group to develop a website—the Training and Development Policy wiki—that lists some resources for personnel performance management and implementing GPRAMA. OPM is conducting pilot studies through 2015, in collaboration with the Chief Human Capital Officers Council, on how to build staff capacity in several competencies identified as mission critical across government, including data analysis. OPM officials also noted that they make databases, such as the Federal Employee Viewpoint Survey, available to agencies for their staff to use in program evaluations.

OMB has taken several steps to help agencies develop evaluation capacity by issuing guidance, promoting the exchange of evaluation expertise through the PIC, and working selectively with certain agencies. Since 2009, OMB has issued several memorandums urging agencies to strengthen their use of rigorous impact evaluation and to demonstrate the use of evidence and evaluation in budget submissions, strategic plans, and performance plans. In May 2012, OMB encouraged agencies to designate a high-level official responsible for evaluation who could develop and manage the agency’s research agenda and provide independent input to agency policymakers on resource allocation and to program leaders on program management.
In July 2013, the Directors of OMB, the Domestic Policy Council, and the Office of Science and Technology Policy, and the Chairman of the Council of Economic Advisers jointly issued a memorandum encouraging agencies to adopt an “evidence and innovation agenda”: applying existing evidence on what works, generating new knowledge, and using experimentation and innovation to test new approaches to program delivery. In particular, the memorandum encouraged agencies to exploit existing administrative data to conduct low-cost experiments and to implement outcome-focused grant designs and research clearinghouses to catalyze innovation and learning.

OMB staff established an interagency group to promote sharing of evaluation expertise and organized a series of workshops and interagency collaborations. The workshops addressed issues such as potential procedural barriers to evaluation (e.g., the Paperwork Reduction Act information collection reviews) and promising practices for collecting evidence (e.g., developing a common evidence framework). OMB staff facilitated the collaboration of staff from the Department of Education and the National Science Foundation in developing common standards of evidence for reviewing research proposals, and of another group of agencies in developing a common framework of standards for reviewing completed evaluations.

Studies of organizational or government evaluation capacity have found that it requires analytic expertise and access to credible data, as well as organizational support both within and outside the organization, to ensure that credible, relevant evaluations are produced and used. Our survey found levels of evaluation expertise, support, and use uneven across the government. For example, 7 of the 24 agencies have central leaders responsible for evaluation; in contrast, 7 agencies reported having no recent evaluations for any of their performance goals.
To address our first objective and guide our assessment of agency evaluation capacity, we reviewed the research and policy literature on evaluation capacity, including assessments of agencies in Canada and the United Kingdom, and guidance from the American Evaluation Association (AEA) and the United Nations Evaluation Group. While the details vary, these frameworks commonly emphasize three general categories of elements of organizational, especially national, evaluation capacity:

An enabling environment supporting the use of evidence in management and policymaking: credible information and statistical systems, legislation or policies to institutionalize monitoring and evaluation, public interest in evidence of government performance, and senior leadership commitment to transparency, accountability, and managing for results.

Organizational resources to support the supply and use of credible evaluations: a senior evaluation leader; an evaluation office with clearly defined roles and responsibilities, a stable source of funding, and independence; an evaluation agenda, policies and tools to ensure study credibility and utility; staff expertise and access to experts; and collaboration with program managers and stakeholders.

Evaluation results and use: evaluation quality and credibility; coverage of the agency’s key programs or goals; transparent reporting and public dissemination of reports; recommendation follow-up; and the use of evaluation results in program management, policy making, and budgeting.

To learn about federal agencies’ evaluation capacity, we surveyed the PIOs or their deputies at the 24 CFO Act agencies because of the central role GPRAMA and OMB assigned them to promote agency performance assessment and improvement efforts. Our 2012 survey of PIOs found that they held senior leadership positions and that most of them were involved in the central aspects of agency performance management to a large extent.
Although the PIO position was created in 2007, only one of the initial PIOs continued to hold this position at the time of our 2014 survey. Half had started serving in this position within the past 2 years. Many of our survey respondents held key senior leadership positions in their agencies: 8 PIOs served as the agency’s Chief Financial Officer, and another 4 as Assistant Secretary or Deputy for Administration or Management. Seventeen reported to their agency’s COO, 2 to the agency’s administrator or commissioner, and 3 to the agency’s CFO. In order to report on the policies and practices of offices throughout these agencies, we encouraged the PIOs to consult with others when completing the survey, and several indicated that they did so.

GPRA represents a central component of the enabling environment for U.S. government evaluation capacity by providing, for over 20 years, a statutory framework for performance management and accountability across the government. Accordingly, most PIOs reported that their senior leadership demonstrated a commitment to using evidence in management and policy making through agency guidance (17), internal agency memorandums (12), congressional hearings (9), and speeches (8). Other avenues offered in comments included budget justifications (10) and town hall meetings or videos for agency managers and staff (2).

Moreover, as we have noted previously, GPRA has produced a solid foundation of generating and reporting performance information. Three-quarters of the agencies (18) said that reliable performance data are available on outcomes for all their priority goals; 3 more said data are available for more than half their priority goals. (One of the independent agencies was exempt from developing priority goals.) However, our survey respondents indicated that congressional interest in and requests for program evaluation are not widespread.
Although the federal government has long invested in evaluation, about half the agencies (13) reported having explicit agency-wide authority to use appropriated funds for evaluation. Some pointed to specific legislative authorities, while one PIO commented, “Evaluation is considered inherent to responsible management and programs use appropriated fund for this purpose.” Less than half the agencies (10) indicated that they had congressional mandates to evaluate specific programs. However, one-third (7) indicated that they had neither explicit agency-wide authority nor a program-specific requirement to conduct evaluations. This matters because, in a prior study, agency evaluators told us that not having explicit evaluation authority represented a barrier to the use of program funds for evaluation.

Our survey asked the PIOs about the agency resources and policies committed to obtaining credible, relevant evaluations. Their responses indicated uneven levels of development across the agencies. About half the agencies (11) reported committing resources to obtain evaluations by establishing a central office responsible for evaluating agency programs, operations, or projects. However, less than a third of agencies have an evaluation plan or agency-wide policies or guidance for ensuring study credibility.

About one-third of the agencies (7) reported having assigned responsibility to a single high-level official to oversee their evaluation studies. Although agencies do not need a central evaluation leader in order to conduct credible evaluations, establishing such a position with clear responsibilities sends a message about the importance of evaluation to agency managers. Almost all these individuals (6) were responsible for setting these agencies’ evaluation agendas, but only half (3) were responsible for following up on evaluation recommendations.
Similar numbers of departments and independent agencies reported having such officials, with titles such as Chief Evaluation Officer, Chief Strategic Officer, and Assistant Secretary.

According to AEA guidance, a central evaluation office can promote an agency’s evaluation capacity and provide a stable organizational framework for planning, conducting, or procuring evaluation studies. All the agencies with a single official responsible for overseeing evaluations also reported having a central office responsible for evaluating agency programs, operations, or projects, but only about half the agencies in total (11) had a central office. The central offices could have other responsibilities as well, such as strategic planning. Most of these offices were said to be independent of program offices in making decisions about evaluation design, conduct, and reporting, and to have access to analytic expertise through external experts or contractors, but only about half (6) were reported to have a stable source of funding. Funding generally came through regular appropriations, although two agencies reported having evaluation set-asides—that is, the ability to tap a percentage of operating divisions’ appropriations for evaluation. A larger proportion of independent agencies (5 of 9) than departments (6 of 15) reported having central offices.

As discussed earlier, having analytic expertise is a critical element of evaluation capacity. Most agencies with a central office responsible for evaluations (7 or 8 of 11) reported that the evaluation staff had training and experience to a great or very great extent in each of the following areas: research design and methods, data management and statistical analysis, performance measurement and monitoring, and translating evaluation results into actionable recommendations. Slightly fewer (5) reported that central evaluation office staff had great or very great subject matter expertise.
Three survey respondents also volunteered that their staff had additional expertise, including economic analysis, geographical information systems, and Lean cost reduction analysis. Organizations, whether government agencies or professional societies, develop written policies or standards in order to provide benchmarks for ensuring the quality of their processes and products. AEA has published guides for the individual evaluator’s practice and for developing and implementing U.S. government evaluation programs. About one-quarter of agencies reported having agency-wide written policies or guidance for key issues addressed in those guides: ensuring internal or external evaluator independence and objectivity; ensuring completeness and transparency of evaluation reports; selecting and prioritizing evaluation topics; consulting program staff and subject matter experts; selecting evaluation approaches and methods; timely, public dissemination of evaluation findings and recommendations; or tracking implementation of evaluation findings. A few more agencies, but less than half, reported having policies on ensuring the quality of data collection and analysis, which could apply to research as well as program evaluation. Central evaluation leadership was not a prerequisite for adopting evaluation policies, as only about half of the agencies with agency-wide evaluation policies had a central evaluation office. Agencies provided us with examples of guidance on information quality or scientific integrity as well as on program evaluation specifically. We, along with OMB and AEA, have noted that developing an evaluation agenda is important for ensuring that an agency’s often scarce research and evaluation resources are targeted to the most important issues and can shape budget and policy priorities and management practices. Less than a third of the agencies (7) reported having an agency-wide evaluation plan.
Most such plans were reported to cover multiple years and programs across all major agency components. Senior agency officials and program managers were said to have been consulted in developing all these plans, but few agencies reported consulting congressional stakeholders or researchers. All but 1 of the 7 agencies that had a plan also had a central evaluation office. Because we found in a previous report that stakeholder involvement facilitates the use of evaluation studies, we asked whether stakeholders were consulted in designing and conducting evaluation studies, either formally or informally. Almost all the PIOs reported consulting senior agency officials (20) and program managers (21), and three-quarters consulted researchers, but few (5) reported consulting congressional staff, fewer than consulted local program providers or regulated entities. Agency evaluation offices are located at different organizational levels, which we have previously found affects the scope of their program and analytic responsibilities as well as the range of issues they consider. In a previous study, we found that evaluators in central research and evaluation offices described having a broader and more flexible choice of topics than did evaluators in program offices. In our 2014 survey, half the federal agencies (12) reported that some agency components (such as an administration or bureau) had a central office responsible for evaluation; the number of such components ranged from 1 to 12 within a department or independent agency. These offices generally existed in addition to, rather than instead of, an agency-wide office responsible for evaluation; as a result, 10 agencies had neither type of office.
As might be expected, component offices were less likely than central offices to be considered independent of program offices (6 of 12 agencies reported that all or many of their offices had independence in decision making), but 10 of 12 reported that all or many of these offices had access to external experts, and, like the central offices, few reported having a stable source of funds. About half the agencies with component central offices for evaluation reported that the evaluation staff had training and experience to a great or very great extent in research design and methods, data management and statistical analysis, performance measurement and monitoring, and translating evaluation results into actionable recommendations. These ratings were slightly lower than those for the central office staff’s training. As one might expect, staff were characterized as having great to very great subject matter expertise more often in component offices (9 of 12) than in central evaluation offices (5 of 11). Only a few PIOs (2 to 4) reported that many or all component central offices for evaluation had written evaluation policies or guidance for any of the issues we listed. More often, PIOs (2 to 6) reported not knowing whether they had those specific policies. To assess the results or outcomes of agency evaluation activity, our survey asked the PIOs about the characteristics of the evaluations they produced and their use in decision making. In line with the level of resources they committed to evaluation, the availability and use of program evaluations were uneven across the 24 federal agencies. Even though agencies may not have produced many evaluations, more than a third reported using them to a moderate to very great extent to support several aspects of program management and policy making.
Because agencies use the term “program” in different ways, we chose to assess agencies’ evaluation coverage of key programs and missions by the proportion of performance goals for which evaluations had been completed in the past 5 years or were in progress. The number of performance goals may vary across agencies but, per OMB guidance, they are supposed to be specific, near-term, realistic targets that an agency publicly reports and seeks to influence in order to advance its mission. Only four agencies reported full evaluation coverage of their performance goals. Two-thirds of the agencies reported evaluation coverage of less than half their performance goals, including 7 that reported having evaluations for none of their performance goals. Evaluation coverage was greater in agencies that established centralized authority for evaluation. Three of the 4 agencies with full coverage of their performance goals had both a central evaluation leader and a central evaluation office, while all 7 agencies with no coverage had neither. Interestingly, 2 of the 7 agencies that reported having no evaluations of their performance goals did report having component evaluation offices, so they might have had some evaluations that simply did not address topics considered key to advancing their mission. GAO guidance notes that strong evaluations rely on sufficient and appropriate evidence; document their assumptions, procedures, and modes of analysis; and rule out competing explanations. Thus, transparent reporting of data sources and analyses is critical for ensuring that evaluations are considered credible and trustworthy. About half the PIOs (10) reported that their evaluation reports are transparent to a great or very great extent in describing the data sources used and the analyses that form the basis of their conclusions; another 7 indicated that they did not know or did not respond to the question.
According to the evaluation capacity literature, timely, public dissemination of evaluation findings is important to support government accountability for results to the legislature and the public and to ensure that findings are available to inform decision making. Half the agencies (11) reported publicly disseminating their evaluation results by posting reports to a searchable database on their websites; fewer reported presenting findings at professional conferences (9), sending a notice and link to the report through electronic mailing lists (7), or conducting webinars on findings for the policy community (6). A couple of the PIOs commented that they post some, but not all, reports on the agency website. Of the 11 agencies posting evaluation reports to a website, half reported that they did so within 3 months of completion, although 1 indicated it can take from 6 months to a year. In addition, a few agencies sponsor research clearinghouses that review evaluations of social interventions and provide the results in searchable databases on their websites to help managers and policy makers identify and adopt effective practices. If program evaluations or any form of performance information are to lead to performance improvement, they must be acted on. Seven agencies reported that they had procedures for obtaining management’s response to evaluation recommendations, and 8 had procedures for obtaining follow-up action on those recommendations. In their comments, a few PIOs noted that they had policies for responding to reports or recommendations from GAO or the Inspector General. Another PIO reported that a number of internal briefings are held to ensure management awareness of evaluation findings and that a cross-agency research utilization committee, composed of staff from program, public affairs, and congressional and intergovernmental relations offices, decides on the appropriate level of publicity effort for each report.
Over a third of the agencies (9 to 10) reported that evaluations were used to a moderate or greater extent to support policy changes, budget changes, or internal proposals for change in resource allocation or management, or to award competitive grants (figure 1). Five agencies reported using evaluation to support all these activities to a moderate or greater extent on average. In comments, PIOs described a variety of ways in which evaluation evidence could be used in awarding competitive grants: reviewing the merit of research proposals, evaluating grantee prior performance and outcomes, assessing creditworthiness, and allocating tiered evidence-based funding, which varies the level of funding based on the extent and quality of the evaluation evidence supporting a program’s effectiveness. Agencies with centralized evaluation authority, independence, and expertise reported greater evaluation use in management and policy making, underscoring the importance of that capacity. More than half of the 7 agencies that reported great use of evaluation had a senior evaluation leader or a central evaluation office. Moreover, the agencies whose central offices were independent of the program office, those with access to external experts or contractors, and those whose staff were rated as having great or better expertise in research methods and subject matter reported greater use of evaluation in decision making. GPRAMA was enacted in January 2011, revising existing GPRA provisions and adding new reporting requirements. Around the same time, OMB increased its outreach to agencies to encourage them to conduct program evaluations. We assessed change in agency evaluation capacity in this period through survey questions about when an office started conducting evaluations and whether the frequency of certain activities had changed. While organizational changes in evaluation capacity were few during this period, half the agencies reported greater use of evaluation in decision making since 2010.
Organizational evaluation capacity has grown somewhat since 2010. One-third of the agencies have a high-level official responsible for oversight of the agency’s evaluation studies, and 2 of those 7 positions were created after 2010, both in 2013. In fact, in its May 2012 memorandum, OMB encouraged agencies to designate a high-level official responsible for evaluation who can “Develop and manage the agency’s research agenda; Conduct or oversee rigorous and objective studies; Provide independent input to agency policymakers on resource allocation and to program leaders on program management; Attract and retain talented staff and researchers, including through flexible hiring authorities such as the Intergovernmental Personnel Act; and Refine program performance measures, in collaboration with program managers and the Performance Improvement Officer.” In addition, 4 of 11 agencies with a central office responsible for evaluation reported that this office started conducting evaluations after 2010. One agency added both a central leader and a central office in 2013; 3 others just added a central office. Of the 12 agencies that reported having evaluation offices in their major components, most existed before GPRAMA was enacted, but 5 agencies have established new component evaluation offices since then. Presumably in response to greater administration attention to program evaluation, half the agencies reported that efforts to improve their capacity to conduct credible evaluations had increased at least somewhat since GPRAMA was enacted in January 2011. About half the PIOs reported increases in staff participation in evaluation conferences and knowledge-sharing forums, hiring staff with research and analysis expertise, training staff in research and evaluation skills, and consultation with external research and evaluation specialists. Nine agencies reported increases in all these activities.
Most of the remaining agencies reported no change in training or consultation with specialists (4 to 5), or decreases in hiring or participation in conferences (4 to 5), in this period. These decreases may reflect federal budget constraints and the general decline in federal hiring in recent years. In line with the increases reported in capacity-building activities and organizational resources, about half the agencies reported that their use of evaluation as supportive evidence had increased at least somewhat since 2010 (only a few reported great increases). About half the PIOs reported that the use of evaluation had increased for implementing changes in program management or performance, designing or supporting program reforms, sharing what works or other lessons learned with others, allocating resources within a program, or supporting program budget requests. The rest reported that their use of evaluation evidence remained about the same in this period, with none reporting a decline in the use of evaluation as evidence. Eight agencies reported increased use in all these activities, and an equal number reported that their use remained the same on all. Since, in a separate question, 5 agencies either provided no opinion or reported little or no current use of evaluation evidence to support budget, policy, or program management, we conclude that this group has continued to make little or no use of evaluations since 2010. Our survey asked the PIOs how useful various activities or resources were for improving their agency’s capacity to conduct credible evaluations. Several PIOs did not answer these questions, in part because they were not familiar with such activities. Many of those who did respond found that hiring, professional networking, consulting with experts, and training, as well as some of the GPRAMA accountability provisions, were very useful for improving capacity to conduct evaluations.
Our survey also asked about the usefulness of various activities or resources for improving an agency’s capacity to use evaluations in decision making. Again, several agencies did not respond, but most of those that did reported that engaging program staff, conducting quarterly progress reviews, and holding goal leaders accountable for progress on agency priority goals were very useful in improving agency capacity to make use of evaluation information. Some other GPRAMA-related activities were not found as useful for enhancing evaluation use. In addition, agencies had not taken full advantage of available technology to disseminate evaluation results, thus potentially limiting their influence on decision making. Our survey asked the PIOs about the usefulness of 14 different actions or resources for improving their capacity to conduct evaluations, drawn from the literature and some GPRAMA provisions related to building agency capacity. About a third of the respondents indicated either that they had no opinion or did not respond to these questions, similar to the number not responding or reporting no change in the use of capacity-building activities since 2010. About two-thirds of agencies (15) reported hiring staff with research and analysis expertise, and 11 (nearly half of the PIOs) thought it was very useful for improving agency capacity to conduct credible evaluations. Almost half the agencies used special hiring authorities, such as the Presidential Management Fellows program, the Intergovernmental Personnel Act, or the American Association for the Advancement of Science (AAAS) fellows program, and generally found them useful for improving agency evaluation capacity. Other agency-specific means of obtaining staff were mentioned in comments, such as an Evaluation Fellowship Program at the Centers for Disease Control and Prevention.
Figure 4 summarizes agencies’ reports on the usefulness of the full range of activities and resources posed for building capacity to conduct evaluations. The PIO survey respondents also gave high marks to professional networking for building staff capacity. Two-thirds of the PIOs reported that staff participation in professional conferences or evaluation interest groups for knowledge sharing was useful, with 9 PIOs citing these activities as very useful in improving agency capacity to conduct credible evaluations. Examples mentioned included the Association for Public Policy Analysis and Management research conference and an Evaluation Day conference sponsored by the U.S. Department of Health and Human Services (HHS) Office of the Assistant Secretary for Planning and Evaluation. The exchange of evaluation tips and leading practices through the PIC or other networks was considered moderately useful for capacity building by a third of the PIOs. PIOs provided examples of information-sharing networks besides the PIC, such as OMB’s Evaluation Working Group, which holds governmentwide meetings on government performance topics; Federal Evaluators, an informal association of evaluation officials across government; Washington Evaluators, a local affiliate of the American Evaluation Association; and the National Academy of Public Administration. Some agencies, such as HHS and the U.S. Department of Labor, have established informal networks to share information internally. Also mentioned were communities of practice that engage both public and private sectors but are focused on a specific domain, for example, the Organisation for Economic Co-operation and Development’s EvalNet, which focuses on international development, and the Environmental Evaluators Network. Consultation with external experts for conceptual or technical support was rated as very useful for improving the capacity to conduct evaluations by most of those using it (9 of 15).
However, this did not extend to other forms of external consultation. Seven agencies reported having an annual or multi-year evaluation agenda, and 3 of them reported consulting with congressional or other external stakeholders on their plan. These 3 found consultation useful, to varying degrees, for building their agency’s capacity to conduct evaluations. Training in specific skills and knowledge (for example, types of evidence, assessing evidence quality, report writing, and communication) is frequently cited in the evaluation literature as a way to build organizational or individual evaluation capacity. Besides asking about participation in professional conferences and networks, our survey asked about the usefulness of training in evaluation skills, for example, describing program logic models, choosing appropriate evaluation designs, and collecting and analyzing data. Half the agencies reported engaging in internal or external training, whether delivered in a classroom, online, or in webinars. Half the agencies using internal training reported that it was very useful for improving capacity to conduct credible evaluations. PIOs who reported on agency experience with external evaluation training were less enthusiastic but still considered the training useful for developing evaluation skills overall. OMB, in addition to encouraging agencies to conduct evaluations through guidance, sponsored a number of governmentwide open forums on performance issues. The roughly half of the PIOs who responded expressed a range of opinions on the usefulness of the OMB forums on the Paperwork Reduction Act, procurement, data sharing, and related rules and procedures for helping to improve agency capacity to conduct credible evaluations. Nevertheless, 7 or more of the agencies identified training or guidance in several skills as still needed to a great or very great extent to improve their agencies’ capacity to conduct credible evaluations.
These skills included translating evaluation results into actionable recommendations (a prerequisite for getting evaluation results used), data management and statistical analysis, and performance measurement and monitoring. Few reported that more training in research design and methods or subject matter expertise was greatly needed. Our survey asked what other types of training or guidance might be needed to improve agency capacity. A few PIOs commented that training is needed in preparing statements of work for evaluation contracts, data analytics and visualization of information, and learning how to use evidence and evaluation information effectively. Our survey asked about several activities and resources related to GPRAMA provisions linked to creating an enabling environment for agency evaluation capacity. Majorities of PIOs stated that conducting quarterly progress reviews on their priority goals, and holding goal leaders accountable for progress on those goals, were moderately to very useful in improving their agency’s ability to conduct credible evaluations. In response to GPRAMA provisions to improve agency performance management capacity, the PIC and OPM developed a Performance Analyst position design, recruitment, and selection toolkit to assist agencies’ hiring. Seven PIOs reported that their agencies used the toolkit, but 3 did not find it useful for building agency evaluation capacity. About a third of the PIOs reported that their agencies made an effort to incorporate the core competencies that OPM identified for performance management staff into internal agency training. However, 2 of the 7 agencies did not find the effort useful for improving staff evaluation capacity. The competencies primarily address general management skills and define planning and evaluating fairly simply, as setting and monitoring progress on performance goals, so they do not address some of the specific analytic skills PIOs reported were still needed for conducting evaluations.
GAO previously recommended that OPM, in coordination with the PIC and the Chief Learning Officer Council, identify performance management competency areas needing improvement and work with agencies to share information about available agency training in those areas. OPM agreed with those recommendations and has embarked on a 2-year pilot program to test how to build capacity in several mission-critical competencies identified across government, such as strategic thinking, problem solving, and data analysis, to ensure that both program staff and management can use evaluation and analysis of program performance. OMB senior officials also engaged with agency officials on the Performance Improvement Council to collaborate on improving program performance. Eight of the 14 agencies that responded considered the exchange of evaluation tips and leading practices through the PIC or other networks at least moderately useful for improving their evaluation capacity. For example, the PIC developed a guide to best practices for setting milestones and a guide and evaluation tool to help agencies set their agency priority goals. Previously, we found that experienced evaluators emphasized three basic strategies to facilitate evaluation’s influence on program management and policy: demonstrate leadership support of evaluation for accountability and program improvement, build a strong body of evidence, and engage stakeholders throughout the evaluation process. Accordingly, our survey asked the PIOs how useful various activities or resources were for improving their agency’s capacity to use evaluations in decision making. Several did not answer these questions because they did not use the particular activity or resource or had no opinion.
The PIOs who responded mainly cited engaging program staff, conducting quarterly progress reviews, and holding goal leaders accountable for progress on agency priority goals as very useful for improving agency capacity to make use of evaluation information in decision making. Over two-thirds of the PIOs responded that involving program staff in planning and conducting evaluation studies was useful for improving agency use of evaluation; 11 saw it as very useful. Engaging staff throughout the process can gain their buy-in on the relevance and credibility of evaluation findings; providing program staff with interim results or lessons learned from early program implementation can help ensure timely data for program decisions. Majorities of PIOs affirmed that other forms of program staff engagement were also very useful: providing program staff and grantees with technical assistance on evaluation and its use, and agency peer-to-peer presentations of evaluation studies to discuss methods and findings. As mentioned earlier, majorities of PIOs viewed the new GPRAMA activities of conducting quarterly reviews and holding goal leaders accountable as moderately to very useful for improving agency capacity to conduct credible evaluations. Majorities of the responding PIOs also viewed those same activities as moderately to very useful for improving agency capacity to use evaluations in decision making. However, another GPRAMA provision, coordinating with OMB and other agencies to review progress on cross-agency priority (CAP) goals, met with a range of opinions. Equal numbers reported that it was moderately to very useful, somewhat useful, or not useful at all for improving an agency’s use of evaluation. Because the 14 CAP goals for this period cover 5 general management improvement areas and 9 cross-cutting but specific policy areas, some of the 24 PIOs may have been more involved than others in those reviews.
Other activities potentially useful for improving the capacity to use information from evaluations rely on leveraging resources. A third of the PIOs reported that exchanging leading practices, tips, and tools for using evidence to improve program or agency performance through the PIC or other networks was moderately or very useful in improving agency capacity to use evaluation results in decision making. Many of the same networks named as helping to improve capacity to conduct credible evaluations were also named with regard to improving capacity to use evaluations in decision making. These included the Environmental Evaluators Network, Federal Evaluators, the National Academy of Public Administration, and the OMB Evaluation Working Group. Seven agencies reported having an agency-wide annual or multi-year evaluation plan or agenda of planned studies, and 6 PIOs reported consulting with congressional and other external stakeholders on that plan. However, these consultations were not viewed as useful for improving their agency’s capacity to use evaluations in decision making. Even so, forgoing consultation may mean missing an opportunity to ensure that evaluations will address the questions of greatest interest to congressional decision makers and will be perceived as credible support for proposed policy or budget changes. In previous work, we found that dialogue between congressional committees and executive branch agencies was necessary to achieve a mutual understanding that would allow agencies to provide useful information for oversight. Previously, we found that a key strategy for promoting the use of evaluation findings was to make them digestible and usable and to proactively disseminate them. Our survey posed various options that agencies could take to publicly disseminate their evaluation findings.
Half the respondents reported posting evaluation reports in a searchable database on their websites, and half of them viewed this practice as moderately to very useful for improving their agency’s capacity to use evaluations in decision making. However, 3 did not find the practice useful. Electronic mailing lists are more proactive than posting a report to a website and permit tailoring the message to different audiences. A third of all respondents disseminated evaluation reports by electronic mailing lists, which most saw as somewhat to very useful for facilitating the use of evaluations in decision making. Tailoring messages for particular audiences (for example, federal policy makers, state and local agencies, and local program affiliates) may also increase the applicability and use of evaluation findings by these other audiences. GPRAMA requires OMB to provide quarterly updates on agency and cross-agency priority goals on a central, government-wide website, Performance.gov, to make federal program and performance information more accessible to the Congress and the public. In our survey, PIOs expressed mixed views about the utility of this website for improving agency capacity to use evaluations in decision making. Almost half the agencies found the practice somewhat to moderately useful for improving the agencies’ use of evaluation findings in decision making, but one-fourth of the agencies did not. In 2013, GAO reviewed Performance.gov and recommended that OMB work with the General Services Administration and the PIC to clarify specific ways that intended audiences could use the website and specify changes to support these uses. OMB staff agreed with our recommendations, and Performance.gov continues to evolve. Currently, each agency has a home page that provides links to the agency’s strategic plan, annual performance plans and reports, and other progress reviews.
Data.gov is a federal government website that provides descriptions of datasets generated or held by the federal government in order to increase the ability of the public to locate, download, and use those datasets. A third of the PIOs reported that sharing databases in public repositories such as Data.gov for researchers and the public to use helped improve agency capacity to use evaluations in decision making, but 1 thought it was not useful. However, a third of the PIOs stated that their agency did not use this vehicle. Vehicles such as Data.gov and Performance.gov are primarily intended to improve government transparency and expand information’s use by the Congress and the public, but they can also help support agency requests for budget and policy changes to improve government performance. Although OMB and several agencies have taken steps since 2010 to expand federal evaluation efforts, most agencies demonstrate rather modest evaluation capacity. Those with centralized evaluation authority reported greater evaluation coverage and use in decision making, but additional effort will be required to expand agencies’ evaluation capacity beyond those that already possess evaluation expertise. In addition to hiring and training staff and consulting experts, promoting information sharing through informal and formal evaluation professionals’ networks offers promise for building agencies’ capacity to conduct evaluations in a constrained budget environment. Engaging program staff, regularly reviewing progress on agency priority goals, and holding goal leaders accountable can help build agency use of evaluation in decision making, as our survey results show. While timely, public dissemination of performance and evaluation results may not directly influence agency decision making, it is important for supporting government transparency and accountability for results to the Congress and the public.
Directly engaging intended users (for example, involving program staff in planning and conducting evaluations and holding regular progress reviews) was strongly associated with increasing evaluation use in internal agency decision making. In contrast, few agencies reported consulting congressional and other external stakeholders in conducting their evaluation studies or developing their evaluation agendas. However, some program reforms require program partners and legislators to take action. Engaging congressional and other stakeholders in evaluation planning might increase their interest in evaluation as well as their adoption of evaluation findings and recommendations. In the absence of explicit authority or congressional request, agencies may be reluctant to spend increasingly scarce funds on evaluation studies that are perceived as resource intensive. A stable source of evaluation funding could help maintain a viable evaluation program that produced a steady stream of information to guide program management and policy making. Even so, only a quarter of the agencies in our survey reported that their evaluation offices had a stable source of funding. Congressional appropriators could direct the use of program or agency funds for evaluating federal programs and policies. 
As we have noted before, congressional committees can also communicate their interest in evaluation in a variety of ways to encourage agencies to produce credible, relevant studies that inform decision making: consult with agencies on proposed revisions to their strategic plans and priority goals, as GPRAMA requires them to do every 2 years, to ensure that agency missions are focused, goals are specific and results-oriented, and strategies and funding expectations are appropriate and reasonable; request agency evaluations to address specific questions about the implementation and results of major program or policy reforms, in time to consider their results in program reauthorization; and review agencies’ annual evaluation plans or agendas to ensure that they address issues that will inform budgeting, reauthorization, and ongoing program management. We requested comments on a draft of this report from the Director of the Office of Management and Budget, whose staff provided technical comments that we incorporated as appropriate, and from the Director of the Office of Personnel Management, who provided no comments. We are sending copies of this report to other interested congressional committees, the Director of the Office of Management and Budget, and the Director of the Office of Personnel Management. In addition, the report will be available on our web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2700 or by e-mail at kingsburyn@gao.gov. Contacts for our Office of Congressional Relations and Office of Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III.
We administered a web-based questionnaire from May 2, 2014, to June 19, 2014, on federal agency evaluation capacity resources and activities to the Performance Improvement Officers (PIO) or their deputies at the 24 agencies covered by the Chief Financial Officers Act of 1990 (CFO act). We received responses from all 24 agencies (listed at the end of this appendix). The survey gave us information about agencies’ evaluation resources, policies, and activities, and the activities and resources they have found useful in building their evaluation capacity. (The survey questions and summarized results are in appendix II.) We sent respondents an e-mail invitation to complete the survey on a secure GAO web server. Each e-mail contained a unique username and password. During the data collection period, we sent follow-up e-mails and, if necessary, called nonresponding agencies on the telephone. Because this was not a sample survey, it has no sampling errors. In practice, however, any survey may introduce nonsampling errors that stem from differences in how a particular question is interpreted, the availability of sources of information, or how the survey data are analyzed. All can introduce unwanted variability into the survey results. We took a number of steps to minimize these nonsampling errors. A social science survey specialist designed the questionnaire, in collaboration with our staff who had subject matter expertise. In addition, we pretested the questionnaire in person with PIOs at three federal agencies to make sure that the questions were relevant, clearly stated, easy to comprehend, and unbiased. We also affirmed that data and information the PIOs would need to answer the survey were readily obtainable and that answering the questionnaire did not place an undue burden on them. Additionally, a senior methodologist within our agency independently reviewed a draft of the questionnaire before we administered it.
We made appropriate revisions to its contents and format after the pretests and independent review. When we analyzed data from the completed survey, an independent analyst reviewed all computer programs used in our analysis. Since this was a web-based survey, respondents entered their answers directly into the electronic questionnaire; thus, we did not key the data into a database, avoiding data entry errors. Additionally, in reviewing the agencies’ answers, we confirmed that the PIOs had correctly bypassed inapplicable questions (such as questions we expected them to skip). We concluded from our review that the survey data were sufficiently reliable for the purposes of this report.

The 24 agencies subject to the CFO Act are:
Agency for International Development
Department of Agriculture
Department of Commerce
Department of Defense
Department of Education
Department of Energy
Department of Health and Human Services
Department of Homeland Security
Department of Housing and Urban Development
Department of the Interior
Department of Justice
Department of Labor
Department of State
Department of Transportation
Department of the Treasury
Department of Veterans Affairs
Environmental Protection Agency
General Services Administration
National Aeronautics and Space Administration
National Science Foundation
Nuclear Regulatory Commission
Office of Personnel Management
Small Business Administration
Social Security Administration

In addition to the contact named above, Stephanie Shipman (Assistant Director), Thomas Beall, Valerie Caracelli, Timothy Carr, Joanna Chan, Stuart Kaufman, and Penny Pickett made key contributions to this report.

Administration for Children and Families. Evaluation Policy. Washington, D.C.: Department of Health and Human Services, November 2012. Accessed September 24, 2014. http://www.acf.hhs.gov/programs/opre/resource/acf-evaluation-policy.
America Achieves. “Investing in What Works Index: Better Results for Young People, Their Families, and Communities.” Results for America, Washington, D.C., May 2014. Accessed September 11, 2014. http://www.InvestInWhatWorks.org/policy-hub.
American Evaluation Association. An Evaluation Roadmap for a More Effective Government. N.p.: Revised October 2013. Accessed September 22, 2014. http://www.eval.org/d/do/472.
Auditor General of Canada. 2013 Spring Report of the Auditor General of Canada. Ch. 1, “Status Report on Evaluating the Effectiveness of Programs.” Ottawa: 2013. Accessed September 15, 2014. http://www.oag-bvg.gc.ca/internet/English/parl_oag_201304_01_e_38186.html.
Bourgeois, Isabelle, and J. Bradley Cousins. “Understanding Dimensions of Organizational Evaluation Capacity,” American Journal of Evaluation, 34:3 (2013): 299–319.
Chapel, Thomas. “Building and Sustaining Evaluation Capacity in a Diverse Federal Agency.” Paper presented at Federal Evaluators Conference, Washington, D.C., November 1, 2012. Accessed September 11, 2014. http://www.fedeval.net/presen.htm.
Clapp-Wincek, Cindy. “The Complexity of Building Capacity at USAID.” Paper presented at Federal Evaluators Conference, Washington, D.C., November 1, 2012. Accessed September 11, 2014. http://www.fedeval.net/presen.htm.
Cousins, J. Bradley, Swee C. Goh, Catherine J. Elliott, and Isabelle Bourgeois. “Framing the Capacity to Do and Use Evaluation,” New Directions for Evaluation, 133 (Spring 2014): 7–24.
Dawes, Katherine. “Program Evaluation at EPA.” Paper presented at Federal Evaluators Conference, Washington, D.C., November 1, 2012. Accessed September 11, 2014. http://www.fedeval.net/presen.htm.
Goldman, Ian. “Developing a National Evaluation System in South Africa,” eVALUatiOn Matters: A Quarterly Knowledge Publication of the African Development Bank, 2(3) (September 2013): 42–49.
Labin, Susan N., Jennifer L. Duffy, Duncan C. Meyers, Abraham Wandersman, and Catherine A. Lesesne. “A Research Synthesis of the Evaluation Capacity Building Literature,” American Journal of Evaluation, 33:307 (2012).
National Audit Office. Cross-Government: Evaluation in Government. Report by the National Audit Office. London: December 2013. Accessed September 24, 2014. www.nao.org.uk.
Partnership for Public Service and Grant Thornton. A Critical Role at a Critical Time: A Survey of Performance Improvement Officers. Washington, D.C.: April 2011. Accessed September 16, 2014. http://ourpublicservice.org/OPS/publications/viewcontentdetails.php?id=160.
Partnership for Public Service and Grant Thornton. Taking Measure: Moving from Process to Practice in Performance Management. Washington, D.C.: September 2013. Accessed September 16, 2014. http://ourpublicservice.org/OPS/publications/viewcontentdetails.php?id=232.
Partnership for Public Service and IBM Center for the Business of Government. From Data to Decisions III: Lessons from Early Analytics Programs. Washington, D.C.: November 2013. Accessed September 16, 2014. http://ourpublicservice.org/OPS/publications/viewcontentdetails.php?id=233.
Pew Charitable Trusts and MacArthur Foundation. States’ Use of Cost-Benefit Analysis: Improving Results for Taxpayers. Philadelphia: Pew-MacArthur Results First Initiative, July 29, 2013. Accessed October 31, 2014. http://www.pewtrusts.org/en/research-and-analysis/reports/2013/07/29/states-use-of-costbenefit-analysis.
Rist, Ray C., Marie-Helene Boily, and Frederic Martin. Influencing Change: Building Evaluation Capacity to Strengthen Governance. Washington, D.C.: The World Bank, 2011. Accessed September 24, 2014. https://openknowledge.worldbank.org/.
Segone, Marco, Caroline Heider, Riitta Oksanen, Soma de Silva, and Belen Sanz. “Towards a Shared Framework for National Evaluation Capacity Development,” eVALUatiOn Matters: A Quarterly Knowledge Publication of the African Development Bank, 2(3) (September 2013): 7–25.
Segone, Marco, and Jim Rugh (eds.). Evaluation and Civil Society: Stakeholders’ Perspectives on National Evaluation Capacity Development. New York: UNICEF, EvalPartners, IOCE, 2013. Accessed September 24, 2014. http://www.mymande.org/Evaluation_and_Civil_Society.
Treasury Board of Canada. 2011 Annual Report on the Health of the Evaluation Function. Ottawa: 2012. Accessed September 24, 2014. http://www.tbs-sct.gc.ca/report/orp/2012/arhef-raefetb-eng.asp.
United Nations Evaluation Group. National Evaluation Capacity Development: Practical Tips on How to Strengthen National Evaluation Systems. A report for the United Nations Evaluation Group Task Force on National Evaluation Capacity Development. New York: 2012. Accessed September 24, 2014. www.uneval.org/document/detail/1205.
U.S. Agency for International Development. Evaluation: Learning from Experience. USAID Evaluation Policy. Washington, D.C.: January 2011. Accessed September 25, 2014. http://www.usaid.gov/evaluation.
U.S. Department of Health and Human Services, Centers for Disease Control and Prevention. Improving the Use of Program Evaluation for Maximum Health Impact: Guidelines and Recommendations. Atlanta: November 2012. Accessed September 24, 2014. http://www.cdc.gov/eval.
U.S. Department of Labor. U.S. Department of Labor Evaluation Policy. Washington, D.C.: November 2013. Accessed September 25, 2014. http://www.dol.gov/asp/evaluation/EvaluationPolicy.htm.
U.S. Department of State. Department of State Program Evaluation Policy. Washington, D.C.: February 23, 2012. Accessed September 24, 2014. http://www.state.gov/s/d/rm/rls/evaluation/.

Managing for Results: Agencies’ Trends in the Use of Performance Information to Make Decisions. GAO-14-747. Washington, D.C.: September 26, 2014.
Managing for Results: Enhanced Goal Leader Accountability and Collaboration Could Further Improve Agency Performance. GAO-14-639. Washington, D.C.: July 22, 2014.
Managing for Results: OMB Should Strengthen Reviews of Cross-Agency Goals. GAO-14-526. Washington, D.C.: June 10, 2014.
Education Research: Further Improvements Needed to Ensure Relevance and Assess Dissemination Efforts. GAO-14-8. Washington, D.C.: December 5, 2013.
Managing for Results: Executive Branch Should More Fully Implement the GPRA Modernization Act to Address Pressing Governance Challenges. GAO-13-518. Washington, D.C.: June 26, 2013.
Program Evaluation: Strategies to Facilitate Agencies’ Use of Evaluation in Program Management and Policy Making. GAO-13-570. Washington, D.C.: June 26, 2013.
Managing for Results: Leading Practices Should Guide the Continued Development of Performance.gov. GAO-13-517. Washington, D.C.: June 6, 2013.
Managing for Results: Agencies Have Elevated Performance Management Leadership Roles, but Additional Training Is Needed. GAO-13-356. Washington, D.C.: April 16, 2013.
Managing for Results: Data-Driven Performance Reviews Show Promise but Agencies Should Explore How to Involve Other Relevant Agencies. GAO-13-228. Washington, D.C.: February 27, 2013.
Managing for Results: A Guide for Using the GPRA Modernization Act to Help Inform Congressional Decision Making. GAO-12-621SP. Washington, D.C.: June 15, 2012.
President’s Emergency Plan for AIDS Relief: Agencies Can Enhance Evaluation Quality, Planning and Dissemination. GAO-12-673. Washington, D.C.: May 31, 2012.
Designing Evaluations: 2012 Revision. GAO-12-208G. Washington, D.C.: January 2012.
Employment and Training Administration: More Actions Needed to Improve Transparency and Accountability of Its Research Program. GAO-11-285. Washington, D.C.: March 15, 2011.
Program Evaluation: Experienced Agencies Follow a Similar Model for Prioritizing Research. GAO-11-176. Washington, D.C.: January 14, 2011.
Employment and Training Administration: Increased Authority and Accountability Could Improve Research Program. GAO-10-243. Washington, D.C.: January 29, 2010.
Program Evaluation: A Variety of Rigorous Methods Can Help Identify Effective Interventions. GAO-10-30. Washington, D.C.: November 23, 2009.
Program Evaluation: An Evaluation Culture and Collaborative Partnerships Help Build Agency Capacity. GAO-03-454. Washington, D.C.: May 2, 2003.
Program Evaluation: Improving the Flow of Information to Congress. GAO/PEMD-95-1. Washington, D.C.: January 30, 1995.
To improve federal government performance and accountability, GPRAMA aims to ensure that agencies use performance information in decision making and holds them accountable for achieving results. The Office of Management and Budget (OMB) has encouraged agencies to strengthen their program evaluations—systematic studies of program performance—and expand their use in management and policy making. This report is one of a series in which GAO, as required by GPRAMA, examines the act's implementation. GAO examined federal agencies' capacity to conduct and use program evaluations and the activities and resources, including some related to GPRAMA, that agencies found useful for building that capacity. GAO reviewed the literature to identify the key components and measures of evaluation capacity. GAO surveyed the PIOs of the 24 federal agencies subject to the Chief Financial Officers Act regarding their organizations' characteristics, expertise, and policies, and their observations on the usefulness of various resources and activities for building evaluation capacity. All 24 responded. GAO also interviewed OMB and Office of Personnel Management (OPM) staff about their capacity-building efforts. In a governmentwide survey of agency Performance Improvement Officers (PIO), GAO found uneven levels of evaluation expertise, organizational support within and outside the agency, and use across the government. The Government Performance and Results Act of 1993 (GPRA) is a key component of the enabling environment for federal evaluation capacity, having established a solid foundation of agency performance reporting and leadership commitment to using evidence in decision making. However, only half the agencies reported congressional interest in or requests for program evaluation studies.
Eleven of the 24 agencies reported committing resources to obtain evaluations by establishing a central office responsible for evaluation of agency programs, operations, or projects, although only half these offices were reported to have a stable source of funding. Seven agencies reported having a high-level official responsible for oversight of evaluation. A quarter of agencies reported having agency-wide policies or guidance concerning key issues in study design, evaluator independence and objectivity, report transparency, or implementing findings. Two-thirds of the agencies reported evaluation coverage of less than half their performance goals. Over a third reported using evaluations to a moderate or greater extent as evidence in support of budget or policy changes or program management. Those agencies with centralized evaluation authority reported greater evaluation coverage and use of the results in decision making. Since the GPRA Modernization Act of 2010 (GPRAMA) was passed, 2 to 4 agencies established a central evaluation office or leader. Half the agencies reported increased efforts to improve their evaluation capacity through hiring, training, conference participation, and consulting experts, but 4 to 5 reported declines in hiring and conference participation. About half reported increased use of evaluations as supporting evidence for management and policy decisions. About a quarter of PIOs were not familiar with their agencies' various capacity-building activities, but many of those that did respond rated hiring, professional networking, consulting with experts, reviewing progress on priority goals, and holding goal leaders accountable under GPRAMA most useful for building capacity to conduct evaluations. They rated engaging program staff in evaluation design, conduct, and reporting, and the GPRAMA priority goal review and accountability provisions most useful for building capacity to use evaluation.
Based on its survey results, GAO observes that promoting information sharing in professional networks and engaging program managers and staff in evaluation studies and priority goal reviews offer promise for building capacity in a constrained budget environment. Engaging congressional and other stakeholders in evaluation planning might increase their interest in and adoption of evaluation recommendations. Congressional committees can communicate their interest in evaluation by consulting with agencies on their strategic plans and priority goals, reviewing agency annual evaluation plans to ensure they address issues that will inform congressional decision making, and requesting evaluations to address specific questions of interest. GAO is not making recommendations. OMB staff provided technical comments on a draft of this report that were incorporated as appropriate. OPM provided no comments.
The Clean Water Act prohibits the discharge of oil and hazardous substances into or upon U.S. navigable waters or adjoining shorelines and directs the President to issue regulations establishing procedures, methods, and equipment requirements to prevent such discharges. The President subsequently delegated this responsibility to EPA. In 1973, to meet this responsibility as it relates to oil discharges, EPA issued the Oil Pollution Prevention Regulation—also referred to as the SPCC rule—which outlined the actions that facilities storing more than certain quantities of oil must take to prevent, prepare for, and respond to oil spills before they reach U.S. navigable waters or adjoining shorelines. In 1974, the SPCC rule took effect and EPA initiated the SPCC program. Under this program, regulated facilities must implement procedures and methods and have certain equipment to prevent oil discharges from reaching U.S. navigable waters and adjoining shorelines. The SPCC rule requires facilities to prepare oil spill prevention plans that spell out (1) design, operation, and maintenance procedures to prevent spills from occurring and (2) countermeasures to control, contain, clean up, and mitigate the effects of an oil spill. In 1994, in response to directives in the Oil Pollution Act of 1990—which amended the Clean Water Act—EPA established specific requirements for a subclass of SPCC facilities, including a requirement that these facilities develop and implement Facility Response Plans (FRP). According to EPA, there are about 4,100 FRP facilities nationwide—less than 1 percent of the estimated SPCC-regulated facilities. FRP facilities are those that, because of their location, could reasonably be expected to cause substantial harm to the environment by discharging oil into or on U.S. navigable waters, adjoining shorelines, or the exclusive economic zone.
Under EPA regulations, facilities are considered FRP facilities if they have (1) 42,000 gallons or more of oil storage capacity and transfer oil over water or (2) 1 million gallons or more of oil storage capacity and meet other specific criteria, such as posing a risk of injury to sensitive environments or of shutting down a public drinking water intake. Owing to the higher risk they pose, FRP facilities are subject to more stringent rules and regulations than other SPCC facilities, primarily focusing on response preparedness. For example, FRP facilities must submit, for EPA’s review and possible approval, plans that identify the individual having full authority to implement removal actions at the facility and the resources available to remove a discharge, and that describe the training, testing, and response actions of persons at the facility, among other things. Even though FRP facilities are subject to more stringent requirements than other SPCC facilities, they are still required to have SPCC plans and are also inspected through the SPCC program. In response to some major oil spills, our 1989 report, and similar findings by an EPA task force, the agency proposed revisions to the SPCC rule in 1991, 1993, and 1997 and finalized these amendments in 2002. These amendments made over 30 changes that EPA considers major to the SPCC rule, such as including new subparts outlining the requirements for various classes of oil; revising the applicability of the regulation; amending the requirements for completing SPCC plans; and strengthening tank integrity testing requirements, among other changes. The final rule also contained a number of provisions designed to decrease regulatory burden while preserving environmental protection.
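The two-part FRP applicability test described above amounts to a simple decision rule. The sketch below is purely illustrative, not EPA's screening logic: the function and parameter names are ours, and the regulation's "other specific criteria" are condensed into a single flag.

```python
def is_frp_facility(storage_gallons: int,
                    transfers_oil_over_water: bool,
                    meets_substantial_harm_criteria: bool) -> bool:
    """Illustrative sketch of the FRP applicability rule described above.

    A facility is treated as an FRP facility if it has:
      (1) 42,000 gallons or more of oil storage capacity and transfers
          oil over water, or
      (2) 1 million gallons or more of capacity and meets other specific
          criteria (e.g., risk of injury to sensitive environments or of
          shutting down a public drinking water intake).
    """
    if storage_gallons >= 42_000 and transfers_oil_over_water:
        return True
    if storage_gallons >= 1_000_000 and meets_substantial_harm_criteria:
        return True
    return False
```

Note that the two prongs are independent: a 50,000-gallon facility that transfers oil over water is covered, while the same facility transferring only over land would need to reach the 1-million-gallon threshold and meet the substantial-harm criteria.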
Since then, EPA has taken further steps to reduce regulatory burden. In 2006, it made several major changes to the SPCC rule, including an amendment that allows certain smaller facilities storing up to 10,000 gallons of oil, identified as “qualified facilities,” to prepare self-certified SPCC plans. In October 2007, it proposed further changes to streamline the SPCC requirements to, among other things, reduce regulatory burden on industries such as farms and oil production facilities. The agency plans to make these changes final in late 2008. Although EPA amended the SPCC rule in 2002 and 2006, the new requirements have not taken effect because EPA extended the date by which facilities were to come into compliance with these revised requirements in 2003, 2004, 2006, and 2007. That is, owners and operators of facilities operating on or before August 16, 2002, must continue to maintain their SPCC plans based on current SPCC requirements and then must amend them to ensure compliance with the amended requirements by July 1, 2009. Facilities beginning operations after August 16, 2002, have until July 1, 2009, to prepare and implement a plan. EPA made this latest extension to, among other things, give owners and operators of facilities time to fully understand the 2002 and 2006 amendments and the further revisions planned for implementation in 2008, and to make changes to their facilities and plans. We reported on the reasonableness of the economic analyses EPA performed in support of the 2002 and 2006 amendments to the SPCC rule in July 2007. We found that the economic analysis of the 2002 amendments had several limitations that reduced its usefulness for assessing the amendments’ benefits and costs. We also found that although EPA’s economic analysis of the 2006 amendments addressed several of the 2002 limitations, it, too, had some limitations that reduced its usefulness for assessing the amendments’ benefits and costs.
EPA delegates implementation of the program to its 10 regional offices, which carry out inspection programs to ensure that facilities are in compliance with the SPCC regulations. Figure 1 shows the locations of EPA’s 10 regions. When EPA inspects a facility, it typically sends one or more inspectors from the region to the facility. These visits generally begin with a list of questions about the facility, such as confirming that the facility meets the criteria for the SPCC rule and asking whether it has an SPCC plan. The inspectors then review the plan to see if it contains information required under the SPCC rule, including facility diagrams, training of employees, security measures, containment structures, and records of facility inspections and tests. The inspectors then tour the facility and examine how the plan is being implemented by, for example, inspecting equipment and taking notes and photographs. After the inspection, a compliance determination that completes the inspection process for that facility is made unless observed noncompliance warrants another fact-finding inspection. Before informing facility owners or operators of any violations found, inspectors may discuss their observations with supervisors and enforcement and compliance staff to determine what actions to take. This process generally takes several weeks but can take up to several months, depending on the severity of the violations. Determining whether a penalty is appropriate, and if so what penalty, depends, among other things, on the seriousness of the violation, the economic benefit to the facility owner or operator resulting from the violation, the degree of the owner’s or operator’s culpability in the violation, and any history of violations at the facility. When a violation is found, EPA may send a “notice of deficiency,” “letter of violation,” or similar notice to the owner or operator.
The owner or operator could also receive an Expedited Settlement Agreement (ESA) offer to settle the violations by paying a penalty of between $500 and $2,500 and promptly correcting any violations found. Finally, EPA could seek the issuance of an administrative penalty order against the owner or operator, or submit a judicial referral for penalties to the Department of Justice. Typically, an investigation is considered closed when corrective actions are taken (in cases involving a deficiency but no penalty) or, in cases where a penalty is issued, when the penalty payment is received and corrective action is performed. EPA headquarters annually determines how funds for implementing the Oil Program are allocated to regional offices. The budget allocation for the Oil Program combines funds for oil spill prevention (SPCC), preparedness (FRP and area contingency planning), and response infrastructure. As shown in table 1, the total operational budget allocated for EPA Oil Program activities was $12 million in fiscal year 2006 and $12.3 million in fiscal year 2007. In fiscal year 2006, EPA allocated between 5 and 10 percent of the total operational budget for Oil Program activities to each EPA regional office. In fiscal year 2007, EPA’s allocation for Oil Program activities to each EPA regional office ranged between 5 and 9 percent. EPA regional offices determine how they will use the allocated funds to implement the SPCC program in their regions, including how they will manage inspection and enforcement activities. According to EPA headquarters and regional officials, most funds for oil spill response come out of another fund—the Oil Spill Liability Trust Fund—which is managed by the U.S. Coast Guard. Although EPA receives some funding from the emergency response portion of the Oil Spill Liability Trust Fund for response activities, no funds are provided for additional staff to conduct inspection activities.
The staff that perform other oil spill activities, including SPCC inspections, also conduct response activities. Thus, when there is a high level of response activity, there may be an impact on prevention and preparedness activities, including the number of SPCC inspections. In our 1989 report, we made several recommendations to EPA’s Administrator to strengthen SPCC regulations and the program. Among other things, to strengthen SPCC regulations, we recommended that EPA require that (1) aboveground storage tanks be built and tested in accordance with industry or other specified standards, (2) facilities develop response plans for oil spilled beyond the facilities’ boundaries, and (3) storm water drainage systems be designed and operated to prevent oil from passing through them. EPA included provisions in the 1991 proposed SPCC amendments to implement the recommendations regarding tank integrity testing and storm water drainage systems and finalized these amendments in the 2002 rule. In 1994, EPA partially addressed our recommendation regarding response plans when it began requiring FRP facilities to submit plans as required by the Oil Pollution Act of 1990. This act required the President to issue regulations for response plans for oil or hazardous substances for facilities that, because of their location, could reasonably be expected to cause substantial harm to the environment by discharging into or on U.S. navigable waters and adjoining shorelines, or the exclusive economic zone. EPA, however, did not require response plans from other SPCC facilities.
Furthermore, our 1989 report recommended that EPA take the following four actions to improve its implementation and evaluation of the SPCC program: better define the training needs for the agency’s SPCC inspectors because each of EPA’s regions had established a training program for SPCC inspectors using different program styles, curricula, and manuals; develop instructions for performing and documenting inspections because EPA had not required the regions to follow uniform inspection or documentation procedures, allowing regions in many cases to let inspectors rely on their experience and knowledge; establish a national policy for fining violators because, in the absence of a policy, regions had adopted inconsistent policies and rarely assessed fines; and develop a system of inspection priorities, based on a national inventory of tanks, because, without knowing the location and number of facilities or tanks, EPA could not assess the relative risk of spills to the environment or target for inspections the facilities most in need of attention. In 1993, the Congress passed GPRA, requiring all federal agencies to (1) develop and submit strategic plans covering at least 5 years to the Congress and the Director of OMB, (2) set annual performance goals consistent with the goals and objectives in the strategic plans, and (3) annually compare actual program results with established performance goals and report this information to the Congress. Under the act, agencies are to prepare annual performance plans that articulate goals for the upcoming fiscal year that are aligned with their long-term strategic goals described in the strategic plans. These annual performance plans must include results-oriented annual goals linked to program activities and indicators that the agency will use to measure performance against the results-oriented goals. Performance measures are the yardsticks used to assess an agency’s success in meeting its performance goals.
Over the last several years, EPA has allowed each regional office to implement the SPCC program in a manner that best fits its unique circumstances while also establishing national SPCC policies and procedures to promote consistent enforcement of SPCC regulations. EPA allows flexibility because the EPA regional offices often have different numbers and types of regulated facilities and staffing arrangements, and face different geographic challenges in implementing the SPCC program. Partly because of these regional differences, the number of facilities inspected and the level of enforcement taken have varied across regional offices in recent years. To promote consistency in how SPCC regulations are interpreted and enforced, while allowing for this variation, EPA has also developed a training curriculum for inspectors and guidance on how to conduct SPCC inspections and penalize violators. While EPA has budgeted similar amounts for each region’s SPCC activities in recent years, its regional offices may use varying staffing arrangements to conduct inspections. According to our survey, Regions 1 and 2 use only EPA regional employees for SPCC inspections, while other regions, such as Region 6, employ several contractors and EPA personnel to perform these inspections. In many regions, EPA on-scene coordinators, whose primary function is emergency response, also conduct SPCC inspections. In addition, some EPA regions employ their regional and contract staff full time on SPCC inspections, while other regions—such as Region 6—in addition to the personnel dedicated to SPCC inspections, have several employees who split their time between SPCC inspections and inspections for other EPA environmental regulations or programs. Furthermore, inspectors may differ in how they allocate their time. 
For example, according to Region 5 officials, their inspectors divide their time between enforcement activities and inspection activities, while Region 6 and 8 officials told us that they have separate offices and staff members to perform these activities. Table 2 shows the regions’ SPCC staffing and amount of time spent on SPCC-related activities in fiscal year 2006, such as planning for inspections, conducting outreach to facilities, visiting facilities, and documenting inspection results. EPA officials told us that some regional variation in staffing and time spent on SPCC-related activities is inevitable and necessary owing to different management structures, geographic size, and number and type of regulated facilities. For example, some regions, such as Region 8—which is responsible for Colorado, Montana, North Dakota, South Dakota, Utah, and Wyoming—must take into consideration significant travel costs, while Region 1—which is responsible for Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, and Vermont—has much lower travel costs. In addition, EPA Region 6, which is considered to have a large portion of the nation’s oil business, uses several contractors or grantees as well as EPA inspectors to conduct inspections. Finally, Region 10 has unique travel challenges associated with remote facilities in Alaska and, in particular, North Slope operations. According to EPA officials, partly as a result of the differences in how the regions staff the SPCC program, and the travel issue associated with the geographical differences in the regions, the number of facilities each EPA region has inspected in recent years has varied. Also, the number of SPCC inspections may be affected when it is necessary for EPA regional staff’s time to be dedicated to unique response operations (such as Hurricane Katrina). 
Our survey shows that EPA’s regional offices inspected a total of 3,359 facilities for compliance with the SPCC rule from fiscal year 2004 through fiscal year 2006, or less than 1 percent of EPA’s estimate of the number of SPCC-regulated facilities in the United States. However, the number of facilities inspected in each EPA region varied in these years—from 184 in Region 10 to 745 in Region 6. (See fig. 2.) The percentage of facilities complying or not complying with SPCC regulations at the time of inspection also has varied across regional offices. For example, as shown in figure 3, in fiscal year 2006, the rate of facility noncompliance—measured as the percentage of inspected facilities found to be not fully complying with the requirements—ranged from a low of 26 percent in Region 3 to a high of 98 percent in Region 8, according to our survey. The average rate of facility noncompliance of inspected facilities across regional offices was about 59 percent. We also found regional differences in the extent of enforcement actions taken against inspected facilities—as measured by the percentage of noncompliant facilities that were subject to enforcement action in fiscal year 2006—from a low of zero for Regions 7 and 10 to a high of 84 percent in Region 6. According to EPA officials, these regional differences are due to various reasons, including how each EPA region has historically defined “compliance” and the types of enforcement actions each region uses. For example, some regions may use ESAs as an enforcement action more than others. ESAs allow EPA officials to negotiate compliance with facility owners without using traditional enforcement mechanisms. According to EPA, ESAs also use fewer EPA resources and promote quick settlements with violators. 
They can take between 30 and 60 days to complete, while traditional enforcement mechanisms can take years to settle, depending on the violation, the type of facility, and the extent of any court-ordered corrective actions. The shorter time frame allows those regions that use ESAs to conduct enforcement actions against a relatively large proportion of noncompliant facilities. Currently, all regions except Region 5 use ESAs to varying degrees, and that is a factor in the large variation in enforcement activities across regions. Region 5, which covers most of the Great Lakes states, does not use ESAs at all, because, according to Region 5 officials, they focus on taking enforcement actions against the more serious noncompliant cases that would result in larger Class II administrative penalties, rather than the less serious cases where they could use ESAs. EPA’s use of ESAs in recent years has outpaced the agency’s use of the more resource-intensive traditional enforcement mechanisms. According to our survey, EPA regional offices concluded a total of 111 ESAs in fiscal year 2006, compared with 21 settlements using traditional enforcement mechanisms. EPA Regions 1, 3, 6, and 9 each issued ESAs in more than 25 percent of the cases in which inspectors found facilities were noncompliant. Together, these four regional offices concluded 97 of the 111 ESAs issued by all regional offices in fiscal year 2006, with Region 6 alone issuing 60 ESAs. In addition, Region 6 officials believe that their use of ESAs has increased compliance, as word of the mechanism has spread among regulated facilities. EPA headquarters officials stated that given that the SPCC requirements are performance-based, they continue to learn from and share information with the regions about alternative approaches to achieve facility compliance.
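The compliance and enforcement figures cited above are simple ratios of survey counts. As a minimal sketch of the arithmetic (the counts below are made-up illustrations, not figures from our survey):

```python
def noncompliance_rate(inspected: int, noncompliant: int) -> float:
    """Percentage of inspected facilities found not fully complying
    with SPCC requirements at the time of inspection."""
    return 100.0 * noncompliant / inspected

def enforcement_rate(noncompliant: int, enforced: int) -> float:
    """Percentage of noncompliant facilities subject to an enforcement
    action (an ESA or a traditional enforcement mechanism)."""
    return 100.0 * enforced / noncompliant

# Hypothetical region: 100 inspections, 59 facilities found noncompliant,
# 25 of those subject to an enforcement action.
print(noncompliance_rate(100, 59))      # 59.0
print(round(enforcement_rate(59, 25)))  # 42
```

Because the denominators differ (inspected facilities versus noncompliant facilities), a region with a high noncompliance rate can still show a low enforcement rate, which is why the two measures vary independently across regions.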
While regional offices have flexibility in implementing the SPCC program to address factors unique to each region, EPA has taken steps over the last several years to promote consistency in how the regions interpret and enforce the SPCC regulations. As we reported in 1989, procedures for training SPCC inspectors, conducting inspections, and enforcing compliance varied across EPA regional offices. For example, in 1989 we found the following: Each EPA regional office had developed its own training program for SPCC inspectors using different styles, curricula, and manuals. As a result, SPCC inspectors were conducting inspections after meeting different training requirements, and had different levels of knowledge and skills. EPA had not required its regions to follow uniform procedures for conducting and documenting inspections, and had not developed written procedures on how to conduct inspections. EPA regional officials told us at the time that they relied on the experience and knowledge of individual inspectors rather than on written procedures. EPA did not have a uniform policy in place to determine the type of enforcement action, including penalties when enforcing SPCC regulations, and rarely used enforcement mechanisms. Some EPA regional officials had stated that the inspection itself, and the threat of possible penalties, was sufficient to bring the facilities into compliance. While we agreed that frequent inspections would promote compliance, we stated that greater compliance would most likely be achieved if penalties were assessed. As a result of these findings, we recommended that EPA define and implement minimum training needs for inspectors, develop instructions for performing and documenting inspections, and establish a national policy for penalizing violators of SPCC regulations. In response to these recommendations, EPA developed a new training protocol. 
It now requires inspectors to be trained both in the basic principles of inspections and in the conduct of SPCC inspections. EPA now requires 40 hours of specific SPCC/FRP classroom time that provide inspectors with information on the history and scope of the SPCC and FRP rules, relevant vocabulary, inspection requirements, tank integrity testing procedures, SPCC and FRP plan review, and enforcement issues and procedures. EPA requires all media inspection programs to train their inspectors in basic inspector training, health and safety training, and program-specific inspector training. The SPCC/FRP training program is standard across EPA, and is offered to new inspectors approximately every 6 months. EPA also offered 8- to 12-hour “short courses” that it designed to temporarily fulfill training requirements while it developed the 40-hour training program following the 2002 rule requirements. While these short courses are condensed versions of the 40-hour SPCC/FRP course, they were not intended to be long-term substitutes for the more extensive training sessions. In addition to classroom training, EPA requires on-the-job training, in which new inspectors shadow more experienced inspectors during site visits, training for inspection supervisors, and annual refresher courses. The on-the-job and refresher training is offered in the regions and nationally at certain training events. According to our survey, 73 percent of individuals who inspected at least one SPCC facility in 2006 met the full requirements for classroom training—that is, they had received basic inspector training in combination with the mandatory 40-hour SPCC/FRP-specific program. Figure 4 shows, for fiscal year 2006, the training level of inspectors who inspected at least one facility. In 2005, EPA issued national guidance to facilitate consistent understanding among regional inspectors on how to apply provisions of the SPCC rule.
This guidance has been incorporated into the EPA national inspector training that EPA regional inspectors receive, and it is also available to owners and operators of facilities that may be subject to SPCC requirements. Inspectors use the inspection and plan review checklists included in the guidance as they inspect a facility to ensure that they conduct complete inspections. In visits to three SPCC facilities in three EPA regions, we found that inspectors were using the checklist. In addition, regional officials told us that the guidance provided them with information on how to enforce the SPCC regulations and also helped them in answering facility owners’ questions on compliance. OEM has also developed and presented specialized inspector training courses that address topics related to corrosion, integrity testing, and production sector operations. EPA further addressed our 1989 recommendations by issuing a national penalty policy for SPCC enforcement in 1998. Among other things, this EPA policy describes the penalties that EPA can collect through administrative and enforcement actions for SPCC violations and includes a minimum settlement penalty calculation, which generally describes what EPA would accept as a settlement. The policy also lays out a process that EPA enforcement officials can use to determine the level of seriousness of different SPCC violations and their associated penalties. EPA officials told us that EPA regions consistently use this tool—adjusted for inflation—when determining penalties. EPA’s ability to implement the SPCC program is limited by three factors. First, facilities subject to the SPCC rule are not required to identify themselves to EPA, and therefore EPA cannot effectively identify and target facilities for inspection and enforcement.
Second, the national database EPA is creating to improve SPCC program management is limited to facilities that have already been inspected; consequently, the database will not enable program managers to better identify additional SPCC facilities. Finally, EPA cannot determine the extent to which the SPCC program is succeeding in its goal of preventing oil spills to U.S. navigable waters and adjoining shorelines because of the limited data and because EPA does not have performance measures to examine program effectiveness. Although EPA estimated in 2005 that more than 500,000 facilities nationwide could be subject to the SPCC rule, the actual number is unknown. According to EPA officials, none of the EPA regional offices have complete data for their jurisdictions on the number of potential SPCC-regulated facilities or tanks; their location, size, age, quality of construction; or method of operation. To address these data gaps, we recommended in 1989 that EPA develop a national inventory of all facilities under the program’s jurisdiction. We stated that a national inventory could gather the information necessary to assess the relative risks of spills and allow EPA to develop a system of inspection priorities, which would require national guidance on how to select facilities for inspection. While EPA did not directly act upon our recommendation, in 1991 it proposed a rule to require any facility subject to the SPCC rule to make itself known to the agency on a onetime basis, and subsequently sought OMB’s approval to collect data from all facilities that might be covered by the SPCC rule. However, as we noted in our 1995 report, OMB stated that EPA had not adequately justified the proposed reporting requirements and did not approve the request. EPA conducted a survey in 1995 to estimate the number and size of oil production and storage facilities in most industries regulated by the SPCC rule. 
Since then, EPA has updated its estimates of the number of facilities in the SPCC universe, but it still does not know the exact universe of facilities and their locations. In the preamble to the 2002 amendments to the SPCC rule, EPA explained that it had decided not to pursue the proposed notification requirement because it was still considering whether to establish a paper or an electronic notification system. EPA officials recently stated that the agency has still not fully considered a notification requirement. According to EPA officials, the agency also has not developed national guidance on how to target facilities for inspection, although it has crafted a framework in preparation for this guidance. EPA officials stated that the agency plans to develop this guidance, but it has not yet established a schedule for completing it. However, this guidance will not be based on an assessment of the relative risks of spills across all facilities because EPA does not have such information. Because EPA has incomplete information about which and how many facilities are subject to the SPCC rule, the regional offices attempt to identify SPCC facilities through a variety of indirect means and limited information sources. For example, according to our survey, 9 out of 10 EPA regional offices reported that they use oil spill data from the National Response Center (NRC) to identify regulated facilities and target them for inspection. NRC data track the incidence of oil spills as they are reported to NRC, but these data do not always associate spills with the specific facilities where they originated or include detailed information about those facilities. In addition, if any information is later collected on the actual source or facility responsible for an oil spill, NRC does not update its database. 
Consequently, NRC data generally can alert SPCC program officials to the possibility that SPCC facilities may be in the area of a reported spill rather than positively identifying any facilities as being subject to the rule. Nine out of 10 regional offices also reported using referrals from state agencies or other institutions to identify SPCC facilities and target inspections. For example, officials in EPA Region 3—which covers Delaware, the District of Columbia, Maryland, Pennsylvania, Virginia, and West Virginia—stated that, for the last 2 years, the region has requested all the states in its region to provide a list of facilities that the states would like inspected, and then incorporated these facilities into its inspection planning for the fiscal year. The region then submits the list of facilities to be inspected to the states for comment. According to an EPA official, the agency’s reliance on incomplete spill data and state referrals does not allow it to target facilities for inspection on the basis of their relative spill risks. EPA officials told us that, to a certain extent, the fact that they know about a facility at all—because of past spills and state referrals—can be an indication that the facility poses a relatively high risk. However, EPA does not have sufficient information to determine with any certainty how the risks posed by these facilities compare with those of other as yet unknown and not inspected SPCC-regulated facilities. EPA regions have also used other strategies to both identify SPCC facilities and target them for inspection. For example, Region 6 has developed its own geographic information system (GIS)—the On-Scene Coordinators Area Response System (OSCARS)—to identify facilities that may pose a high risk of spills into or upon navigable waters and adjoining shorelines.
OSCARS provides regional inspectors with a graphics-based tool that integrates basic geographic information with separate location-based data sets, such as the location of lakes, waterways, and roads with industrial infrastructure, such as regulated facilities and pipelines. The output from this tool can be used by officials to help identify and pinpoint the location of facilities and prioritize their inspections based on potential risk and other criteria. EPA officials told us that OSCARS may have some outdated information because it is costly to update, but they considered this an acceptable limitation. According to Region 6 officials, OSCARS has allowed them to more effectively target problem sites and identify egregious regulation violators. They consider OSCARS of particular importance in Region 6, in which 116 counties contain a large proportion of the nation’s petroleum facilities and exhibit high-risk characteristics such as the potential to cause significant and substantial harm to the environment or public health if a large release into or upon navigable water occurs. In addition, according to Region 6 officials, all EPA regions have the capability to develop GIS systems for SPCC- and FRP-regulated facilities to respond promptly to an oil spill. Some other EPA regions use certain other criteria to conduct as many inspections as possible given resource constraints, as the following examples show: According to Region 5 and 8 officials, inspectors will visit a chosen location and inspect as many facilities as they can in that area within a week. Region 3 officials stated that although the region inspects facilities in all the states within their region, each fiscal year they perform more inspections in Delaware and Maryland than in Virginia because of travel funding limitations.
Several EPA regional officials stated that they try to identify and target additional facilities to inspect by, among other things, talking to the local population, consulting the Internet and local Yellow Pages, or making “drive-by sightings.” None of the data sources that regional offices consult when trying to identify and target facilities necessarily indicate that a facility is subject to SPCC regulations. Regional officials stated that SPCC inspectors sometimes identify and visit a facility, only to discover that the facility either is not subject to the SPCC rule or, if it was established after 2002, will not be subject to the regulations until July 2009. EPA officials said that visits to non-SPCC facilities waste limited inspector time and program resources. In contrast, if SPCC inspectors find that the facility is subject to SPCC regulations, they can conduct a full inspection. Recognizing the constraints on their ability to identify and effectively target facilities for inspections, the regions also conduct outreach activities to encourage compliance. To inform owners of facilities that may be subject to SPCC regulations of their obligations, EPA regions we spoke with devote substantial time to outreach activities. For example, Region 5 officials told us that an estimated 75 percent of their time spent on SPCC activities is devoted to outreach and compliance assistance. These activities include, among other things, attending seminars and educating facility owners through regular mail, e-mails, and calls about SPCC regulations. EPA officials hope that educating facility owners will lead to more overall compliance, giving facility owners a chance to comply with SPCC regulations on their own initiative rather than waiting until they might be inspected and found out of compliance. EPA is launching a pilot SPCC/FRP national database that it hopes will be more useful to regional managers in implementing the SPCC program than existing data sources.
The pilot database is essentially an expansion of the database that EPA has maintained on about 4,100 FRP facilities. EPA officials hope that a central database will make it easier to gather more consistent facility information across regions and provide for more efficient use of the regions’ time and resources. The expanded database will include information from the following sources: The Integrated Compliance Information System (ICIS). Since 2005, EPA has required regional SPCC inspectors to record their inspections in ICIS, a central database designed to track the number of inspection and enforcement cases across several EPA programs. However, EPA officials told us that ICIS is not particularly useful to program managers in implementing the SPCC program. For example, ICIS records the initial investigation and enforcement outcomes of investigation cases, but it does not allow the user to track a facility’s progress in coming into compliance after violations have been found. As a result, the regions’ use of ICIS is largely limited to checking facilities’ inspection histories when considering them for inspection, to determine if the facility has been inspected previously and if it has a history of violations. Regional databases. Most regional offices also maintain their own program databases, in addition to ICIS, to track open SPCC cases and the number of inspections. However, EPA officials told us that without a way to know when an SPCC facility opens, closes, or makes changes, facility information kept in these regional databases can quickly become out of date after a case is closed. The pilot SPCC/FRP national database is intended to provide regional personnel with a nationally consistent platform to track facility status and inspection information. 
The database fields include the facility’s name, relevant program identification numbers, status, and location, including its distance from navigable waters and whether it is subject to either SPCC or FRP regulations. The database can sort information by these fields to generate more descriptive reports than is possible with existing data sources. As of October 2007, EPA had entered information on about 5,000 previously inspected SPCC facilities going back to 1987. The pilot national database will also allow program managers to track open SPCC cases as they progress. According to an EPA official, in December 2007 the pilot SPCC/FRP national database was made available to regional managers for their review and comment. EPA noted that this data consolidation effort is ongoing, and EPA officials have a tentative time frame of the end of 2008 for implementing the database nationally in the regions. Regardless of timing, however, EPA officials acknowledge that this database will not help the agency to further identify all SPCC-regulated facilities. However, EPA intends to further evaluate how the database, and other program activities, can more effectively target facilities for inspection. EPA’s limited data make it difficult for the agency to determine the extent to which the SPCC program is achieving its goals. While EPA can determine whether a facility is complying with SPCC requirements by inspecting it, the agency inspects only a small portion of the total universe of SPCC facilities—less than 1 percent of the estimated more than half a million facilities per year. Consequently, the agency is limited in evaluating the success of the SPCC program. Without data on the full regulated community, EPA is unable to assess the program’s effectiveness in preventing oil spills from the vast majority of the facilities subject to the SPCC rule.
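To illustrate the kind of per-facility record the pilot database is described as tracking, and how sorting by a field can generate a simple report, here is a minimal sketch; the field names and sample data are hypothetical assumptions, not EPA’s actual schema:

```python
from dataclasses import dataclass

@dataclass
class FacilityRecord:
    # All field names are hypothetical stand-ins for the kinds of fields
    # described for the pilot SPCC/FRP national database.
    name: str
    program_id: str               # relevant program identification number
    status: str                   # e.g., "open case" or "closed"
    state: str
    miles_to_navigable_water: float
    subject_to_spcc: bool
    subject_to_frp: bool

# Sorting by a field to produce a simple report -- here, facilities
# closest to navigable waters listed first:
facilities = [
    FacilityRecord("Tank Farm A", "R6-0001", "open case", "TX", 0.2, True, False),
    FacilityRecord("Depot B", "R6-0002", "closed", "LA", 3.5, True, True),
    FacilityRecord("Terminal C", "R6-0003", "open case", "OK", 1.1, False, True),
]
report = sorted(facilities, key=lambda f: f.miles_to_navigable_water)
print([f.name for f in report])  # ['Tank Farm A', 'Terminal C', 'Depot B']
```

A consistent record structure of this kind is what allows a national database to support cross-region reports that ad hoc regional databases and ICIS, as described above, cannot.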
Even if EPA had the necessary data, it does not have the appropriate performance measures in place to examine the extent to which the program is meeting its goals. Currently, to evaluate the SPCC and FRP programs, EPA uses two performance measures that focus on the level of facility compliance: “the percent of inspected SPCC facilities in compliance with the regulations at the time of inspection” and “the percent of inspected FRP facilities in compliance with FRP regulations at the time of inspection.” These measures were developed for SPCC as part of a 2005 OMB program review. According to EPA officials, both EPA and OMB recognized at the time that these measures on facility compliance do not fully capture the effectiveness of the overall program in preventing oil spills from regulated facilities into or upon U.S. navigable waters and adjoining shorelines, and that improved measures should be developed. EPA officials expressed concern about the appropriateness of using performance measures that are focused on facility compliance levels. First, according to these officials, regional program managers try to identify and target facilities that present a large spill risk in an effort to ensure spill prevention and therefore should not expect to see high rates of facility compliance upon inspection because of the nature of these facilities. Second, they told us that program managers are held accountable for achieving the goals set in these “percent compliance” measures in their performance reviews. Consequently, these officials are concerned that the goal of compliance at the time of inspection might steer regional offices away from inspecting the facilities that they believe pose the highest risk of noncompliance in order to improve their compliance rates. As a result of concern over the current program measures, EPA initiated a joint OEM/regional workgroup to develop revised measures for the SPCC and FRP programs. 
OEM has committed to OMB to begin implementation of the new program measures in fiscal year 2009. The six state tank programs we reviewed suggest a number of potential options for improving the implementation of the SPCC program. Like the SPCC program, the state programs we reviewed generally have the goal of preventing and controlling oil spills. However, unlike the SPCC program, the state programs all collect information on the status and location of all tanks subject to their state regulations, according to state officials. Furthermore, the six states use this information to periodically inspect all of their regulated facilities. The states’ collection of tank data could benefit the SPCC program, according to state officials, who noted that better coordination with the states could help identify and target SPCC facilities for inspection and inform owners of SPCC-regulated facilities about storage tank requirements. The six states we contacted—Florida, Minnesota, Missouri, New Jersey, New Mexico, and Virginia—have oil tank requirements and inspection processes that differ in some respects from one another’s and from EPA’s. Specifically, the types of regulated tanks or facilities may differ from those subject to the SPCC rule. Table 3 summarizes the number and types of tanks subject to regulation in the six states and key actions required by these regulations. While the six states have different requirements, they all collect data on their entire regulated universe rather than on only a limited portion of the total facilities, as EPA does. Except for Missouri, the states acquire this information by requiring tank owners to register their tanks and provide basic information on their facilities at the time they begin operations. The five states with a registration process require facility owners to notify the state of any changes to their facilities, including any changes in ownership, construction of new tanks, or alterations to existing tanks.
Furthermore, officials from all five of these states said that inspectors check to ensure that they have current and accurate information on each facility at the time, or after, they conduct the inspection. Missouri does not use a registration system to identify facilities for inspection, but state officials told us that they obtain data on facilities by maintaining a strong relationship with tank installers and petroleum suppliers, and some of the facility owners voluntarily provide information to the state. According to a state official, Missouri does not need a registration system because the tank inspection program’s strong presence in the field allows it to inspect all of the state’s 5,500 regulated facilities every 6 months. The type of information collected through the registration process varies by state but can include the facility’s ownership, location, storage capacity, age, number of tanks, and the tanks’ construction, as well as the facility’s history, such as any past inspections, violations, enforcement actions, or reported discharges. According to state officials, the information they obtain enables them to implement and manage their storage tank programs effectively. In addition to requiring facilities to submit basic tank and facility information, New Jersey requires tank owners to develop and submit their plans for leak prevention and emergency response to the state for review prior to becoming operational. All of the states that we contacted compile facility information into central databases that they can use to inspect a facility for the first time or to follow up on a prior inspection. In addition, all of these states use their databases to inspect their entire universe of regulated facilities, although the frequency of these inspections varies by state, as table 3 shows. 
Officials from Minnesota and New Jersey also stated that databases that capture the full regulated universe play an important role in the success of their inspection programs and that implementation would be difficult without these data. However, because of different reporting requirements, states may not have information on the full universe of SPCC-regulated facilities that EPA needs. The extent to which EPA regions coordinate with the states in identifying, targeting, and inspecting aboveground storage tank facilities, and ensuring compliance, depends on the individual region. Some regions we contacted told us they proactively contact the states as well as other federal and local agencies for information, while other regions told us they have varied or limited contact with the state tank programs in their region. Region 8 officials told us that they have two staff members who focus on building relationships with local fire departments and other first responders to identify potential SPCC facilities and target them for inspection. They often work with first responders when a spill occurs, and may conduct an SPCC inspection after the immediate remediation efforts are completed. A Region 1 official credited that region’s success in identifying and targeting SPCC-regulated facilities for inspection largely to the region’s close work with state institutions and the U.S. Coast Guard. Region 3 has a formal agreement—known as the Performance Partnership Agreement—with Maryland, Pennsylvania, and Virginia to coordinate their regulatory program activities, including the aboveground storage tank programs. According to officials from both the EPA regional office and Virginia, EPA routinely asks the state for a list of aboveground oil storage tank facilities that may be of concern relating to the SPCC and FRP regulations. 
In addition, EPA notifies Virginia state officials before conducting inspections, issuing administrative orders, and initiating litigation against facilities in that state. Finally, EPA Region 3 and Virginia officials try to coordinate inspections of facilities of interest to both the SPCC and the state’s programs and in some cases conduct joint inspections, although these are limited because of the differences between the SPCC and state regulations. Region 5 officials told us that they often contact states in the region: they have asked officials in these states for lists of facilities recommended for inspection, have invited those states with tank or oil programs to accompany them on inspections, and copy the states on correspondence with facilities. Region 5 officials stated that they work more closely with states in the region that do not have programs similar to SPCC. Region 6 officials told us that they are in touch with the various state agencies in their region but that relationships vary, depending on the leadership and personnel of these agencies. Region 7 officials stated that they do not regularly communicate with the states in their region. From the state perspective, officials in Florida, Minnesota, Missouri, New Jersey, and New Mexico reported varying degrees of communication with their respective EPA regional officials on coordinating activities, such as identifying and targeting facilities for inspection and conducting inspections. According to these state officials, this communication can range from occasional discussions to no contact at all. For example, New Jersey officials stated that they are in contact with their counterparts in EPA Region 2, share information on their regulated universes, and are invited by the region to participate in certain inspections.
However, although Region 5 officials stated that they often contact all the states in their region, including inviting those with tank programs to accompany them on inspections, Minnesota officials stated that they have little or no communication with EPA Region 5 aboveground storage tank officials. They stated that they do not receive advance notification of when EPA Region 5 plans to conduct SPCC inspections in their state and often learn about an EPA inspection only after it takes place, when the region copies the state on any compliance correspondence with the facility. In addition, a Florida official stated that the EPA region does not contact the state program about its SPCC program activities in the state, such as when it conducts inspections or training. Officials in several states said that further contact between their offices and EPA regions’ SPCC programs could improve EPA’s identification and targeting of SPCC-regulated facilities because the states have more detailed data on their regulated community and have established relationships with the facility owners in their states. For example, a Missouri official said that further coordination between the SPCC program in Region 7 and Missouri’s inspection program could be useful to the SPCC program because the state maintains close ties with facility owners and is therefore better aware of the regulated community. Although EPA regions conduct outreach activities to educate facility owners on their responsibilities under the SPCC regulations, officials in several of the states we contacted told us that these efforts needed improvement. Several of these officials stated that facility owners are confused about the relationship between SPCC regulations and state regulations. For example, Missouri officials told us that facility owners want to comply with both state and SPCC regulations but often do not because they find the differences between the two confusing.
Given this confusion, according to state officials, coordinating federal and state outreach activities—such as educating facility owners about SPCC and state regulations through seminars or conferences—is important to provide the regulated community with more complete and comprehensive information. State officials told us that increased coordination by EPA regions with the states on such outreach activities could benefit both the SPCC and state tank programs by making these efforts more comprehensive. For example, a Minnesota official told us that the state learned only after the fact that EPA had held training sessions with facility owners in Minnesota, and that the state would like EPA to contact it before any planned training for the regulated community so that information on state aboveground storage tank rules could be distributed at the same time. EPA Region 5 officials stated that the region has conducted workshops that included state oil pollution programs, such as Minnesota’s, as well as other local and federal partners. Recently, however, training sessions in Minnesota were limited to those requested by trade groups. State officials also noted that outreach efforts in their state programs have contributed to better compliance. According to state officials, working closely with facility owners maximizes compliance and minimizes the need for legal actions. For example, a Missouri official told us that the state program has between 10 and 50 active enforcement cases on any given day. However, he said the state has imposed penalties only five or six times over the last 20 years because working with facility owners helps to eliminate the need for formal penalties. Similarly, Florida tries to work collaboratively with facility owners to gain compliance. Florida’s program is relatively decentralized; the state contracts with the counties to conduct inspections.
A Florida state official told us that county-level inspectors are well equipped to identify violators and use their relationships to gain compliance because they live in the same communities they are inspecting. Leaking aboveground storage tanks can contaminate soil and waterways and threaten human health and the environment before the leaks are identified and stopped. However, EPA has identified and inspected only a small portion of the more than 500,000 facilities it estimates are subject to the SPCC rule, and when it inspects these facilities, it often finds them out of compliance. EPA’s current method of identifying facilities subject to the SPCC rule—through referrals, the Yellow Pages, and Internet searches—does not allow the agency to use its limited resources effectively to identify facilities most at risk of leaking oil. Without more comprehensive data on the universe of facilities that are subject to the SPCC rule, EPA cannot employ a risk-based approach to target its SPCC inspections to those facilities that pose the greatest risks of oil spills into or upon U.S. navigable waters and adjoining shorelines. Similarly, incomplete information on the universe of SPCC facilities prevents EPA from determining whether and to what extent the SPCC program is achieving its goals. But even with the needed data, EPA will be unable to measure the program’s success unless and until it develops reliable performance measures. While EPA may have forgone developing such measures because the data for them were unavailable, effective program management requires that the two aspects—data and measures—be developed in tandem. EPA may have a number of options for filling this data gap. One such approach would be to initiate a facility registration program, similar to those of some states we contacted.
While the details might vary, this approach would, in its basic form, require that facilities meeting the criteria of the SPCC rule report that fact to EPA, along with other basic facility and tank information. While this mechanism would likely involve some costs to both EPA and the individual facilities, it would also increase the agency’s knowledge of the SPCC universe and allow it to better target its inspection resources on the basis of the relative risks posed by the facilities; these benefits may outweigh the increased costs. Other, lower-cost options for expanding EPA’s knowledge of the SPCC universe may also be available and worth exploring. Greater coordination with states may also help EPA to fill its SPCC data gap. As noted, primarily through their registration processes, some states have what they consider to be very comprehensive data on the oil storage facilities that they regulate, including some that may be SPCC facilities. With or without a registration process or some other information-gathering mechanism, greater coordination with states that have inspection programs comparable to EPA’s SPCC program could help to expand EPA’s knowledge base on SPCC facilities and provide a more informed basis for targeting limited inspection resources. However, given the variation that we found in regional office-state interactions, without uniform guidance for EPA regional offices on how to better communicate and coordinate with states on SPCC-related issues, EPA may not be able to take full advantage of this opportunity to gain information that may be critical for achieving the SPCC program’s goals.
To better identify and target SPCC facilities for inspection, we recommend that the Administrator of EPA direct the Office of Emergency Management to take the following two actions: (1) analyze the costs and benefits of the options available to EPA for obtaining key data about the universe of SPCC-regulated facilities, including, among others, a tank registration program similar to those employed by some states, which would require tank owners to report to EPA, on a regular basis, facility information such as the number of facilities and tanks and their size, age, location, quality of construction, and methods of operation; and (2) in conjunction with states that have oil spill prevention programs, develop uniform guidance for EPA regional offices on how to better communicate and coordinate with those states on SPCC-related issues. In addition, to assess the effectiveness of the SPCC program, we recommend that the Administrator, EPA, direct the Office of Emergency Management to complete, in a timely manner, the development of performance measures and obtain the data needed to determine the extent to which the program is achieving its goals of preventing and controlling oil spills. GAO provided EPA with a draft of this report for its review and comment. The agency stated that it generally agreed with the recommendations in the report and that the report provided a good, comprehensive picture of a portion of the oil spill program implemented by EPA’s Office of Emergency Management. With regard to our recommendation that EPA finish developing performance measures and obtain the data needed to evaluate SPCC program effectiveness, the agency noted—as we acknowledge in the report—that EPA has already initiated work to develop such measures and that the feedback the report provides will help to further shape the agency’s actions in this regard. EPA agreed with our other two recommendations but did not comment further on them.
EPA also provided technical comments on the draft report, which we have incorporated as appropriate. The full text of EPA’s comments is included as appendix III. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Administrator, EPA, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or stephensonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. To determine how the Environmental Protection Agency (EPA) regions implement the Spill Prevention, Control, and Countermeasure (SPCC) program, we spoke with EPA headquarters officials about the overall management of the program, including the organizational structure, formulation and implementation of the SPCC rule and amendments, training of staff on the rule, funds allocated to the program, enforcement policy, and headquarters’ interaction and coordination with the EPA regions that implement the program. To determine what data EPA officials have available to implement the SPCC program, we spoke with EPA regional officials to determine the agency’s data sources for identifying facilities, targeting them for inspection, and enforcing SPCC regulations; how the agency uses the data; and the data’s overall limitations. To obtain this information, we visited EPA Regions 3, 5, and 6 because they conducted the most inspections of all the EPA regions over a 3-year period and to achieve geographical diversity.
We visited an SPCC facility in each of these regions with EPA officials to observe how SPCC inspectors conduct their work. To obtain information on both how the program is implemented and what data sources the agency uses, we conducted a survey of SPCC program officials in all 10 EPA regions. In this survey, we sought to determine, among other things, how the regions identify and target facilities to inspect; the number of inspections each region has conducted in recent years; how much training an SPCC inspector receives; and the number of inspected facilities that complied with SPCC regulations and, for those that did not comply, the number and type of enforcement actions taken. On November 30, 2006, we e-mailed the survey with a cover letter to officials in the 10 regions who were primarily responsible for day-to-day management and implementation of SPCC requirements. We also issued an addendum to each region on December 5, 2006, when it was brought to our attention that two questions in the survey regarding the training of inspection staff caused some confusion. Completed surveys were received by December 18, 2006. To supplement the survey and to elaborate on survey responses, in addition to the three regions we visited, we followed up by telephone with four regions—1, 2, 7, and 8. The calls helped us obtain more specific examples of how EPA regions identify and target SPCC facilities for inspection. A copy of the survey used in this review is in appendix II. It includes the aggregate responses to the survey and summaries of open-ended questions from all 10 EPA regions, when appropriate. The practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, respondents may have difficulty interpreting a particular question or may lack information necessary to provide a valid and reliable response.
To minimize these errors, we conducted a pretest of the draft survey with two EPA regions—4 and 8—over the telephone. We made changes to the content and format of the survey after this pretest, based on the feedback we received. To understand the nature of states’ aboveground oil storage tank programs and how they are implemented, to identify potential options that might be applied to EPA’s program, and to learn about any coordination between these states’ programs and EPA’s SPCC program, we first reviewed the Aboveground Storage Tank Guide, Vols. I and II, by the Thompson Publishing Group, which includes a comprehensive section on individual state aboveground storage tank regulations. We found that although many states regulate aboveground storage tanks in a piecemeal fashion through various state statutes, including adopted versions of uniform fire codes, such as the Uniform Fire Safety Standards, the International Fire Code, and the National Fire Protection Association’s code, some states have developed comprehensive regulatory programs. After our analysis of this information, we spoke with the Association of State and Territorial Solid Waste Management Officials (ASTSWMO) and other state officials, who recommended we speak to several states that they considered to have well-run aboveground storage tank programs. We then selected our states based on these recommendations, as well as on geographic diversity across the United States, and limited our selection to no more than one state per EPA region. We then interviewed officials from aboveground oil storage tank inspection programs in six states—Florida, Minnesota, Missouri, New Jersey, New Mexico, and Virginia. We conducted this performance audit between August 2007 and April 2008 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The following information includes the aggregate responses and, when appropriate, summaries of answers to open-ended questions from our survey of 10 EPA regional offices on how the SPCC program is implemented and the data sources the agency uses. We also followed up with officials from several regional offices to clarify some of their survey responses.

1. Did your region document the number of facilities that were inspected for compliance with Spill Prevention, Control, and Countermeasures (SPCC) regulations in the following federal fiscal years?

2. Can your region separately account for facilities inspected for compliance with SPCC and with FRP? 0 SKIP to question 11.

3. How many facilities in your region were inspected for compliance with SPCC regulations in each of the following federal fiscal years? Please count each facility once in a given year, regardless of the number of times it was inspected in that year.

4. Of the total number of facilities inspected in each of the following federal fiscal years, how many, if any, were in full compliance with all SPCC requirements at the time of inspection?

5. Of the total number of facilities inspected in each of the following federal fiscal years, how many were not in full compliance with all SPCC requirements at the time of inspection?

6. Of the total number of facilities that were not in full compliance at the time of inspection, how many were issued an Expedited Settlement Agreement (ESA)? For this question, please consider an ESA to be a mechanism used by EPA to address a facility’s compliance shortcomings with reduced fines.

7.
Of the total number of facilities that were not in full compliance at the time of inspection, against how many did EPA apply traditional enforcement mechanisms (that is, taking legal action)?

8. Of the total number of facilities that were not in full compliance at the time of inspection in FY 2006, for how many facilities has EPA not determined the final enforcement action it will take?

Government auditing standards require that GAO assess the accuracy of data we use in our reports. Your responses to the following questions will be used to help us correctly interpret the information you have provided in questions 3–8.

9. Are there circumstances in which an inspected facility would be counted more than once in your responses to questions 3–8? 1 Please explain these circumstances in the space below. 9 (3 elaborations)

We received one “yes” response to this question, from Region 7. Region 7 said that an inspection could be counted twice if it was entered into its data system with different facility ID numbers. We do not know how common Region 7 estimates this mistake to be or whether there is any systematic reason that it would be made. We received three “no” response clarifications to this question, from Regions 1, 2, and 4. Region 2 stated that the term “inspections” for this survey is being interpreted as a Field Inspection, not SPCC Plan Reviews conducted without a Field Inspection, and that this interpretation will reduce the number of double counts for a facility. Region 4 stated that it conducted 168 inspections in fiscal year 2006 and that it counted 166 inspections total per the GAO survey instructions provided in Question 3, since two facilities were inspected twice in the same year. Region 1 stated, “With exception, as Questions 4 and 5 are subsets of Question 3, and Questions 6 thru 8 are subsets of Question 5.”

10. Are there circumstances in which an inspected facility would not be counted at all in your responses to questions 3–8?
4 Please explain these circumstances in the space below.

We received four “yes” response elaborations to this question, from Regions 1, 3, 7, and 10. We received no elaborations from “no” responses. EPA Region 1 clarified that legal actions were included in Question 8, rather than Question 7, per a follow-up conversation with Region 1 officials. These actions may be administrative orders or Clean Water Act Section 308 Information Requests. EPA Region 3 officials stated that in fiscal year 2006, they discontinued counting inspections that were conducted in conjunction with the Underground Injection Control program. This reduces their number of reported inspections. EPA Region 7 responded that an inspection would not be counted if it failed to enter it into the data system. Region 7 also stated that some inspections had not been counted because the inspector had not been able to complete the inspection reports due to extended family-friendly leave. Region 10 clarified that facilities that were determined not to be subject to SPCC regulations were counted as an inspection but were not counted in any other section of the report. In some cases, facilities that were found not to be subject may explain the difference between the number of inspections and those found to be in compliance or noncompliance.

We would like information on the individuals available to conduct inspections of facilities for compliance with SPCC. We would like to know who is trained to conduct inspections and the types of inspections these individuals have performed in fiscal year 2006. We understand that not all inspectors may have completed the 40-hour SPCC/FRP-specific training.

11. For federal fiscal year 2006, did your region document the number of individuals who inspected at least one facility for compliance with SPCC regulations? 0 SKIP to question 20.

12. How many individuals inspected at least one facility for compliance with SPCC regulations in your region in FY 2006?

13.
Of the individuals who inspected at least one facility for compliance with SPCC regulations in FY 2006, how many completed each of the following types of SPCC training?
Completed only Basic Inspector/Health and Safety training
Completed only 40-hour program-specific SPCC/FRP training
Completed both Basic Inspector/Health and Safety and 40-hour program-specific training
Completed only 8-hour or 12-hour program-specific SPCC/FRP training
Completed both Basic Inspector/Health and Safety training and 8-hour or 12-hour program-specific SPCC/FRP training
Completed none of the Basic Inspector/Health and Safety training nor any program-specific SPCC/FRP training (40-, 12-, or 8-hour training sessions)
Total (from question 12)

14. Of individuals who did not inspect at least one facility for compliance with SPCC regulations in FY 2006, how many completed each of the following types of SPCC training?
Completed only Basic Inspector/Health and Safety training
Completed only 40-hour program-specific SPCC/FRP training
Completed both Basic Inspector/Health and Safety and 40-hour program-specific training
Completed only 8-hour or 12-hour program-specific SPCC/FRP training
Completed both Basic Inspector/Health and Safety training and 8-hour or 12-hour program-specific SPCC/FRP training
Completed none of the Basic Inspector/Health and Safety training nor any program-specific SPCC/FRP training (40-, 12-, or 8-hour training sessions)

15. Of the total number of individuals who inspected at least one facility in FY 2006, how many are employees of each of the following organizations? The total should equal the number of individuals entered for question 12.
Other (Please specify: _________________)
Total (from question 12)

16.
Of the total number of individuals who inspected at least one facility in FY 2006, how many conduct inspections for only SPCC/FRP regulations and how many conduct inspections for both SPCC/FRP and other environmental regulations? The total should equal the number of individuals entered for question 12.
Conduct inspections for only SPCC/FRP regulations
Conduct inspections for both SPCC/FRP and other environmental regulations
Total (from question 12)

17. Of the total number of individuals who inspected at least one facility in FY 2006, how many spent the following fractions of their time on activities related to SPCC? The total should equal the number of individuals entered for question 12. In calculating your responses, please consider all SPCC-related activities, including planning for inspections, conducting outreach to facilities, visiting facilities, and documenting inspection results.
Less than 25% of their time
Between 25% and 50% of their time
Between 50% and 75% of their time
More than 75% of their time
Total (from question 12)

Government auditing standards require that GAO assess the accuracy of data we use in our reports. Your responses to the following questions will be used to help us correctly interpret the information you have provided in questions 12–17.

18. Are there circumstances in which an inspector might be double-counted in your responses to questions 12–17? 0 Please explain these circumstances in the space below.

19. Are there circumstances in which an inspector might be mistakenly excluded from your responses to questions 12–17? 2 Please explain these circumstances in the space below.

We received “yes” responses and elaborations to this question from Region 3 and Region 7. Region 3 responded that “multimedia” inspectors who reside in the Office of Enforcement, Compliance, and Environmental Justice are not counted in the numbers presented here. They receive no compensation, nor are their inspections recorded by EPA headquarters.
Region 7 responded that it did not include a contractor that conducted two multimedia inspections, which included SPCC, in answering Questions 13 and 14. Inspectors not involved in the SPCC program were not included in the response to Question 14.

20. Does your region use written criteria to select facilities for inspection? 6 Please send us a copy of these criteria. 4 SKIP to question 25.

21. In what year did your region develop these criteria?

22. How often does your region re-evaluate these criteria?

Of the six regions that reported having written inspection criteria, four said that they evaluate their criteria at least annually (Regions 3, 4, 8, and 9). Region 6 stated that its criteria are evaluated regularly, as conditions warrant. Region 2 stated that evaluation of which facilities to target is an ongoing process, done informally among the three staff members involved.

23. What process did your region use to develop these criteria?

Regions briefly described processes that involve consulting a number of sources (staff, states, SPCC coordinators, etc.) to set the priorities for targeting facilities. Region 4 said that formal mechanisms for targeting SPCC facilities have been in place only in recent years, but that informal mechanisms have been in place for longer. Region 6 described the criteria used in its geographic information system (GIS) selection system.

24. What data do these criteria require in order to be used to select facilities for inspection?

The regions described a variety of data sources: National Response Center (NRC) spill reports, other spill data, previous SPCC inspection checklists, enforcement priorities, GIS mapping data, facility history, etc. Region 2 said that state data can be of some use, but they do not correspond exactly to SPCC data.

25. Does your region have a list of facilities that it planned to inspect in FY 2006? 8 Please submit a copy of this list. 2 SKIP to question 30.

26.
How many facilities were on your region’s originally planned list in federal fiscal year 2006? Number of facilities: 718-818

27. How many of these facilities did your region actually inspect during federal fiscal year 2006? Number of facilities: 607

28. What are the stages of the planning process that your region uses to select facilities for inspection?

Eight of 10 regions gave written responses to this question. The responses reflect a variety of priorities in targeting facilities, but some common priorities are present: state needs and spill histories are each mentioned by a few regions.

29. What sources of data or information does your region use at each of these stages?

Eight of 10 regions gave written responses to this question. Regions mention a variety of data sources: state data, Internet sites, and spill data. The responses generally do not tie specific data to specific stages in the facility targeting process.

30. In your region’s decisions about which facilities to inspect for compliance with SPCC regulations, how important are each of the following criteria? Please check one response in each row.
Region received news reports suggesting noncompliance at a facility
Other (Please specify: ____________________)

31. Does your region have a database of the total number of facilities that are subject to compliance with SPCC regulations in your region? 10 SKIP to question 34.

32. What is the source of these data?

Three regions gave written responses to this question. Regions 3, 4, and 6 responded, saying that they do not have data on the universe of SPCC-regulated facilities.

33. How accurately do these data capture the total number of facilities subject to SPCC regulations in your region?

Regions 3 and 6 gave responses to this question. Both regions clarified that their databases do not fully capture the universe of regulated facilities. Region 6 said that it has found the general accuracy of its database to be less than 50 percent.

34.
In federal fiscal year 2006, did your region use oil spill data from the U.S. Coast Guard’s National Response Center to manage the SPCC program in your region? 1 SKIP to question 36.

35. How accurately do the NRC data capture the total number of facilities subject to SPCC regulations in your region?

Nine out of 10 regions responded to this question. Regions listed the flaws of NRC data: the NRC data include only spill incidents, rather than the SPCC universe, and it is not always possible to trace a spill to its source.

36. In federal fiscal year 2006, did your region use oil spill data from state databases to manage the SPCC program in your region? 2 Which states? (See below.) 10 SKIP to question 38.

37. How accurately do these state data capture the total number of facilities subject to SPCC regulations in your region?

Region 9: California. Region 9 says that California data are “Better than NRC, but still very little.” Region 6: Texas and Oklahoma. Region 6 says that state databases “have not been designed for determining SPCC inventories. However, they may include locational attributes which help identify potential SPCC facilities.”

38. In federal fiscal year 2006, did your region use oil spill data from other sources to manage the SPCC program in your region? 3 What sources? 7 SKIP to question 40.

39. How accurately do these data from other sources capture the total number of facilities subject to SPCC regulations in your region?

Regions 1, 2, and 10 were the only regions that used other data and responded to this question. These regions said that their other data sources were from states and that these sources do not capture the regulated facilities; they track only complaints or spills, not the regulated universe.

40. Can any of the spill data used in your region be broken out for a particular industrial category in any particular year? 3 Which data? 7 SKIP to question 42.

41. Can any of the spill data used in your region be broken out for particular years?
6 Which data?

We are interested in identifying the extent to which EPA regions cooperate with states on oil spill prevention-related activities and regulations. We plan to meet with officials in at least two states in order to describe how these states and EPA cooperate in evaluating and implementing SPCC requirements.

42. What is the contact information for oil spill prevention-related activities and regulations in each of the states in your region?

Contact information provided by the regions is not included.

43. Please describe the relevant oil spill prevention-related activities and regulations in each of the states in your region.

The answers to this open-ended question are not included.

44. Do the states in your region have a system to register facilities that are subject to oil spill regulations?

A total of 16 states were reported by EPA regions as having a system to register facilities subject to oil spill regulations.

45. In what month and year did the last reorganization that affected SPCC functions in your region take place? Month:_________________ Year: _________________

The answers to this question are not included.

46. Please provide any additional comments you’d like us to consider in our review.

The answers to this question are not included.

47. Please attach copies of each of the following when submitting your response to us:

1. List of facilities that the region inspected in fiscal year 2006, including the following information on each facility: Whether or not the facility was in full compliance with SPCC regulations at the time of inspection; Whether EPA issued the facility an ESA; Whether or not EPA has taken legal action against the facility; The amount of fines, if any, levied against this facility

2. Written criteria used to select facilities for SPCC inspections (see question 20)

3. List of facilities that the region planned to inspect in fiscal year 2006 (see question 25)

4.
Documentation to support answers in Section 5 regarding oil spill databases, such as spreadsheets or descriptions of databases in which these data may be housed

5. An annotated organizational chart of your region explaining where all SPCC-related staff are located, including (but not limited to) inspectors, enforcement, data, and legal staff

The information that GAO requested from the regions is not included.

John B. Stephenson, (202) 512-3841, stephensonj@gao.gov.

In addition to the individual named above, Vincent P. Price, Assistant Director; Kevin Bray; Mark Braza; Greg Carroll; Bernice H. Dawson; Mary Robison; and Carol Herrnstadt Shulman made key contributions to this report.
Oil leaks from aboveground tanks have contaminated soil and water, threatening human health and wildlife. To prevent damage from oil spills, the Environmental Protection Agency (EPA) issued the Spill Prevention, Control, and Countermeasure (SPCC) rule in 1973. EPA's 10 regions inspect oil storage facilities to ensure compliance with the rule. EPA estimates that about 571,000 facilities are subject to this rule. Some states also regulate oil storage tanks. GAO determined (1) how EPA regions implement the SPCC program, (2) the data EPA has to implement and evaluate the program, and (3) whether some states' tank programs suggest ways for EPA to improve its program. GAO surveyed all 10 EPA regions and interviewed officials in EPA and six states selected on the basis of experts' recommendations, among other criteria. EPA allows regional offices flexibility to implement the SPCC program according to their individual circumstances. These differences account, at least in part, for regional variations in the number of SPCC inspections. According to GAO's survey, during fiscal years 2004 through 2006, EPA regions conducted 3,359 SPCC inspections--less than 1 percent of EPA's estimate of SPCC facilities--ranging from 184 in Region 10 to 745 in Region 6. Furthermore, because of regional differences in the number of inspections and the enforcement mechanisms used, the number of SPCC enforcement actions also varied. While EPA allows regional flexibility, it has begun implementing SPCC policies and procedures to promote consistency in how the SPCC regulations are interpreted and enforced. EPA has information on only a portion of the facilities subject to the SPCC rule, hindering its ability to identify and effectively target facilities for inspection and enforcement, and to evaluate whether the program is achieving its goals. 
Because facilities subject to the SPCC rule do not have to report to EPA, the agency can only estimate the universe of SPCC-regulated facilities and must try to identify them through such means as oil spill data, state referrals, and Internet searches. Through inspections, EPA determines if the facility is subject to the rule. While inspections of known SPCC facilities are generally risk-based, the risk assessments exclude the large number of estimated SPCC facilities that have not yet been identified and that may pose more serious threats than those currently targeted for inspection. EPA is developing a national database to promote standard data collection across regions and expand the facility information available to regional managers. However, this database is limited to previously inspected facilities and will not enable EPA to identify SPCC facilities beyond those already known. Ultimately, incomplete information on which facilities are subject to the SPCC rule, and where and how often leaks may occur, prevents EPA from effectively targeting inspections to facilities that potentially pose the highest risks. Furthermore, EPA does not have performance measures to examine the program's effectiveness. EPA is developing additional measures, but without more complete data on the SPCC-regulated universe, these measures cannot gauge the program's accomplishments. The tank inspection programs of Florida, Minnesota, Missouri, New Jersey, New Mexico, and Virginia can provide EPA with insight on potential improvements to the SPCC program. For example, five of the six states use tank registration and reporting systems to collect data on oil storage facilities, giving them information on the universe of facilities subject to state regulations. These states can therefore inspect all their facilities or target those they believe present the highest risks of spills. 
By taking a similar approach, EPA would have more complete data for setting inspection priorities based on risk. Furthermore, because these states have detailed knowledge of their facilities, EPA could benefit from increased coordination with them, when, for example, it identifies facilities and targets inspections.
Health care in the United States is a highly decentralized system, with stakeholders that include not only the entire population as consumers of health care, but also all levels of government, health care providers such as medical centers and community hospitals, patient advocates, health professionals, major employers, nonprofit health organizations, insurance companies, commercial technology providers, and others. In this environment, clinical and other health-related information is dispersed across a complex collection of paper files, information systems, and organizations, and much of it continues to be stored and shared on paper.

Successfully implementing health IT to replace paper and manual processes has been shown to yield benefits in both cost savings and improved quality of care. For example, we reported to this committee in 2003 that a 1,951-bed teaching hospital stated that it had realized about $8.6 million in annual savings by replacing outpatient paper medical charts with electronic medical records. This hospital also reported saving more than $2.8 million annually by replacing its manual process for managing medical records with an electronic process to provide access to laboratory results and reports. Other technologies, such as bar coding of certain human drug and biological product labels, have also been shown to save money and reduce medical errors. Health care organizations reported that IT contributed other benefits, such as shorter hospital stays, faster communication of test results, improved management of chronic diseases, and improved accuracy in capturing charges associated with diagnostic and procedure codes.

There is also potential benefit from improving and expanding existing health IT systems. We have reported that some hospitals are expanding their IT systems to support improvements in quality of care.
In April 2007, we released a study on the processes used by eight hospitals to collect and submit data on their quality of care to HHS’s Centers for Medicare & Medicaid Services (CMS). Among the hospitals we visited, officials noted that having electronic records was an advantage for collecting the quality data because electronic records were more accessible and legible than paper records, and the electronic quality data could also be used for other purposes (such as reminders to physicians). Officials at each of the hospitals reported using the quality data to make specific changes in their internal procedures designed to improve care. However, hospital officials also reported several limitations in their existing IT systems that constrained the ability to support the collection of their quality data. For example, hospitals reported having a mix of paper and electronic systems, having data recorded only as unstructured narrative or other text, and having multiple systems within a single hospital that could not access each other’s data. Although it was expected to take several years, all the hospitals in our study were working to expand the scope and functionality of their IT systems. This example illustrates, among other things, that making health care information electronically available depends on interoperability—that is, the ability of two or more systems or components to exchange information and to use the information that has been exchanged. This capability is important because it allows patients’ electronic health information to move with them from provider to provider, regardless of where the information originated. If electronic health records conform to interoperability standards, they can be created, managed, and consulted by authorized clinicians and staff across more than one health care organization, thus providing patients and their caregivers the necessary information required for optimal care. 
(Paper-based health records—if available—also provide necessary information, but unlike electronic health records, do not provide automated decision support capabilities, such as alerts about a particular patient’s health, or other advantages of automation.)

Interoperability may be achieved at different levels (see fig. 1). For example, at the highest level, electronic data are computable (that is, in a format that a computer can understand and act on to, for example, provide alerts to clinicians on drug allergies). At a lower level, electronic data are structured and viewable, but not computable. The value of data at this level is that they are structured so that data of interest to users are easier to find. At still a lower level, electronic data are unstructured and viewable, but not computable. With unstructured electronic data, a user would have to find needed or relevant information by searching uncategorized data. It is important to note that not all data require the same level of interoperability. For example, computable pharmacy and drug allergy data would allow automated alerts to help medical personnel avoid administering inappropriate drugs. On the other hand, for such narrative data as clinical notes, unstructured, viewable data may be sufficient. Achieving even a minimal level of electronic interoperability would potentially make relevant information available to clinicians.

Any level of interoperability depends on the use of agreed-upon standards to ensure that information can be shared and used. In the health IT field, standards may govern areas ranging from technical issues, such as file types and interchange systems, to content issues, such as medical terminology.

● For example, vocabulary standards provide common definitions and codes for medical terms and determine how information will be documented for diagnoses and procedures. These standards are intended to lead to consistent descriptions of a patient’s medical condition by all practitioners.
The use of common terminology helps in the clinical care delivery process, enables consistent data analysis from organization to organization, and facilitates transmission of information. Without such standards, the terms used to describe the same diagnoses and procedures may vary (the condition known as hepatitis, for example, may be described as a liver inflammation). The use of different terms to indicate the same condition or treatment complicates retrieval and reduces the reliability and consistency of data.

● Another example is messaging standards, which establish the order and sequence of data during transmission and provide for the uniform and predictable electronic exchange of data. These standards dictate the segments in a specific medical transmission. For example, they might require the first segment to include the patient’s name, hospital number, and birth date. A series of subsequent segments might transmit the results of a complete blood count, dictating one result (e.g., iron content) per segment. Messaging standards can be adopted to enable intelligible communication between organizations via the Internet or some other communications pathway. Without them, the interoperability of health IT systems may be limited, reducing the data that can be shared.

Developing interoperability standards requires the participation of the relevant stakeholders who will be sharing information. In the case of health IT, stakeholders include both the public and private sectors. The public health system is made up of the federal, state, tribal, and local agencies that may deliver health care services to the population and monitor its health. Private health system participants include hospitals, physicians, pharmacies, nursing homes, and other organizations that deliver health care services to individual patients, as well as multiple vendors that provide health IT solutions.
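The segment structure that messaging standards dictate can be sketched in code. The pipe-delimited format below is loosely modeled on HL7 Version 2, one widely used health messaging standard, but the segment names, field layout, and parser are simplified illustrations for this testimony's example (a patient-identification segment followed by one-result-per-segment observations), not an implementation of the real standard.

```python
# Illustrative sketch only: a simplified, HL7 v2-style pipe-delimited message.
# A messaging standard dictates segment order and content, so a receiving
# system knows where to find each piece of data without human interpretation.

RAW_MESSAGE = (
    "PID|12345|DOE^JANE|19620320\r"  # first segment: hospital number, name, birth date
    "OBX|1|IRON|55|ug/dL\r"          # each subsequent segment carries one result
    "OBX|2|WBC|6.2|10*3/uL\r"
)

def parse_message(raw):
    """Split a message into segments, then each segment into fields."""
    segments = []
    for line in raw.strip("\r").split("\r"):
        fields = line.split("|")
        segments.append({"type": fields[0], "fields": fields[1:]})
    return segments

segments = parse_message(RAW_MESSAGE)
# Because the standard fixes the layout, the receiver can rely on positions:
assert segments[0]["type"] == "PID"
results = [s for s in segments if s["type"] == "OBX"]
print(f"{len(results)} results for patient {segments[0]['fields'][0]}")
# prints: 2 results for patient 12345
```

Without an agreed layout like this, each sending system would order and delimit fields differently, and every pair of organizations would need custom translation logic before any data could be shared.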
Widespread adoption of health IT has the potential to improve the efficiency and quality of health care. However, transitioning to this capability is a challenging endeavor that requires attention to many important considerations. Among these are mechanisms to establish clearly defined health IT standards that are agreed upon by all important stakeholders, comprehensive planning grounded in results-oriented milestones and measures, and an approach to privacy protection that encourages acceptance and adoption of electronic health records. Attempting to expand the use of health IT without fully addressing these issues would put at risk the ultimate goal of achieving more effective health care. The need for health care standards has been broadly recognized for a number of years. In previous work, we identified lessons learned by U.S. agencies and by other countries from their experiences. Among other lessons, they reported the need to define and adopt common standards and terminology to achieve data quality and consistency, system interoperability, and information protection. In May 2003, we reported that federal agencies recognized the need for health care standards and were making efforts to strengthen and increase their use. However, while they had made progress in defining standards, they had not met challenges in identifying and implementing standards necessary to support interoperability across the health care sector. We stated that until these challenges were addressed, agencies risked promulgating piecemeal and disparate systems unable to exchange data with each other when needed. We recommended that the Secretary of HHS define activities for ensuring that the various standards-setting organizations coordinate their efforts and reach further consensus on the definition and use of standards; establish milestones for defining and implementing standards; and create a mechanism to monitor the implementation of standards through the health care industry. 
HHS implemented this recommendation through the activities of the Office of the National Coordinator for Health Information Technology (established within HHS in April 2004). Through the Office of the National Coordinator, HHS designated three primary organizations, made up of stakeholders from both the public and private health care sectors, to play major roles in identifying and implementing standards and expanding the implementation of health IT:

● The American Health Information Community (now known as the National eHealth Collaborative) was created by the Secretary of HHS to make recommendations on how to accelerate the development and adoption of health IT, including advancing interoperability, identifying health IT standards, advancing nationwide health information exchange, and protecting personal health information. Created in September 2005 as a federal advisory commission, the organization recently became a nonprofit membership organization. It includes representatives from both the public and private sectors, including high-level officials of VA and other federal and state agencies, as well as health systems, payers, health professionals, medical centers, community hospitals, patient advocates, major employers, nonprofit health organizations, commercial technology providers, and others. Among other things, the organization has identified health care areas of high priority and developed “use cases” for these areas (use cases are descriptions of events or scenarios, such as Public Health Case Reporting, that provide the context in which standards would be applicable, detailing what needs to be done to achieve a specific mission or goal).
● The Healthcare Information Technology Standards Panel (HITSP), sponsored by the American National Standards Institute and funded by the Office of the National Coordinator, was established in October 2005 as a public-private partnership to identify competing standards for the use cases developed by the American Health Information Community and to “harmonize” the standards. As of March 2008, nearly 400 organizations representing consumers, healthcare providers, public health agencies, government agencies, standards developing organizations, and other stakeholders were participating in the panel and its committees. The panel also develops the interoperability specifications that are needed for implementing the standards. In collaboration with the National Institute for Standards and Technology, HITSP selected initial standards to address, among other things, requirements for message and document formats and for technical networking. Federal agencies that administer or sponsor federal health programs are now required to implement these standards, in accordance with an August 2006 Executive Order.

● The Certification Commission for Healthcare Information Technology is an independent, nonprofit organization that certifies health IT products, such as electronic health records systems. HHS entered into a contract with the commission in October 2005 to develop and evaluate the certification criteria and inspection process for electronic health records. HHS describes certification as the process by which vendors’ health IT systems are established to meet interoperability standards. The certification criteria defined by the commission incorporate the interoperability standards and specifications defined by HITSP.
The results of this effort are intended to help encourage health care providers throughout the nation to implement electronic health records by giving them assurance that the systems will provide needed capabilities (including ensuring security and confidentiality) and that the electronic records will work with other systems without reprogramming. The interconnected work of these organizations to identify and promote the implementation of standards is important to the overall effort to advance the use of interoperable health IT. For example, according to HHS, the HITSP standards are incorporated into the National Coordinator’s ongoing initiative to enable health care entities—such as providers, hospitals, and clinical labs—to exchange electronic health information on a nationwide basis. Under this initiative, HHS awarded contracts to nine regional and state health information exchanges as part of its efforts to provide prototypes of nationwide networks of health information exchanges. Such exchanges are intended to eventually form a “network of networks” that is to produce the envisioned Nationwide Health Information Network (NHIN). According to HHS, the department planned to demonstrate the experiences and lessons learned from this work in December 2008, including defining specifications based upon the work of HITSP and standards development organizations to facilitate interoperable data exchange among the participants, testing interoperability against these specifications, and developing trust agreements among participants to protect the information exchanged. HHS plans to place the nationwide health information exchange specifications defined by the participating organizations, as well as related testing materials, in the public domain, so that they can be used by other health information exchange organizations to guide their efforts to adopt interoperable health IT. 
The products of the federal standards initiatives are also being used by DOD and VA in their ongoing efforts to achieve the seamless exchange of health information on military personnel and veterans. The two departments have committed to the goal of adopting applicable current and emerging HITSP standards. According to department officials, DOD is also taking steps to ensure compliance with standards through certification. To ensure that the electronic health records produced by the department’s modernized health information system, AHLTA, are compliant with standards, it is arranging for certification through the Certification Commission for Healthcare Information Technology. Both departments are also participating in the National Coordinator’s standards initiatives. The involvement of the departments in these activities is an important mechanism for aligning their electronic health records with emerging federal standards. Federal efforts to implement health IT standards are ongoing and some progress has been made. However, until agencies are able to demonstrate interoperable health information exchange between stakeholders on a broader level, the overall effectiveness of their efforts will remain unclear. In this regard, continued work on standards initiatives will remain essential for extending the use of health IT and fully achieving its potential benefits, particularly as both information technology and medicine advance. Using interoperable health IT to help improve the efficiency and quality of health care is a complex goal that involves a range of stakeholders and numerous activities taking place over an expanse of time; in view of this complexity, it is important to develop comprehensive plans that are grounded in results-oriented milestones and performance measures. Without comprehensive plans, it is difficult to coordinate the many activities under way and integrate their outcomes. 
Milestones and performance measures allow the results of the activities to be monitored and assessed, so that corrective action can be taken if needed. Since it was established in 2004, the Office of the National Coordinator has pursued a number of health IT initiatives (some of which we described above), aimed at the expansion of electronic health records, identification of interoperability standards, advancement of nationwide health information exchange, and protection of personal health information. It also developed a framework for strategic action for achieving an interoperable national infrastructure for health IT, which was released in 2004. We have noted accomplishments resulting from these various initiatives, but we also observed that the strategic framework did not include the detailed plans, milestones, and performance measures needed to ensure that the department integrated the outcomes of its various health IT initiatives and met its overall goals. Given the many activities to be coordinated and the many stakeholders involved, we recommended in May 2005 that HHS define a national strategy for health IT that would include the necessary detailed plans, milestones, and performance measures, which are essential to help ensure progress toward the President’s goal for most Americans to have access to interoperable electronic health records by 2014. The department agreed with our recommendation, and in June 2008 it released a four-year strategic plan. If the plan’s milestones and measures for achieving an interoperable nationwide infrastructure for health IT are appropriate and properly implemented, the plan could help ensure that HHS’s various health IT initiatives are integrated and provide a useful roadmap to support the goal of widespread adoption of interoperable electronic health records. Across our health IT work at HHS and elsewhere, we have seen other instances in which planning activities have not been sufficiently comprehensive. 
An example is the experience of DOD and VA, which have faced considerable challenges in project planning and management in the course of their work on the seamless exchange of electronic health information. As far back as 2001 and 2002, we noted management weaknesses, such as inadequate accountability and poor planning and oversight, and recommended that the departments apply principles of sound project management. The departments’ efforts to meet the recent requirements of the National Defense Authorization Act for Fiscal Year 2008 provide additional examples of such challenges, raising concerns regarding their ability to meet the September 2009 deadline for developing and implementing interoperable electronic health record systems or capabilities. In July 2008, we identified steps that the departments had taken to establish an interagency program office and implementation plan, as required. According to the departments, they intended the program office to play a crucial role in accelerating efforts to achieve electronic health records and capabilities that allow for full interoperability, and they had appointed an Acting Director from DOD and an Acting Deputy Director from VA. According to the Acting Director, the departments also have detailed staff and provided temporary space and equipment to a transition team. However, the newly established program office was not expected to be fully operational until the end of 2008—allowing the departments at most 9 months to meet the deadline for full interoperability. Further, we reported other planning and management weaknesses. For example, the departments developed a DOD/VA Information Interoperability Plan in September 2008, which is intended to address interoperability issues and define tasks required to guide the development and implementation of an interoperable electronic health record capability. 
Although the plan included milestones and schedules, it lacked milestones for completing many of the activities it defined. Accordingly, we recommended that the departments give priority to fully establishing the interagency program office and finalizing the implementation plan. Without an effective plan and a program office to ensure its implementation, the risk is increased that the two departments will not be able to meet the September 2009 deadline.

As the use of electronic health information exchange increases, so does the need to protect personal health information from inappropriate disclosure. The capacity of health information exchange organizations to store and manage a large amount of electronic health information increases the risk that a breach in security could expose the personal health information of numerous individuals. Addressing and mitigating this risk is essential to encourage public acceptance of the increased use of health IT and electronic medical records. Recognizing the importance of privacy protection, HHS included security and privacy measures in its 2004 framework for strategic action, and in September 2005, it awarded a contract to the Health Information Security and Privacy Collaboration as part of its efforts to provide a nationwide synthesis of information to inform privacy and security policymaking at federal, state, and local levels. The collaboration selected 33 states and Puerto Rico as locations in which to perform assessments of organization-level privacy- and security-related policies and practices that affect interoperable electronic health information exchange and their bases, including laws and regulations. As a result of this work, HHS developed and made available to the public a toolkit to guide health information exchange organizations in conducting assessments of business practices, policies, and state laws that govern the privacy and security of health information exchange.
However, we reported in January 2007 that HHS initiated these and other important privacy-related efforts without first defining an overall approach for protecting privacy. In our report, we identified key privacy principles and challenges to protecting electronic personal health information.

● Examples of principles that health IT programs and applications need to address include the uses and disclosures principle, which limits the circumstances in which an individual’s protected health information may be used or disclosed, and the access principle, which establishes individuals’ rights to review and obtain a copy of their protected health information in certain circumstances.

● Key challenges include understanding and resolving legal and policy issues (for example, those related to variations in states’ privacy laws), ensuring that only the minimum amount of information necessary is disclosed to only those entities authorized to receive the information, ensuring individuals’ rights to request access and amendments to their own health information, and implementing adequate security measures for protecting health information.

We recommended that HHS define and implement an overall privacy approach that identifies milestones for integrating the outcomes of its privacy-related initiatives, ensures that key privacy principles are fully addressed, and addresses challenges associated with the nationwide exchange of health information. In September 2008, we reported that HHS had begun to establish an overall approach for protecting the privacy of personal electronic health information—for example, it had identified milestones and an entity responsible for integrating the outcomes of its many privacy-related initiatives. Further, the federal health IT strategic plan released in June 2008 includes privacy and security objectives along with strategies and target dates for achieving them. However, in our view, more actions are needed.
Specifically, within its approach, the department had not defined a process to ensure that the key privacy principles and challenges we had identified were fully and adequately addressed. This process should include, for example, steps for ensuring that all stakeholders’ contributions to defining privacy-related activities are appropriately considered and that individual inputs to the privacy framework are effectively assessed and prioritized to achieve comprehensive coverage of all key privacy principles and challenges. Without such a process, stakeholders may lack the overall policies and guidance needed to assist them in their efforts to ensure that privacy protection measures are consistently built into health IT programs and applications. Moreover, the department may miss an opportunity to establish the high degree of public confidence and trust needed to help ensure the success of a nationwide health information network. To address these concerns, we recommended in our September report that HHS include in its overall privacy approach a process for ensuring that key privacy principles and challenges are completely and adequately addressed. Lacking an overall approach for protecting the privacy of personal electronic health information, there is reduced assurance that privacy protection measures will be consistently built into health IT programs and applications. Without such assurance, public acceptance of health IT may be at risk. In closing, Mr. Chairman, many important steps have been taken, but more is needed before we can make a successful transition to a nationwide health IT capability and take full advantage of potential improvements in care and efficiency that this could enable. It is important to have structures and mechanisms to build, maintain, and expand a robust foundation of health IT standards that are agreed upon by all important stakeholders. 
Further, given the complexity of the activities required to implement health IT and the large number of stakeholders, completing and implementing comprehensive planning activities are also key to ensuring program success. Finally, an overall privacy approach that ensures public confidence and trust is essential to successfully promoting the use and acceptance of health IT. Without further action taken to address these areas of concern, opportunities to achieve greater efficiencies and improvements in the quality of the nation’s health care may not be realized. This concludes my statement. I would be pleased to answer any questions that you or other Members of the Committee may have. If you should have any questions about this statement, please contact me at (202) 512-6304 or by e-mail at melvinv@gao.gov. Other individuals who made key contributions to this statement are Barbara S. Collier, Heather A. Collins, Amanda C. Gill, Linda T. Kohn, Rebecca E. LaPaze, and Teresa F. Tucker. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
As GAO and others have reported, the use of information technology (IT) has enormous potential to help improve the quality of health care and is important for improving the performance of the U.S. health care system. Given its role in providing health care, the federal government has been urged to take a leadership role to improve the quality and effectiveness of health care, and it has been working to promote the nationwide use of health IT for a number of years. However, achieving widespread adoption and implementation of health IT has proven challenging, and the best way to accomplish this transition remains subject to much debate. At the committee's request, this testimony discusses important issues identified by GAO's work that have broad relevance to the successful implementation of health IT to improve the quality of health care. To develop this testimony, GAO relied largely on its previous work on federal health IT activities. Health IT has the potential to help improve the efficiency and quality of health care, but achieving the transition to a nationwide health IT capability is an inherently complex endeavor. A successful transition will require, among other things, addressing the following issues: (1) Establishing a foundation of clearly defined health IT standards that are agreed upon by all important stakeholders. Developing, coordinating, and agreeing on standards are crucial for allowing health IT systems to work together and to provide the right people access to the information they need: for example, technology standards must be agreed on (such as file types and interchange systems), and a host of content issues must also be addressed (one example is the need for consistent medical terminology). 
Although important steps have been taken, additional effort is needed to define, adopt, and implement such standards to promote data quality and consistency, system interoperability (that is, the ability of automated systems to share and use information), and information protection. (2) Defining comprehensive plans that are grounded in results-oriented milestones and measures. Using interoperable health IT to improve the quality and efficiency of health care is a complex goal that involves a range of stakeholders, various technologies, and numerous activities taking place over an expanse of time, and it is important that these activities be guided by comprehensive plans that include milestones and performance measures. Without such plans, it will be difficult to ensure that the many activities are coordinated, their results monitored, and their outcomes most effectively integrated. (3) Implementing an approach to protection of personal privacy that encourages public acceptance of health IT. A robust approach to privacy protection is essential to establish the high degree of public confidence and trust needed to encourage widespread adoption of health IT and particularly electronic medical records. Health IT programs and applications need to address key privacy principles (for example, the access principle, which establishes the right of individuals to review certain personal health information). At the same time, they need to overcome key challenges (for example, those related to variations in states' privacy laws). Unless these principles and challenges are fully and adequately addressed, there is reduced assurance that privacy protection measures will be consistently built into health IT programs and applications, and public acceptance of health IT may be put at risk.
CERCLA requires EPA to compile a list of contaminated and potentially contaminated federal facilities. This list, known as the Federal Agency Hazardous Waste Compliance Docket (docket), is based on information that agencies are required to report to EPA. EPA compiled the first docket in 1988 and, under CERCLA, EPA is to publish a list of any new sites added to the docket in the Federal Register every 6 months. Under section 120(c) of CERCLA, EPA is to update the docket after receiving and reviewing notices from federal agencies concerning the generation, transportation, treatment, storage, or disposal of hazardous wastes or release of hazardous substances. After a site is listed on the docket, CERCLA requires EPA to take steps to ensure that a preliminary assessment is conducted. EPA has established 18 months as a reasonable time frame for agencies to complete the preliminary assessment. After the agency conducts the preliminary assessment, EPA reviews it to determine whether the information is sufficient to assess the likelihood of a hazardous substance release, a contamination pathway, and potential receptors. EPA may determine that the site does not pose a significant threat and requires no further action. If it determines that further investigation is needed, EPA may request that the agency conduct a site inspection to gather more detailed information. If, on the basis of the site inspection, EPA determines that hazardous substances, pollutants, or contaminants have been released at the site, EPA will use the information from the preliminary assessment and site inspection to calculate and document a site’s preliminary Hazard Ranking System (HRS) score, which indicates a site’s relative threat to human health and the environment based on potential pathways of contamination. Sites with an HRS score of 28.50 or greater become eligible for listing on the National Priorities List (NPL), a list that includes some of the nation’s most seriously contaminated sites. 
Based on the risk a site poses, EPA may place the site on the NPL. According to an EPA official, 158 federal sites were on the NPL as of September 2015. Once a site is on the NPL, EPA is to oversee the cleanup. As part of its oversight responsibility, EPA works with the responsible federal agency to evaluate the nature and extent of contamination at a site. The agency must then enter into an interagency agreement with EPA that includes (1) a review of remedial alternatives and the selection of the remedy; (2) schedules for completion of each remedy; and (3) arrangements for the long-term operation and maintenance of the site. According to EPA, the agreements also provide a process for EPA and the federal agency to resolve any disagreements related to implementing the cleanup remedy, with EPA being the final arbiter of such disputes. Once the agency and EPA agree on a cleanup remedy, the agency implements the remedy at the site. Afterwards, the agency must conduct long-term monitoring to ensure the remedy remains protective of human health and the environment. For federal sites not included on the NPL, CERCLA provides that state cleanup and enforcement laws apply, and most states have their own cleanup programs to address hazardous waste sites. USDA, Interior, DOD, and DOE have identified thousands of contaminated and potentially contaminated sites on land they manage, but there is not a complete inventory of sites, in particular for abandoned mines. We found in our January 2015 report that there were at least 1,491 contaminated sites on land managed by USDA. These sites include 1,422 Forest Service sites, which are primarily abandoned mines; 2 Animal and Plant Health Inspection Service (APHIS) sites; 3 Agricultural Research Service (ARS) sites; 61 former grain storage sites once managed by the Commodity Credit Corporation (CCC); and 3 foreclosure properties belonging to the Farm Service Agency (FSA).
In addition to sites with confirmed contamination, we found that USDA agencies have also identified some potentially contaminated sites. ARS had identified 3 sites that were potentially contaminated. Forest Service regions maintained inventories of potentially contaminated sites that included landfills, shooting ranges, and cattle dip vats, but there was no centralized database of these sites and no plans or procedures for developing one. These various inventories did not provide a complete picture of the extent of USDA’s potentially contaminated sites. For example, there was an unknown number of potentially contaminated former grain storage sites in the 29 states where the CCC previously used carbon tetrachloride. This number was unknown because the CCC relies on the states to notify it of potential contamination, and 25 of the 29 states had not yet reported whether there was suspected contamination at their former CCC grain storage sites. The Forest Service also deals with various other types of hazardous waste sites, such as methamphetamine laboratories, roadside spills, and waste dumps. Forest Service officials said that, since these types of sites may involve illegal activities and are, therefore, not routinely reported, it is not possible to develop a comprehensive inventory of such sites. In addition, in January 2015, we reported that the Forest Service had not developed a complete, consistent, or usable inventory of abandoned mines and had no plans and procedures for developing such an inventory because, according to Forest Service officials, they did not have the resources to complete a comprehensive inventory of all potentially contaminated abandoned mines on the agency’s lands. The Forest Service estimated that there were from 27,000 to 39,000 abandoned mines on its lands—approximately 20 percent of which may pose some level of risk to human health or the environment, based on the professional knowledge and experience of agency staff.
Such risks may include chemicals and explosives, acid mine drainage, and heavy metal contamination in mine waste rock. However, we concluded that because the Forest Service did not have a complete inventory of abandoned mine sites, the actual number of abandoned mines on National Forest System (NFS) lands was unknown. According to a USDA official, USDA first attempted to create a national inventory of mines on NFS lands in 2003. Then, in 2008, the Forest Service established the Abandoned Mine Lands (AML) database to aggregate all available data on abandoned mines on NFS lands. The AML database drew data on pending abandoned mine sites from the 2003 database and Forest Service regional inventories, as well as from the U.S. Geological Survey and various other federal, state, and local databases. USDA officials said that, once the AML database was established, the purpose of the earlier database shifted away from maintaining an AML inventory to tracking sites that entered into the CERCLA process. However, as we reported in January 2015, the AML database has a number of shortcomings. For example, the data migration from multiple inventories led to data redundancy issues, such as some mine sites being listed multiple times under the same or different names. In addition, USDA officials told us that there was a lot of variation in the accuracy and completeness of the data on these mine sites, but a quality assurance review had not yet been performed. One Forest Service official said that, because of these problems, the data in the AML database were unusable for purposes of compiling a complete and accurate inventory of abandoned mines. In 2012, the Forest Service tried to obtain the agency resources necessary to clean up the database. Even though the Forest Service rated this project as “critical,” the project did not receive any resources because other projects were deemed more important, according to a Forest Service official. 
Similarly, in our January 2015 report, we found several problems with the Forest Service’s regional abandoned mine inventories. First, some regional inventories were incomplete. For example, officials in Forest Service Region 10, which is composed solely of the State of Alaska, said they believed there may be some abandoned mines scattered throughout Tongass and Chugach National Forests that had not yet been inventoried. They said that Forest Service Region 10 did not have enough staff to assess all abandoned mines across such a large area. Second, several Forest Service regional inventories contained inaccurate data. Third, the Forest Service’s regional offices maintained their inventories differently. Some regional offices maintained their own inventories of potentially contaminated sites, whereas other regional offices utilized state or local agencies’ inventories. Finally, the type of data on abandoned mines varied from region to region, making it difficult to consolidate into a coherent national database. Some regional offices tracked mines at the site level, some by their features—such as mine shafts, pits, ore piles, or machinery—and some used both approaches. For example, officials in Forest Service Region 3 told us that they had identified over 3,000 abandoned mine sites, and officials in Forest Service Region 4 told us that they had identified approximately 2,000 mine features but had not yet consolidated these features into mine sites. We reported in January 2015 that, without a comprehensive inventory of such sites or plans and procedures for developing one, USDA and the Forest Service will not have reasonable assurance that they are prioritizing and addressing the sites that pose the greatest risk to human health or the environment. 
Consequently, in January 2015, we recommended that the Secretary of Agriculture direct the heads of the department’s land management agencies to develop plans and procedures for completing their inventories of potentially contaminated sites. USDA disagreed with our recommendation and stated that it had a centralized inventory and that this inventory was in a transition phase as a result of reduced funding levels. USDA also stated that it had taken a number of actions to manage its inventory in a more cost-effective manner, reduce operating costs, and eliminate data collection redundancies across the USDA agencies. Subsequently, in a June 2015 letter to GAO, USDA described three corrective actions that the department planned to take in response to our recommendation. We believe that these actions are needed. We found in our January 2015 report that Interior had identified 4,722 sites with confirmed or likely contamination. These include 4,098 Bureau of Land Management (BLM) sites that the agency reported had confirmed contamination or required further investigation to determine whether remediation was warranted. The majority of these sites were abandoned mines. Interior’s National Park Service (NPS) identified 417 sites with likely or confirmed contamination; the Bureau of Indian Affairs, 160 sites; the Fish and Wildlife Service, 32 sites; and the Bureau of Reclamation, 15 sites. These Interior agencies identified additional locations of concern that would require verification or initial assessment to determine if there were environmental hazards at the sites. Officials we interviewed from Interior agencies, except BLM, told us that they believed they had identified all sites with likely environmental contamination. We also found that the total number of sites BLM may potentially have to address is unknown, due primarily to incomplete and inaccurate data on abandoned mines on land managed by the agency. 
BLM accounts for the largest number of contaminated sites and sites that need further investigation in Interior’s inventory. Table 1 shows the number of contaminated or potentially contaminated sites in BLM’s inventory as of April 2014, and the extent to which remediation measures had been undertaken or were completed. We reported in January 2015 that BLM had also identified 30,553 abandoned mine sites that posed physical safety hazards but needed verification or a preliminary assessment to determine whether environmental hazards were present. However, the number of potentially contaminated mines may be larger than these identified sites because BLM had not identified all of the abandoned mines on the land it manages. We reported that BLM estimated that there may be approximately 100,000 abandoned mines that had not yet been inventoried in California, Nevada, and Utah, and that it would take 2 to 3 years to complete the estimates for the other nine BLM states. BLM estimated that it will take decades to complete the inventory. To inventory a site, BLM field staff must visit the site to collect data, research the land ownership and extent of mining activity that occurred, and record the information in BLM databases. In January 2015, we reported that BLM has an ongoing effort to estimate the number of abandoned mines and mine features that have not yet been inventoried on BLM lands and the approximate cost to complete the inventory. BLM established inventory teams in several states to go out and identify sites. In addition, BLM began an initiative in California to determine the number of sites that need to be inventoried after the state provided the agency with digitized maps of potential mine sites and verified a sample of the sites. For California, BLM estimated that 22,728 sites and 79,757 features needed to be inventoried. BLM estimated that approximately 69,000 and 4,000 sites remained to be inventoried in Nevada and Utah, respectively, on BLM land. 
BLM officials told us that they expect to provide a report to Congress on the inventory work remaining in these three states in 2015. The nine remaining states with BLM land do not have the digital geographic data available that BLM used for California, Nevada, and Utah, according to BLM officials, making it difficult for BLM to develop similar estimates for these states. BLM officials told us that the U.S. Geological Survey was working on an effort to develop datasets similar to those used to estimate the number of abandoned mines on BLM land in California, Nevada, and Utah. We found that Interior’s Bureau of Indian Affairs, Bureau of Reclamation, Fish and Wildlife Service, and NPS also have sites with environmental contamination. Officials from each of these agencies told us that they believed their inventories of sites with environmental contamination were complete. Both the Fish and Wildlife Service and NPS had identified locations of concern, where contamination is suspected based on known past activities or on observed and reported physical indicators requiring further assessment. For NPS, nearly half of these sites are old dump sites. NPS also has abandoned mines on the lands it manages and, in 2013, completed a system-wide inventory and assessment project to identify them. NPS’s inventory identified 37,050 mine features at 3,421 sites on NPS land. In January 2015, we reported that, of the total inventory, NPS officials said they believed that 3,841 features at 1,270 sites still required some level of effort to address human health and safety and/or environmental concerns. As a result of NPS’s system-wide inventory, officials with the agency’s Abandoned Mineral Lands Program told us that they believed that their inventory of all potentially contaminated sites was largely complete.
As we reported in July 2010, before federal environmental legislation was enacted in the 1970s and 1980s regulating the generation, storage, treatment, and disposal of hazardous waste, DOD activities and industrial facilities contaminated millions of acres of soil and water on and near DOD properties in the United States and its territories. DOD activities released hazardous substances into the environment primarily through industrial operations to repair and maintain military equipment, as well as the manufacturing and testing of weapons at ammunition plants and proving grounds. In June 2014, DOD reported to Congress that it had 38,804 sites in its inventory of sites with contamination from hazardous substances or pollutants or contaminants at active installations, formerly used defense sites, and Base Realignment and Closure (BRAC) locations in the United States, as well as munition response sites that were known or suspected to contain unexploded ordnance, discarded military munitions, or munitions constituents. Of these 38,804 sites, DOD’s report shows that 8,865 have not reached the department’s response complete milestone—which occurs when a remedy is in place and required remedial action operations, if any, are complete. In May 2013, we reported that, in addition to having a large number of contaminated and potentially contaminated sites in its inventory, DOD had the greatest number of sites listed on the NPL of any federal agency. We reported that, as of April 2013, DOD was responsible for 129 of the 156 federal facilities on the NPL at the time (83 percent). Also, we reported in March 2009 that the majority of DOD sites were not on the NPL and that most DOD site cleanups were overseen by state agencies rather than EPA, as allowed by CERCLA. Our work has found that the lack of interagency agreements between EPA and DOD has historically contributed to delays in cleaning up military installations.
For example, we reported in July 2010 that, as of February 2009, 11 DOD installations did not have an interagency agreement, despite CERCLA’s requirement that federal agencies enter into interagency agreements with EPA within a certain time frame to clean up sites on the NPL, and even though the department had reached agreement with EPA on the basic terms. Without an interagency agreement, EPA does not have the mechanisms to ensure that cleanup by an installation proceeds expeditiously, is properly done, and has public input, as required by CERCLA. We found one DOD installation that, after 13 years on the NPL and receipt of EPA administrative cleanup orders for sitewide cleanup, had not signed an interagency agreement. We recommended that the Administrator of EPA take action to ensure that outstanding CERCLA section 120 interagency agreements are negotiated expeditiously. In May 2013, we reported that DOD had made progress on this issue by decreasing the number of installations without an interagency agreement from 11 to 2, but both of those sites still posed significant risks. According to an EPA official, as of September 2015, one of these two installations now has an interagency agreement. However, according to this official, there is no interagency agreement at the other installation—Redstone Arsenal in Alabama. We recommended that EPA pursue changes to a key executive order that would increase its authority to hasten cleanup at sites without an interagency agreement. EPA agreed but has not taken action to have the executive order amended. We also suggested in July 2010 that Congress consider amending CERCLA section 120 to authorize EPA to impose administrative penalties at federal facilities placed on the NPL that lack interagency agreements within the CERCLA-imposed deadline of 6 months after completion of the remedial investigation and feasibility study.
We believe that this leverage could help EPA better satisfy its statutory responsibilities with agencies that are unwilling to enter into agreements where required under CERCLA section 120. As we reported in March 2015, 70 years of nuclear weapons production and energy research by DOE and its predecessor agencies generated large amounts of radioactive waste, spent nuclear fuel, excess plutonium and uranium, contaminated soil and groundwater, and thousands of contaminated facilities, including land, buildings, and other structures and their systems and equipment. DOE’s Office of Environmental Management (EM) is responsible for one of the world’s largest environmental cleanup programs: the treatment and disposal of radioactive and hazardous waste created as a by-product of producing nuclear weapons and energy research. The largest component of the cleanup mission is the treatment and disposal of millions of gallons of highly radioactive waste stored in aging and leak-prone underground tanks. In addition, radioactive and hazardous contamination has migrated through the soil into the groundwater, posing a significant threat to human health and the environment. According to DOE’s fiscal year 2016 congressional budget request, EM has completed cleanup activities at 91 sites in 30 states and in the Commonwealth of Puerto Rico, and EM has remaining cleanup responsibilities at 16 sites in 11 states. EM cleanup activities are carried out by contractors, such as Washington River Protection Solutions, which operates the nuclear waste tanks at the Hanford Site in Washington State. In March 2015, we reported that the National Nuclear Security Administration (NNSA), a separately organized agency within DOE, also manages many contaminated facilities. Some of these facilities are no longer in use, while others are still operational. Once NNSA considers these facilities to be nonoperational, they may be eligible for transfer to EM.
We found that NNSA had identified 83 contaminated facilities at six sites for potential transfer to EM for disposition over a 25-year period, 56 of which were currently nonoperational. Until the facilities are transferred to EM, however, NNSA is responsible for maintaining them and incurring the associated maintenance costs to protect human health and the environment from the risk of contamination. NNSA’s responsibilities may last for several years, or even decades, depending on when EM is able to accept the facilities. We found that as NNSA maintains contaminated nonoperational facilities, the facilities’ condition continues to worsen, resulting in increased costs to maintain them. As we reported in March 2015, EM has not accepted any facilities from NNSA for cleanup in over a decade. EM does not accept facilities for transfer until funding is available to carry out the decontamination and decommissioning work. In addition, EM officials told us that they do not include facilities maintained by NNSA in their planning until they have funding available to begin cleanup work. We concluded that without integrating NNSA’s inventory of nonoperational facilities into its process for prioritizing facilities for disposition, EM may be putting lower-risk facilities under its responsibility ahead of deteriorating facilities managed by NNSA that pose a greater risk to human health and the environment. We therefore recommended that EM integrate its lists of facilities prioritized for disposition with all NNSA facilities that meet EM’s transfer requirements and include this integrated list as part of the Congressional Budget Justification for DOE. We also recommended that EM analyze and consider life cycle costs for NNSA facilities that meet its transfer requirements and incorporate that information into its prioritization process.
Analyzing life cycle costs of nonoperational facilities shows that accelerating cleanup of some facilities, while others are maintained in their current states, could offer significant cost savings. DOE stated that it concurred with the issues identified in our report and described actions it plans to implement to address them. For example, DOE stated that it has formed a working group that may address GAO’s findings. The four departments reported allocating and spending millions of dollars annually on environmental cleanup. They also estimated future costs in the hundreds of millions or billions of dollars to clean up sites and address their environmental liabilities. We reported in January 2015 that the majority of USDA’s environmental cleanup funds are spent cleaning up ARS’s Beltsville NPL facility and abandoned mines and landfills on NFS lands, as well as mitigating potential groundwater contamination from activities at former CCC grain storage sites. In fiscal year 2013, USDA allocated over $22 million to environmental cleanup efforts. Specifically, USDA allocated (1) $3.7 million for department-wide cleanup projects, the majority of which were for cleanup at USDA’s Beltsville site and to cover legal expenses; (2) approximately $14 million for the Forest Service to conduct environmental assessments and cleanup activities; and (3) $4.3 million to mitigate contamination at former grain storage sites. The Forest Service also allocated approximately $20 million in one-time Recovery Act funds in fiscal year 2009 to cleanup activities at 14 sites located on, or directly impacting, land managed by the Forest Service. In addition, USDA seeks recovery of cleanup costs and natural resource damages under CERCLA from potentially responsible parties, such as owners and operators of a site, to help offset cleanup costs at sites where those parties caused or contributed to contamination. Cost recovery amounts vary from year to year.
We found that for fiscal years 2003 to 2013, USDA typically recovered $30 million or less annually. However, according to department documents, USDA successfully recovered over $170 million from a single mining company as part of a bankruptcy case in 2009. These funds were used to conduct cleanup activities at 13 mine sites located on NFS lands. In fiscal year 2011, USDA recovered $65 million from another mining company for restoration of injured natural resources in the Coeur d’Alene River Basin NPL site in Idaho. In its fiscal year 2013 financial statements, USDA reported a total of $176 million in environmental liabilities. These liabilities represent what USDA determined to be the probable and reasonably estimable future costs to address 100 USDA sites, as required by federal accounting standards. The $176 million amount included: $165 million to address asbestos contamination, $8 million for up to 76 CCC former grain storage sites in the Midwest that are contaminated with carbon tetrachloride, and $3 million for 24 Forest Service sites, including guard stations, work centers, and warehouses, among others. In addition, USDA reported $120 million in contingent liabilities in its fiscal year 2013 financial statements. Of this amount, $40 million was for environmental cleanup at four phosphate mine sites in southeast Idaho. We reported in January 2015 that Interior allocated about $13 million for environmental cleanup efforts in fiscal year 2013. Specifically, Interior allocated $10 million for cleanup projects department-wide; NPS allocated an additional $2.7 million, and the Fish and Wildlife Service allocated over $800,000 for environmental assessment and cleanup projects. In addition, BLM allocated more than $34 million to its hazardous management and abandoned mine programs. BLM provided over $18 million to its state offices; however, the amount specifically used for environmental cleanup projects was not readily available. 
BLM also spent over $27 million in one-time Recovery Act funds on physical safety and/or environmental remediation projects at 76 locations. According to BLM, there were 31 projects for environmental activities. For fiscal years 2003 through 2013, Interior allocated over $148 million in Central Hazardous Materials Fund (CHF) resources to its agencies to support response actions undertaken at contaminated sites under CERCLA. This amount includes over $49 million in CHF cost recoveries. Interior’s agencies undertook 101 projects with CHF funding during fiscal years 2003 through 2013. These projects supported a range of activities, from project oversight to advanced studies (e.g., remedial investigations, feasibility studies, engineering evaluations, and cost analyses) to removal and remedial actions. The majority of sites receiving CHF funding were abandoned mines, landfills, and former industrial facilities. In fiscal year 2013, Interior allocated $10 million to the CHF. During our work for the January 2015 report, BLM officials told us that the current funding levels were not sufficient to complete the inventory and address the physical and environmental hazards at abandoned mines. In its 2014 and 2015 budget justifications, Interior described proposals to charge the hardrock mining industry fees and use the funds to address abandoned mines. Similarly, an NPS official told us that the agency has inadequate funding to address its over 400 potentially contaminated and contaminated sites. According to an NPS official, the agency had been able to address its highest risk sites. If there is a very significant risk, NPS can usually obtain funds to address the portion of the site that has the highest risk, if not the site as a whole. According to NPS officials, NPS has not selected response actions for almost 300 sites because current funding levels are not sufficient to address them. 
As we found in our January 2015 report, Interior reported $192 million in environmental liabilities in its fiscal year 2013 financial statements. These liabilities represent what the agency has determined to be the probable and reasonably estimable future cost for completing cleanup activities at 434 sites, as required by federal accounting standards. These activities include studies or removal and remedial actions at sites where Interior has already conducted an environmental assessment and where Interior caused or contributed to the contamination or has recognized its legal obligation for addressing the site. Interior also disclosed in the notes to its financial statements the estimated cost range for completing cleanup activities at these sites. The cost range disclosed was approximately $192 million to $1.3 billion. Interior also disclosed the estimated costs for government-acknowledged sites—sites that are of financial consequence to the federal government with damage caused by nonfederal entities—where it was reasonably probable that cleanup costs would be incurred. Interior disclosed in the notes to its fiscal year 2013 financial statements a cost range for these activities of approximately $62 million to $139 million. The majority of this cost range was related to addressing 85 abandoned mine sites. As we have previously reported, cleanup costs for abandoned mines vary by type and size of the operation. For example, the cost of plugging holes is usually small, but reclamation costs for large mining operations can reach tens of millions of dollars. Historically, we have found that DOD has spent billions on environmental cleanup and restoration at its installations. For example, in July 2010, we reported that DOD spent almost $30 billion from 1986 to 2008 across all environmental cleanup and restoration activities at its installations, including NPL and non-NPL sites. 
In March 2010, we reported that since the Defense Environmental Restoration Program (DERP) was established, approximately $18.4 billion had been obligated for environmental cleanup at individual sites on active military bases, $7.7 billion for cleanup at sites located on installations designated for closure under BRAC, and about $3.7 billion to clean up formerly used defense sites. In June 2014, DOD reported to Congress that, in fiscal year 2013, DOD obligated approximately $1.8 billion for its environmental restoration activities. In its Agency Financial Report for fiscal year 2014, DOD reported $58.6 billion in total environmental liabilities. These liabilities include, but are not limited to, cleanup requirements for DERP for active installations, BRAC installations, and formerly used defense sites. According to DOE’s fiscal year 2016 Congressional budget request, DOE received an annual appropriation of almost $5.9 billion in fiscal year 2015 to support the cleanup of radioactive and hazardous wastes resulting from decades of nuclear weapons research and production. DOE has estimated that the cost of this cleanup may approach $300 billion over the next several decades. As we reported in May 2015, DOE spent more than $19 billion since 1989 on the treatment and disposition of 56 million gallons of radioactive and hazardous waste at its Hanford site in Washington State. In July 2010, we reported that four large DOE cleanup sites received the bulk of the $6 billion in Recovery Act funding for environmental cleanup. We previously reported that those sites have had problems with rising costs, schedule delays, and contract and project management. In 2014, DOE estimated that its total liability for environmental cleanup, the largest component of which is managed by EM, is almost $300 billion and includes responsibilities that could continue beyond the year 2089. 
We are beginning work at the request of the Senate Armed Services Committee to examine DOE’s long-term cleanup strategy, what is known about the potential cost and time frames to address DOE’s environmental liabilities, what factors DOE considers when prioritizing cleanup activities across its sites, and how DOE’s long-term cleanup strategy addresses the various risks that long-term cleanup activities encounter. As part of its oversight role in maintaining the list of contaminated and potentially contaminated federal sites and ensuring that preliminary assessments of such sites are complete, EPA has compiled a docket of over 2,300 federal sites that may pose a risk to human health and the environment. EPA is responsible for ensuring that the federal agencies assess these sites for contamination. Our January 2015 report examined the extent to which USDA and Interior had assessed the sites listed on the docket. As of August 2015, the agency’s docket listing consisted of 2,323 sites that may pose a risk to human health and the environment, which EPA compiled largely from information provided by federal agencies. We found in January 2015 that EPA has published many updates of the docket, but the agency has not consistently met the 6-month reporting requirement. Prior to 2014, the effort to compile and monitor the docket listings was a manual process. However, in 2014, EPA implemented revised docket procedures with a computer-based process that is to compile potential docket listings from agency notices by searching electronic records. EPA officials said that they expect the new system to allow them to update the docket in a more timely way in the future. EPA has published two docket updates with this new system, in December 2014 and August 2015. As we reported in January 2015, EPA officials told us that it is difficult for EPA to know about a site to list if agencies have not reported it. 
However, if EPA learns about a site that has had a release or threat of a release of hazardous substances through other means, EPA will list the site on the docket. It is important to note that the docket is a historical record of potentially contaminated sites that typically have been reported to EPA by agencies. Because it is a historical record, sites that subsequently were found to not be contaminated, and sites that the agencies may have addressed, are still included on the docket. In our January 2015 report on USDA and Interior potentially contaminated sites, we discussed the docket with officials from these two departments. We found that Interior and USDA officials disagreed with EPA officials over whether some of these sites should have been listed on the docket. Interior officials believed that CERCLA does not give EPA the discretion to list Interior sites unless Interior reports them to EPA and that EPA should limit its listing of sites on the docket to those reported by an agency under one of the provisions specifically noted in CERCLA. Interior and USDA officials also believed that abandoned mines should not be listed on EPA’s docket because the agencies did not cause the contamination and, therefore, the sites should not be considered federal sites. However, EPA officials believed that, regardless of whether USDA and Interior are legally liable for addressing these sites, they have an independent responsibility under Executive Order 12,580 and CERCLA as land management agencies owning the sites to address them. As I stated earlier, EPA established 18 months as a reasonable time frame for agencies to complete a preliminary assessment. However, in March 2009, we reported that EPA officials from two regions told us that some agencies such as DOD may take 2 to 3 years to complete a preliminary assessment because EPA does not have independent authority under CERCLA to enforce a timeline for completion of a preliminary assessment. 
In March 2009, we suggested that Congress consider amending CERCLA section 120 to authorize EPA to require agencies to complete preliminary assessments within specified time frames. For USDA and Interior, we found in our January 2015 report that as of February 2014, both Interior and USDA had conducted a preliminary assessment of the majority of their sites on EPA’s docket. However, EPA, Interior, and USDA have differing information on the status of preliminary assessments for the remaining docket sites. Our analysis of data in EPA’s Comprehensive Environmental Response, Compensation, and Liability Information System for our January 2015 report found that USDA still needed to conduct a preliminary assessment at 50 docket sites, and Interior needed to conduct a preliminary assessment at 79 docket sites. When we reviewed the status of these sites with USDA and Interior officials, the officials told us that they believed their agencies had met the preliminary assessment requirement for many of these sites. To help resolve disagreements between EPA and USDA and Interior regarding which remaining docket sites require preliminary assessments, we recommended, in January 2015, that EPA take three actions. First, EPA should review available information on USDA and Interior sites where EPA’s Superfund Enterprise Management System indicates that a preliminary assessment has not occurred to determine the accuracy of this information, and update the information, as needed. After completing this review, EPA should inform USDA and Interior whether the requirement to conduct a preliminary assessment at the identified sites has been met or if additional work is needed to meet this requirement. Finally, EPA should work with the relevant USDA and Interior offices to obtain any additional information needed to assist EPA in determining the accuracy of the agency’s data on the status of preliminary assessments for these sites. 
EPA agreed with these recommendations and, according to EPA officials, the agency has started taking steps to address them. Chairman Shimkus, Ranking Member Tonko, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to answer any questions you may have at this time. If you or your staff members have any questions about this testimony, please contact me at (202) 512-3841 or gomezj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Other individuals who made key contributions include: Barbara Patterson (Assistant Director), Antoinette Capaccio, Rich Johnson, Kiki Theodoropoulos, and Leigh White. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The federal government owns over 700 million acres of land. Some of this land—which is primarily managed by USDA, Interior, DOD, and DOE—is contaminated with hazardous waste from prior uses, such as landfills and mining. To respond to problems caused by improper disposal of hazardous substances in the past, in 1980, Congress passed CERCLA, also known as Superfund. Among other things, CERCLA requires owners and operators of hazardous waste sites to notify the Environmental Protection Agency (EPA)—which manages the Superfund program—of the existence of their facilities, as well as known, suspected, or likely releases of hazardous substances. This testimony focuses on (1) numbers of contaminated and potentially contaminated federal sites for four departments; (2) spending and estimates of future costs for cleanup at these federal sites; and (3) EPA's role in maintaining the list of contaminated and potentially contaminated federal sites and ensuring that preliminary assessments of such sites are complete. This testimony is based on prior GAO reports issued from March 2009 through March 2015. The Departments of Agriculture (USDA), the Interior, Defense (DOD), and Energy (DOE) have identified thousands of contaminated and potentially contaminated sites on land they manage but do not have a complete inventory of sites, in particular, for abandoned mines. GAO reported in January 2015 that USDA had identified 1,491 contaminated sites and many potentially contaminated sites. However, USDA did not have a reliable, centralized site inventory or plans and procedures for completing one, in particular, for abandoned mines. For example, officials at USDA's Forest Service estimated that there were from 27,000 to 39,000 abandoned mines on its lands—approximately 20 percent of which may pose some level of risk to human health or the environment. GAO also reported that Interior had an inventory of 4,722 sites with confirmed or likely contamination. 
However, Interior's Bureau of Land Management had identified over 30,000 abandoned mines that were not yet assessed for contamination, and this inventory was not complete. DOD reported to Congress in June 2014 that it had 38,804 sites in its inventory of sites with contamination. DOE reported that it has 16 sites in 11 states with contamination. These four departments reported allocating and spending millions of dollars annually on environmental cleanup and estimated future costs in the hundreds of millions of dollars or more in environmental liabilities. Specifically: GAO reported in January 2015 that, in fiscal year 2013, USDA allocated over $22 million to environmental cleanup efforts and reported in its financial statements $176 million in environmental liabilities to address 100 sites. GAO reported in January 2015 that Interior in fiscal year 2013 allocated about $13 million for environmental cleanup efforts and reported $192 million in environmental liabilities in its financial statements to address 434 sites. In July 2010, GAO reported that DOD spent almost $30 billion from 1986 to 2008 across all environmental cleanup and restoration activities at its installations. In its fiscal year 2014 Agency Financial Report, DOD reported $58.6 billion in total environmental liabilities. DOE reported receiving an annual appropriation of almost $5.9 billion in fiscal year 2015 to support cleanup activities. In 2014, DOE estimated its total liability for environmental cleanup at almost $300 billion. As part of maintaining the list of contaminated and potentially contaminated federal sites, EPA had compiled a list of 2,323 federal sites that may pose a risk to human health and the environment as of August 2015, according to EPA officials. EPA is responsible for ensuring that federal agencies assess these sites for contamination and has established 18 months as a reasonable time frame for agencies to complete a preliminary assessment. 
However, in March 2009, GAO reported that according to EPA officials, some agencies, such as DOD, may take 2 to 3 years to complete an assessment and that EPA does not have independent authority under the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) to enforce a timeline for completing the preliminary assessment. In March 2009, GAO suggested that Congress consider amending CERCLA section 120 to authorize EPA to require agencies to complete preliminary assessments within specified time frames. GAO is making no new recommendations. Previously, GAO made numerous recommendations to ensure that contaminated sites were identified and assessed, and some of these recommendations have not been fully implemented. GAO will continue to monitor implementation.
There are multiple statutes that concern health care fraud, including the following: The False Claims Act is often used by the federal government in health care fraud cases and prohibits certain actions, including the knowing presentation of a false claim for payment by the federal government. Civil monetary penalty provisions of the Social Security Act apply to certain activities, such as knowingly presenting a claim for medical services that is known to be false and fraudulent. In addition, the Social Security Act also provides for criminal penalties for knowing and willful false statements in applications for payment. The Anti-Kickback statute makes it a criminal offense for anyone to knowingly and willfully solicit, receive, offer, or pay any remuneration in return for or to induce referrals of items or services reimbursable under a federal health care program, subject to statutory exceptions and regulatory safe harbors. The Stark law and its implementing regulations generally prohibit physicians from making “self-referrals”—certain referrals for “designated health services” paid for by Medicare to entities with which the physician (or an immediate family member) has a financial relationship—and also prohibit the entities that perform the “designated health services” from presenting claims to Medicare or billing for these services. These prohibitions also extend to payments for Medicaid-covered services to the same extent and under the same terms and conditions as if Medicare had covered them. The Federal Food, Drug, and Cosmetic Act, as amended, makes it unlawful to, among other things, introduce an adulterated or misbranded pharmaceutical product or device into interstate commerce. Health care fraud takes many forms, and a single case can involve more than one fraud scheme. 
Schemes may include fraudulent billing for services not provided; for services provided that were not medically necessary; and for services intentionally billed at a higher level than appropriate for the services that were provided, called upcoding. Other fraud schemes include providing compensation—kickbacks—to beneficiaries or providers or others for participating in the fraud scheme, as well as schemes involving prescription drugs (including prescription drugs that contain controlled substances), such as the submission of false claims for prescription drugs that have been improperly marketed for non-FDA-approved uses and the illicit diversion of prescription drugs for profit or abuse. Fraud cases may involve more than one scheme; for example, an infusion clinic may pay kickbacks to a beneficiary for receiving care at the clinic, and the care that was provided and billed for may not have been medically necessary. Providers may be complicit in the schemes or unaware of the schemes. For example, providers who are complicit may willingly use their provider identification information to bill fraudulently, misrepresent services provided to receive higher payment, or receive kickbacks to provide their identification information for others to bill fraudulently. In other cases, providers may be unaware that their identification information has been stolen and used in various fraud schemes. Similarly, beneficiaries can be either complicit in or unaware of the fraud. Beneficiaries who are complicit may willingly provide their identification information to a provider for the purposes of committing fraud or receive kickbacks in exchange for providing their information to or receiving services from a provider. In contrast, they also may be unaware of fraud schemes in which the provider bills for services not medically necessary or uses beneficiaries’ identification information without their knowledge. 
Additionally, both beneficiaries and some providers may not be involved in the fraud scheme, in the sense that the fraud schemes involved circumstances other than a provider giving care to a beneficiary. For example, a case in which a pharmaceutical manufacturing company marketed prescription drugs for non-FDA-approved uses does not involve a provider giving care directly to a beneficiary. Individuals and entities that commit fraud do so in federal health care programs and private insurance programs, and may commit fraud in more than one program simultaneously. Several agencies are involved in investigating and prosecuting health care fraud cases, including CMS; HHS OIG; DOJ’s U.S. Attorneys’ Offices, Civil and Criminal Divisions; and the FBI. HHS OIG and the FBI primarily conduct investigations of health care fraud, and DOJ’s divisions typically prosecute or litigate the cases. DOJ prosecutes fraud cases that affect both federal health programs and private health insurance. Amid concerns about identity theft, proposals have been put forward to replace Medicare’s paper identification cards, which display beneficiaries’ Social Security numbers, with electronically readable cards, such as smart cards. Some proposals have suggested that such cards should be issued to providers as well. Electronically readable cards include cards that store information on magnetic stripes or bar codes, as well as smart cards, which use microprocessor chips to store and process data. In March 2015, we identified three key uses for electronically readable cards: (1) authenticating beneficiary and provider presence at the point of care, (2) electronically exchanging beneficiary medical information, and (3) electronically conveying beneficiary identity and insurance information to providers. 
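To illustrate the first of these uses, the sketch below shows, in Python, one common way a microprocessor card can prove its presence at the point of care: a challenge-response exchange using a secret key held on the chip. This is a simplified illustration of the general technique only, not a description of any Medicare card design; the function names and key handling shown are assumptions for the example.

```python
import hmac
import hashlib
import os

# A magnetic stripe or bar code only stores static data, which can be copied.
# A smart card's chip can instead hold a secret key and answer a fresh
# challenge each transaction, without the key ever leaving the card.

def card_response(card_key: bytes, challenge: bytes) -> bytes:
    """Computed on the card's chip; the secret key never leaves the card."""
    return hmac.new(card_key, challenge, hashlib.sha256).digest()

def verify_at_point_of_care(issuer_key: bytes, challenge: bytes,
                            response: bytes) -> bool:
    """The reader/back end recomputes the response with the shared key."""
    expected = hmac.new(issuer_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

key = os.urandom(32)        # secret provisioned to the card at issuance
challenge = os.urandom(16)  # fresh per transaction, so replays fail

# A genuine card answers the fresh challenge correctly.
assert verify_at_point_of_care(key, challenge, card_response(key, challenge))

# A copied static value (like magnetic-stripe data) cannot answer a new challenge.
old_response = card_response(key, challenge)
assert not verify_at_point_of_care(key, os.urandom(16), old_response)
```

This fresh-challenge property is why chip-based cards can authenticate presence more rigorously than stripe or bar-code cards, whose contents can simply be duplicated.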
We also found that smart cards could provide more rigorous authentication of beneficiaries and providers at the point of care than cards with magnetic stripes and bar codes, though all three types of cards can electronically convey identity and insurance information. Proponents of smart cards have suggested that, among other benefits, using smart cards may reduce health care fraud in the Medicare program. For example, some proponents claim that the use of smart cards to identify the beneficiary and provider at the point of care could potentially curtail certain types of fraud such as schemes in which providers misuse another provider’s information to bill fraudulently. However, our March 2015 report also found that there are several limitations associated with the use of smart cards. Specifically, it is possible that individuals may still be able to commit fraud by adapting and altering the schemes they use to account for the use of smart card technology. In addition, the use of smart card technology could introduce new types of fraud and ways for individuals to illegally access beneficiary information. For example, malicious software could be written onto a smart card and used to compromise provider IT systems. Further, various factors may limit the implementation of smart card technology in the Medicare program. As we found in our March 2015 report, while the use of smart cards to verify the beneficiary identity at the point of care could reduce certain types of fraud, it would have limited effect on Medicare fraud since CMS policy is to pay claims for Medicare beneficiaries even if they do not have a Medicare identification card at the time of care. CMS officials noted that it would not be feasible to require the use of smart cards because of concerns that this would limit beneficiaries’ access to care given that there may be legitimate reasons why a card might not be present at the point of care. 
For example, beneficiaries who experience a health care emergency may not have their Medicare cards with them at the time care is rendered. Additionally, we concluded that the use of smart cards to verify the beneficiary and provider presence at the point of care would require addressing costs and implementation challenges associated with card management and information technology system enhancements. These enhancements would be needed to update both CMS’s and providers’ claims processing and card management systems in order to achieve a high level of provider and beneficiary authentication as well as meet security requirements.

The majority of the 739 cases resolved in 2010 that we reviewed had more than one fraud scheme. Fraudulent billing schemes, such as billing for services that were not provided and billing for services that were not medically necessary, were the most common fraud schemes. Over 20 percent of the cases included kickbacks to providers, beneficiaries, or other individuals. Providers were complicit in the fraud schemes in over half of the cases. In contrast with providers, only about 14 percent of the 739 cases we reviewed had beneficiaries who were complicit in the schemes. Using cases from 2010, we identified 1,679 fraud schemes in the 739 cases that we reviewed. The majority of the 739 cases (about 68 percent) included more than one scheme: 61 percent of the cases had 2 to 4 schemes, and about 7 percent had 5 or more schemes. Thirty-two percent had only one scheme. The most common schemes used in the cases we reviewed were related to fraudulent billing, such as billing for services that were not provided (42.6 percent of cases), billing for services that were not medically necessary (24.5 percent), and upcoding, which is billing for a higher level of service than the service actually provided (17.5 percent). 
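As a quick arithmetic check on the figures above (739 cases, 1,679 schemes, and the one/2-to-4/5-or-more breakdown), a few lines of Python confirm that the reported percentages are internally consistent and compute the implied average number of schemes per case, which the text does not state directly:

```python
# Figures as reported: 739 cases, 1,679 schemes identified across them.
total_cases = 739
total_schemes = 1679

# Reported breakdown of cases by number of schemes (percent of cases).
pct_one_scheme = 32
pct_two_to_four = 61
pct_five_plus = 7

# The three categories cover all cases...
assert pct_one_scheme + pct_two_to_four + pct_five_plus == 100
# ...and the multi-scheme categories sum to the reported "about 68 percent."
assert pct_two_to_four + pct_five_plus == 68

# Implied average number of schemes per case (derived, not stated in the report).
print(round(total_schemes / total_cases, 2))  # 2.27
```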
Additionally, schemes used to support other fraud were also common, such as falsifying a substantial portion of records to support the fraud scheme (25.2 percent) and paying kickbacks to participants in the scheme (20.6 percent). Schemes related to prescription drugs (including prescription drugs that contained controlled substances), such as fraudulently obtaining or distributing prescription drugs or marketing prescription drugs for non-FDA-approved uses in order to commit fraud, were found in about 21 percent of the cases we reviewed. (See table 1 for the number and percentage of cases in which these schemes were used and app. II, table 6, for additional details on schemes we identified in cases.) Many different combinations of schemes were present in the 68 percent of cases with more than one scheme. The most common schemes were also the ones that were most often used together: billing for services not provided along with billing for services that were not medically necessary; and billing for services or supplies that were not prescribed by a physician along with falsifying a substantial portion of records in order to support the fraud scheme. (See app. II, table 7, for additional analysis of the number of schemes per case.) For example, according to the indictment in a fraud case we reviewed, a DME supplier used two schemes to commit fraud: (1) billing Medicare for medical equipment, such as orthotic braces, that was not provided to Medicare beneficiaries and (2) billing for supplies that had not been prescribed by physicians for these beneficiaries. Many different federal programs and private insurers were affected by fraud schemes in the cases we reviewed. In one-quarter of the cases, more than one program was affected. Medicare was affected in about 63 percent of the 739 cases reviewed, Medicaid and/or CHIP in about 32 percent, TRICARE in about 5 percent, and the Federal Employees Health Benefits Program (FEHBP) in 3 percent of the cases. 
In over 11 percent of the cases, private health insurers were affected. Other programs affected included Department of Veterans Affairs programs, Social Security programs, workers’ compensation programs, and other benefit plans. Among the fraud cases we reviewed, one-third—262 cases—had information in the documents we reviewed about the amount of fraudulent payments made by the programs and insurers. For these 262 cases, the total paid was $801.5 million. The amounts of the fraudulent payments in these cases typically ranged from $10,000 to $1.5 million. In about 20 percent of the 739 cases we reviewed, kickbacks were paid to providers, beneficiaries, or other individuals. The most common schemes used in cases where providers were paid kickbacks were marketing prescription drugs for non-FDA-approved uses, billing for services that were not medically necessary, upcoding, and self-referring. Many different types of providers received or provided kickbacks in these cases; the most common provider types were DME suppliers, hospitals, and pharmaceutical manufacturers. The most common schemes used in cases where beneficiaries were paid kickbacks were billing for services that were not medically necessary and billing for services that were not provided. In addition, kickbacks were paid to both beneficiaries and providers for their involvement in a fraud case or to other individuals, such as “recruiters,” who connect providers and beneficiaries in exchange for a fee. For 23 of the cases, the documents we reviewed included information about the amount of kickbacks paid to beneficiaries, providers, and other individuals, which totaled $69.7 million. In about 62 percent of the 739 cases we reviewed, providers were complicit in the cases, either by submitting fraudulent claims or by supporting the fraud schemes. (See table 2 and app. II, table 8, for additional information on the role of the provider, by fraud scheme.) 
For example, a physician would be complicit when billing for higher-level services than those actually provided in order to receive a higher payment rate (upcoding). A physician may also be complicit in a case by receiving kickbacks for referring beneficiaries to a particular clinic, even though the physician did not bill for the services provided by that clinic. Physicians, as well as hospitals, other clinics, home health agencies, and pharmacies, were the most common types of providers that were complicit.

Example of health care fraud case in which providers were complicit: According to an indictment in one of the cases we reviewed, a physician conspired with the owner of a medical testing company that performed diagnostic ultrasound tests to bill Medicare and private insurance companies for tests that were either never provided or were not medically necessary. The physician signed orders for these ultrasound tests for beneficiaries that he had not actually treated and received kickbacks from the medical testing company for the orders.

Providers were not complicit in about 10 percent of the cases we reviewed. In those cases, providers’ information had been stolen or used without their knowledge to carry out the fraud schemes. The most common schemes in these cases were falsifying records and billing for services or supplies that were not prescribed by the physicians. Additionally, in two cases, a fictitious provider was created to support the fraud schemes.

Example of health care fraud case in which provider was not complicit: According to a complaint, a DME supplier billed Medicare for supplies prescribed by a physician. However, those supplies were not prescribed by the physician the DME supplier had listed on the claims. During an interview with investigators, the physician indicated his practice was not to prescribe DME supplies to his patients and instead to refer them to a specialist. 
When reviewing a list of 200 Medicare beneficiaries for whom the DME supplier had listed him as the prescribing physician on the claims, the physician identified that only 12 of those listed had ever been his patients. In this case, the DME supplier was using the physician’s information without his knowledge to bill for DME supplies that were not provided. No provider was involved in another 10 percent of the cases that we reviewed. For example, no provider gave care directly or billed for services provided to a beneficiary in cases where a prescription drug manufacturer marketed prescription drugs for non-FDA-approved uses. In the remaining 18.5 percent of cases, we were unable to determine how the provider was involved because the court documents did not include this information. In contrast with providers, only about 14 percent of the 739 cases we reviewed had beneficiaries who were complicit in the schemes. For example, there were cases in which the beneficiary willingly provided identification information so a provider could fraudulently bill, or the beneficiary received kickbacks for receiving treatment at a specific clinic. Among the cases in which the beneficiary was complicit, the most common schemes were billing for services that were not medically necessary, billing for services that were not provided, and falsifying records to support the fraud schemes.

Example of health care fraud case in which beneficiary was complicit: According to an information document filed by prosecutors in one case we reviewed, an employee of a medical clinic asked a beneficiary to visit the clinic complaining of ailments that the beneficiary did not have in order to receive prescriptions for drugs containing controlled substances. The beneficiary visited the clinic complaining of a toothache and obtained a prescription for a controlled substance. The employee then purchased that medication from the beneficiary. 
In about 62 percent of the 739 cases we reviewed, beneficiaries were not complicit in the schemes. Among beneficiaries that were not complicit, most received services from the provider, but there was no evidence that the beneficiary was aware of the fraud (54.8 percent). For example, beneficiaries who were not complicit in the schemes received services from the provider but were unaware that the provider billed for upcoded services or that they received services that were not medically necessary. In 39 cases (5.3 percent), court documents we reviewed indicated that the beneficiaries’ information was stolen or sold without their knowledge. In an additional 12 cases (1.6 percent) we reviewed, the beneficiaries’ information was obtained through false pretenses, such as through a telemarketer. (See table 3 and app. II, table 9, for additional information on the role of the beneficiary, by fraud scheme.) Additionally, the beneficiary was not involved in about 13 percent of the 739 cases we reviewed. The beneficiary may not have been involved in the fraud schemes because the schemes did not involve billing for care provided to a beneficiary. For instance, in one case, a pharmaceutical drug manufacturer marketed drugs for non-FDA-approved uses and paid kickbacks to providers for prescribing those drugs to beneficiaries. This scheme did not involve billing for care provided to the beneficiary. For the remaining 11 percent of cases we reviewed, we were unable to determine whether the beneficiary was complicit or not, and in 1 case, a fictitious beneficiary’s information was created to support the fraud scheme. Among the 739 cases, we found 165 cases (22 percent) in which the entire case (2 percent) or part of the case (20 percent) could have been affected by the use of smart cards. The remaining 574 cases (78 percent) had schemes that would not have been affected by smart cards. (See fig. 1.) 
Example of health care fraud case in which the provider was complicit but the beneficiary was not
According to a complaint document in one case we reviewed, the provider submitted duplicate claims for the same service provided to a beneficiary. The beneficiary received the service from the provider the first time but was unaware that a second claim had been submitted as if the service had been provided a second time when it had not.

Example of health care fraud case in which neither the beneficiary nor the provider was complicit
According to a complaint document in one fraud case we reviewed, a DME supplier used the identification information for several beneficiaries to submit a bill for DME supplies. The DME supplier also used a physician’s identification information to allege that the supplies had been prescribed when that physician had not prescribed the DME supplies. In this case, neither the beneficiaries nor the provider were aware of the fraud schemes.

Among the 739 cases we reviewed, we found 165 cases in which all or part of the case could have been affected by the use of smart cards. These cases included at least one of six schemes that smart cards could have affected because the schemes involved a lack of verification of the beneficiary or the provider at the point of care.
These six schemes were (1) billing for services that were never actually provided, with no legitimate services provided at all; (2) misusing a provider’s identification information to bill fraudulently (such as using a retired provider’s identification information); (3) misusing a beneficiary’s identification information to bill fraudulently (such as using a deceased beneficiary’s identification information or stealing a beneficiary’s information); (4) billing more than once for the same service (known as duplicate billing) by altering a small portion of the claim, such as the date, and resubmitting it for payment; (5) providing services to ineligible individuals; and (6) falsifying a substantial part of the records to indicate that beneficiaries or providers were present at the point of care.

In 18 cases (2.4 percent of all cases resolved in 2010 that we reviewed), the entire case could have been affected because all of the schemes on those cases involved the lack of verification of the beneficiary or provider at the point of care. For these 18 cases, either the beneficiary or the provider was complicit in the scheme, while the other was not, or neither the beneficiary nor the provider was complicit in the scheme. The use of smart cards could have had an effect because the card would have been able to verify at least one identity.

Example of health care fraud case that may have been partially affected by the use of smart cards
According to a complaint in one case we reviewed, a physical therapy provider was billing for services that were not medically necessary and was submitting duplicate bills for the same service. This case could have been partially affected by the use of smart cards, as the smart card would have verified that the beneficiary was present for only one service in which a duplicate bill was submitted but would not have affected the ability of the provider to bill for services that were not medically necessary.
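The entire/partial/none classification described above can be sketched in code. This is a hypothetical illustration: the scheme labels below are invented shorthand rather than identifiers from the case files, and the sketch ignores the complicity caveat (cases in which both the beneficiary and the provider were complicit would not have been affected even for these six schemes).

```python
# Hypothetical sketch of the case-classification logic. Scheme labels are
# invented shorthand, not codes from GAO's data; complicity is ignored.
SMART_CARD_SCHEMES = {
    "billed_never_provided",       # (1) no legitimate services at all
    "misused_provider_id",         # (2) e.g., a retired provider's information
    "misused_beneficiary_id",      # (3) e.g., a deceased beneficiary's information
    "duplicate_billing",           # (4) resubmitting an altered claim
    "services_to_ineligible",      # (5) services to ineligible individuals
    "falsified_presence_records",  # (6) records faking presence at point of care
}

def smart_card_effect(case_schemes):
    """Classify a case's smart-card effect as 'entire', 'partial', or 'none'."""
    schemes = set(case_schemes)
    affected = schemes & SMART_CARD_SCHEMES
    if not affected:
        return "none"
    return "entire" if affected == schemes else "partial"
```

For instance, a case combining duplicate billing with medically unnecessary services would classify as "partial," matching the physical therapy example above: the card could catch the duplicate claim but not the unnecessary services.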
Smart cards could have partially affected an additional 147 cases (19.9 percent) in which at least one of the six schemes was present. However, because other fraud schemes were also used, the entire case would not have been affected. (See table 4.)

Smart card technology would not have affected the majority of fraud schemes we identified, which represented 574 of the 739 cases that we reviewed (78 percent). In these instances, the schemes would not have been affected by the smart cards because although the beneficiary and provider were present at the point of care, the provider misrepresented the services rendered after the smart cards would have registered their identities. These schemes included the following: billing for services that were not provided along with services that were provided; billing for services that were not medically necessary; unbundling of services; billing for services that were not prescribed or not referred by a physician; and billing for services as if they were provided by a physician to receive a higher payment rate when they were actually provided by another provider whose payment rate would have been lower.

In these schemes, smart cards would not be able to detect that the provider misrepresented the actual services provided even if the cards verified the beneficiary’s and provider’s presence. Similarly, schemes that involved a provider misrepresenting eligibility to provide services would not have been affected by smart cards, including schemes in which bills were submitted for services provided by an excluded provider or by an unlicensed, uncertified, or ineligible provider. Many of these schemes involved health care entities that billed for services provided by employees or contractors that were not licensed or were excluded from providing care. In addition, smart card technology would not have affected schemes in which the beneficiary was not present or the verification of the beneficiary and provider was not relevant to the scheme.
These fraud schemes involved improper marketing of prescription drugs, including drugs for non-FDA-approved uses; misbranding prescription drugs; inflating prescription drug prices; and physician self-referrals. In addition, smart cards would not have affected schemes related to improperly obtaining or distributing prescription drugs (including drugs that contained controlled substances), regardless of whether the beneficiary’s or provider’s identity was verified, such as cases in which individuals visited multiple providers complaining about pain to obtain prescriptions. Further, smart cards would not have had an effect on cases in which the beneficiary and provider were complicit in the scheme, regardless of the schemes used on the case. For instance, smart cards would not have an effect on the billing for services never provided if both the beneficiary and provider were willing participants in the scheme. Similarly, smart cards would not have an effect on cases in which kickbacks were paid to a beneficiary or to a provider that allowed his or her smart card to be used for fraud. HHS and DOJ provided technical comments on a draft of this report, which we have incorporated as appropriate. In its comments, HHS reiterated that it would be difficult for CMS to implement smart cards in the Medicare program because implementation would require significant changes. For example, CMS stated that it would need to require that Medicare beneficiaries present smart cards at the point of care, which is contrary to current CMS policy and which CMS believes could create access to care issues. Additionally, CMS officials noted that implementing smart cards in Medicare would be a significant business process change, requiring substantial resources and time to implement. This report, as well as our past work on smart cards in Medicare, recognizes the concerns raised by CMS. 
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, the Attorney General, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or kingk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix III. This appendix provides details on the methodology we used to describe the types of health care fraud and their prevalence among cases resolved in 2010 that we reviewed. To describe types of health care fraud, we reviewed our prior reports, as well as reports from the Department of Health and Human Services (HHS) Office of Inspector General (OIG) and the Department of Justice (DOJ) to develop a list of schemes and definitions for these schemes, and then reviewed cases resolved in 2010 that we obtained through the course of work for our 2012 report. Specifically, we reviewed several government reports, such as reports produced by HHS and DOJ on the Health Care Fraud and Abuse Control Program, and DOJ and HHS OIG press releases to identify fraud schemes that were commonly included in the reports and to develop definitions for these schemes. See table 5 for the health care fraud schemes developed for our case review. Using the list of fraud schemes identified, we reviewed court documents for the health care fraud cases resolved in 2010 to determine the prevalence of health care fraud schemes. The data we obtained for the 2012 report were for fraud cases, including investigations and prosecutions, from HHS OIG and DOJ’s U.S. 
Attorneys’ Offices and Civil Division and included a variety of information such as information on the subjects of the fraud case and outcomes of the case (such as prison or probation). We obtained data from both HHS OIG and DOJ, as HHS OIG conducts investigations but DOJ does not prosecute all of the cases that are investigated. Also, because HHS OIG often works jointly with DOJ on fraud cases, for our 2012 report, we reduced duplication of fraud cases from the data we received from HHS OIG and DOJ by comparing subjects of the fraud cases that were in more than one data set we received. Although the cases we obtained for the 2012 report included investigations as well as prosecutions, judgments, and settlements, for this engagement, we included only cases that had been adjudicated favorably for the United States, meaning criminal cases in which the subjects were found guilty, pled guilty, or pled no contest to at least one of the charges, and civil cases that resulted in a judgment for the United States or a settlement. There were 834 cases that resulted in a favorable outcome for the United States, though we reviewed only 739 of these cases. We excluded 95 cases because they were duplicative of another case in our data set (18 cases), they were not health care fraud cases (21 cases), the data were insufficient to determine the fraud schemes used on the cases (15 cases), the cases were administrative actions rather than criminal or civil cases (9 cases), or we could not locate information on the cases, such as a court document or a press release, to determine the fraud schemes involved in the cases (32 cases). To determine the health care fraud schemes used in the 739 cases included in our report, we reviewed court documents associated with the charging stage of the case (such as indictment, information, or complaint) unless the charging document for a case was not available.
We used court documents that we had previously obtained through our work on the 2012 report. For that report, we obtained court documents from the Public Access to Court Electronic Records (PACER) database for the DOJ cases. However, we did not have a charging document for all of the DOJ cases and did not have a charging document for any of the HHS OIG cases. As a result, we searched in PACER for charging documents for any cases for which we were missing a charging document. If the charging document was not available, we reviewed case details as described in a DOJ or Federal Bureau of Investigation (FBI) press release. For several HHS OIG cases, we were unable to locate a charging document or a press release and obtained other court documents, such as settlement agreements and plea agreements, from HHS OIG. When reviewing the court documents, we collected information on the health care fraud schemes that were used in the cases along with information about the beneficiary’s role, the provider’s role, whether a durable medical equipment supplier was involved, the programs that were affected by the fraud, and any monetary amounts associated with the fraud schemes (such as the amounts paid). For each case we reviewed, two reviewers independently categorized all information obtained for the case, including the relevant health care fraud schemes used on the case, and resolved any differences in the categorization. To assess the reliability of the data, we reviewed relevant documentation and examined the data for reasonableness and internal consistency. We found these data were sufficiently reliable for the purposes of our report. Tables 6 through 9 provide detailed information on health care fraud schemes for cases we reviewed, including whether the scheme was the only scheme in the case or used in combination with other schemes, the number of schemes used in cases, the role of the provider, and the role of the beneficiary. 
In addition to the contact named above, Martin T. Gahart, Assistant Director; Christine Davis; Laura Elsberg; Christie Enders; Matt Gever; Jackie Hamilton; Dan Lee; Elizabeth T. Morrison; and Carmen Rivera-Lowitt made key contributions to this report.
While there have been convictions for multimillion dollar schemes that defrauded federal health care programs, there are no reliable estimates of the magnitude of fraud within these programs or across the health care industry. In some fraud cases, individuals have billed federal health care programs or private health insurance by using a beneficiary's or provider's identification information without the beneficiary's or provider's knowledge. One idea to reduce the ability of individuals to commit this type of fraud is to use electronically readable card technology, such as smart cards. Proponents say that these cards could reduce fraud by verifying that the beneficiary and the provider were present at the point of care. GAO was asked to identify and categorize schemes found in health care fraud cases. This report describes (1) health care fraud schemes and their prevalence among cases resolved in 2010 and (2) the extent to which health care fraud schemes could have been affected by the use of smart card technology. GAO reviewed reports on health care fraud and smart card technology and reviewed court documents for 739 fraud cases resolved in 2010 obtained for a related 2012 GAO report on health care fraud. GAO is not making any recommendations. The Department of Health and Human Services and the Department of Justice provided technical comments on a draft of this report, which GAO incorporated as appropriate. GAO's review of 739 health care fraud cases that were resolved in 2010 showed the following: About 68 percent of the cases included more than one scheme with 61 percent including two to four schemes and 7 percent including five or more schemes. The most common health care fraud schemes were related to fraudulent billing, such as billing for services that were not provided (about 43 percent of cases) and billing for services that were not medically necessary (about 25 percent). 
Other common schemes included falsifying records to support the fraud scheme (about 25 percent), paying kickbacks to participants in the scheme (about 21 percent), and fraudulently obtaining controlled substances or misbranding prescription drugs (about 21 percent). Providers were complicit in 62 percent of the cases, and beneficiaries were complicit in 14 percent of the cases. GAO's analysis found that the use of smart cards could have affected about 22 percent (165 cases) of the cases GAO reviewed, because all or part of those cases involved schemes that depended on the lack of verification of the beneficiary or provider at the point of care. However, in the majority of cases (78 percent), smart card use likely would not have affected the cases because either beneficiaries or providers were complicit in the schemes, or for other reasons. For example, the use of cards would not have affected cases in which the provider misrepresented the service (as in billing for services not medically necessary), or cases in which the beneficiary and provider were not directly involved in the scheme (as in illegal marketing of prescription drugs).
Although Congress has not established mechanisms for regularly adjusting for inflation the fixed dollar amounts of civil tax penalties administered by IRS, it has done so for penalties administered by other agencies. When the Federal Civil Penalties Inflation Adjustment Act of 1990 (Inflation Adjustment Act) was enacted, Congress noted that inflation had weakened the deterrent effect of many civil penalties. The stated purpose of the 1990 act was “to establish a mechanism that shall (1) allow for regular adjustment for inflation of civil monetary penalties; (2) maintain the deterrent effect of civil monetary penalties and promote compliance with the law; and (3) improve the collection by the Federal Government of civil monetary penalties.” Congress amended the Inflation Adjustment Act in 1996 and required some agencies to examine their covered penalties at least once every 4 years thereafter and, where possible, make penalty adjustments. The Inflation Adjustment Act exempted penalties under the IRC of 1986, the Tariff Act of 1930, the Occupational Safety and Health Act of 1970, and the Social Security Act. As stated earlier, some civil tax penalties are based on a percentage of liability and therefore are implicitly adjusted for inflation. For example, the penalty for failure to pay tax obligations is 0.5 percent of the tax owed per month, not exceeding 25 percent of the total tax obligations. However, other civil penalties have fixed dollar amounts, such as minimums or maximums, which are not linked to a percentage of liability. For example, a minimum penalty of $100 exists for a taxpayer who fails to file a tax return. Adjusting civil tax penalties for inflation on a regular basis to maintain their real values over time may increase IRS assessments and collections. 
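The contrast between percentage-based and fixed-dollar penalties can be made concrete with a simplified calculation of the failure-to-pay penalty described above. This is a sketch only: the actual Internal Revenue Code rules include conditions (such as interaction with the failure-to-file penalty) that are omitted here.

```python
def failure_to_pay_penalty(tax_owed, months_late):
    """Simplified failure-to-pay penalty: 0.5% of the unpaid tax per month,
    capped at 25% of the total tax owed. Additional IRC conditions omitted."""
    return tax_owed * min(0.005 * months_late, 0.25)

# Because this penalty scales with the tax owed, its real value rises with
# inflation automatically; a fixed $100 minimum penalty does not.
```

Six months late on a $10,000 liability yields a $300 penalty; at 50 months or more, the 25 percent cap binds at $2,500.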
Based on our analysis, if the fixed dollar amounts of civil tax penalties had been adjusted for inflation, the increase in IRS assessments potentially would have ranged from an estimated $100 million to $320 million and the increase in collections would have ranged from an estimated $38 million to $61 million per year from 2000 to 2005, as shown in table 1. The majority of the estimated increase in collections from adjusting these penalties for inflation was generated from the following four types of penalties: (1) failure to file tax returns, (2) failure to file correct information returns, (3) various penalties on returns by exempt organizations and by certain trusts, and (4) failure to file partnership returns. The estimated increases in collections associated with these penalties for 2004 are shown in table 2. We highlight 2004 data because, according to IRS officials, approximately 85 percent of penalties are collected in the 3 years following the assessment. The same four penalty types account for the majority of the estimated increase in collections for the prior years. Our analysis showed that these four penalties would account for approximately 99 percent of the estimated $61 million in additional IRS collections for assessments made in calendar year 2004. Because penalty amounts have not been adjusted for decades in some cases, the real value of the fixed dollar amounts of these penalties has decreased. For example, the penalty for failing to file a partnership return was set at $50 per month in 1979, which is equivalent to about $18 today, or a nearly two-thirds decline in value, as shown in table 3. If the deterrent effect of penalties depends on the real value of the penalty, the deterrent effect of these penalties has eroded because of inflation. 
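The erosion shown in table 3 is a straightforward CPI deflation. The sketch below assumes illustrative CPI-U annual averages of roughly 72.6 for 1979 and 207.3 for 2007; the precise index values are an assumption for illustration, not figures from the report.

```python
# Illustrative CPI-U annual averages (1982-84 = 100); assumed values.
CPI_U = {1979: 72.6, 2007: 207.3}

def real_value(nominal, set_year, as_of_year):
    """Real value, in set_year dollars, of a fixed nominal amount that has
    never been adjusted, evaluated as of as_of_year."""
    return nominal * CPI_U[set_year] / CPI_U[as_of_year]

# The $50-per-month partnership penalty set in 1979 comes out to roughly
# $18 in 1979 dollars by 2007, a decline of nearly two-thirds.
```

Equivalently, keeping the penalty's real value constant would have required raising the nominal amount by the same ratio of index values.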
In addition, not adjusting these penalties for inflation may lead to inconsistent treatment of otherwise equal taxpayers over time because taxpayers penalized when the amounts were set could effectively pay a higher penalty than taxpayers with the same noncompliance pay years later. Finally, if the real value of penalties declines, but IRS’s costs to administer them do not, imposing penalties becomes less cost-effective for IRS and could lead to a decline in their use. In the past, Congress has established fixed penalty amounts, increased fixed penalty amounts, or both in order to deter taxpayer noncompliance with the tax laws. For example, the $100 minimum for failure to file a tax return was created in 1982 because many persons who owed small amounts of tax ignored their filing obligations. In addition, Congress increased penalties for failure to file information returns in 1982 because it believed that inadequate information reporting of nonwage income was a substantial factor in the underreporting of such income by taxpayers. As recently as 2006, IRS’s National Research Program confirmed Congress’s belief that compliance is highest where there is third-party reporting. Congress has also recently adjusted some civil penalties that have fixed dollar amounts. For example, the minimum penalty for a bad check was raised from $15 to $25 in May 2007, and the penalty for filing a frivolous return was raised from $500 to $5,000 in December 2006. We spoke with officials from offices across IRS whose workloads would be affected if regular adjustments of penalties occurred. IRS officials from all but one unit said that regularly updating the fixed dollar amounts of civil tax penalties would not be a significant burden. 
Officials from one relatively small office—the Office of Penalties—said that the effort involved in such adjustments might be considerable depending on the number of penalties being adjusted and would require a reprioritization of their work, since their office would have lead responsibility for monitoring the administrative steps necessary to implement the adjustments and coordinating tasks among a wide range of functions within IRS. In addition, the limited number of tax practitioners we interviewed told us that the administrative burden associated with adjusting these penalties for inflation on a regular basis would be low. Officials from all but one unit we spoke to within IRS said that regularly adjusting civil tax penalties for inflation would not be burdensome. Some officials added that adequate lead time and minimally complex changes would reduce the administrative impact. For example, officials from the Office of Forms and Publications and the Office of Chief Counsel said that adjustments to civil penalty amounts would not affect their work significantly. While each office would have to address the penalty changes in documents for which they are responsible, in some cases these documents are updated regularly already. Similarly, officials responsible for programming IRS’s computer systems explained that these changes would not require out-of-the-ordinary effort unless they had little lead time in which to implement the changes. However, officials from the Office of Penalties within the Small Business/Self-Employed division (SB/SE)—the unit that would be responsible for coordinating IRS’s implementation of any adjustments to penalties among a wide range of functions within IRS—felt that the administrative burden associated with these changes might be considerable depending on the number of penalties being adjusted.
The Office of Penalties, which currently consists of 1 manager and 10 analysts, provides policies, guidelines, training, and oversight for penalty issues IRS-wide, not just within SB/SE. When legislation affecting penalties is enacted, the Office of Penalties creates an implementation team that helps determine what IRS needs to do to implement the new legislation. In the case of adjusting penalties for inflation, the Office of Penalties would work with numerous other IRS units to coordinate the necessary changes to forms, training materials, computer systems, and guidance, among other things. Regularly changing four penalties would take less effort than regularly changing all penalties. In addition, making these changes would require the office to reprioritize its work or receive more resources. While the Office of Penalties has not done a formal analysis of the resources needed, an official stated that the additional work would not require a significant increase in staffing, such as a doubling of the size of the office. As a result, the amount of additional resources necessary for the penalty adjustments does not appear to be of sufficient scale to have a large impact on IRS overall. Further, officials we interviewed from other IRS units who would perform the work described by the Office of Penalties said that the administrative burden would not be significant for them. Some IRS officials who oversee the implementation of other periodic updates to IRS databases and documents said that the legislative changes requiring regular updates are most burdensome initially but become less of an issue in each subsequent year. Some officials also said that with enough advance notice, they would be able to integrate the necessary changes into routine updates. For example, program changes could be integrated into the annual updates that some Modernization and Information Technology Service programs receive.
Other areas in IRS, such as the Office of Forms and Publications, already conduct annual and in some cases quarterly updates of their forms, and according to officials, a change to the tax penalty amount could easily be included in these regularly scheduled updates. IRS has a variety of experiences that may provide guidance that would be relevant to adjusting civil tax penalties with fixed dollar amounts for inflation. IRS has extensive procedures for implementing statutory changes to the tax code. Further, IRS has experience implementing inflation adjustment calculations. For example, tax brackets, standard deduction amounts, and the itemized deduction limit are among the inflation adjustments conducted annually by IRS. In addition, the administrative changes associated with regular updates to the interest rate have some similarities to the types of changes that an inflation adjustment may require. For example, the Office of Chief Counsel issues quarterly guidance on interest rates and the Communications & Liaison Office provides regular updates on interest changes to the tax professional community, including practitioner associations. Changes to the civil tax penalty fixed dollar amounts could be handled in a similar manner. The limited number of tax practitioners that we spoke with also expected the impact on their work from adjustments to the fixed dollar amounts of civil tax penalties for inflation to be relatively low. For example, one tax practitioner said that he expected to spend more time explaining different penalty amounts to clients, particularly in situations where taxpayers who receive the same penalty in different tax years may not understand why different penalty amounts were applied. 
In addition, three other practitioners we spoke with said that the changes may lead to an increased reliance on software programs that tax preparers often use to assist them with determining penalty amounts since making the calculations involving inflation adjustments could become more onerous for the tax practitioners to do without software. The real value and potential deterrent effect of civil tax penalties with fixed dollar amounts has decreased because of inflation. Periodic adjustments to the fixed dollar amounts of civil tax penalties to account for inflation, rounded appropriately, may increase the value of collections by IRS, would keep penalty amounts at the level Congress initially believed was appropriate to deter noncompliance, and would serve to maintain consistent treatment of taxpayers over time. Regularly adjusting the fixed dollar amounts of civil tax penalties for inflation likely would not put a significant burden on IRS or tax practitioners. Congress should consider requiring IRS to periodically adjust for inflation, and round appropriately, the fixed dollar amounts of the civil penalties to account for the decrease in real value over time and so that penalties for the same infraction are consistent over time. On July 30, 2007, we sent a draft of this report to IRS for its comment. We received technical comments that have been incorporated where appropriate. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its date. At that time, we will send copies of this report to appropriate congressional committees and the Acting Commissioner of Internal Revenue. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-9110 or at brostekm@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. To determine the potential effect that adjusting civil tax penalties for inflation would have on the dollar value of penalty assessments and collections, we used the Consumer Price Index-Urban (CPI-U) to adjust actual penalty assessment and collection information contained in the Enforcement Revenue Information System (ERIS), which was created to track Internal Revenue Service (IRS) enforcement revenues. We provided inflation-adjusted estimates for penalties that had been assessed for at least $1 million in any one year from 2000 to 2005 and had either a fixed minimum or set amount. This excluded less than two one-hundredths of a percent of all assessments each year. In addition, we assumed that assessment rates and collection rates would stay the same regardless of penalty amount. This assumption may bias our estimates upwards because higher penalties may encourage taxpayers to comply with tax laws and, therefore, IRS would not assess as many penalties. However, improved compliance could also increase revenues. For collections, we assumed that a particular collection would increase to the inflation-adjusted penalty amount only if the unadjusted penalty assessment had been paid in full. For example, if a taxpayer paid $50 of a $100 penalty assessment, we assumed that the $50 collected was all that would have been collected even with a higher assessment, and therefore did not adjust the collection amount. We made this assumption in order to avoid overstating the effect that adjusting penalties for inflation would have on collections because the data did not tell us why a penalty was partially collected.
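The collection assumption described above amounts to a simple rule, sketched here for clarity. The function and variable names are ours for illustration, not fields from the ERIS database.

```python
def estimated_adjusted_collection(assessed, collected, inflation_factor):
    """Conservative rule from the methodology: scale a collection up by the
    inflation factor only when the unadjusted assessment was paid in full;
    a partial collection is assumed unchanged by a higher assessment."""
    if collected >= assessed:
        return collected * inflation_factor
    return collected
```

Under this rule, the $50 partial payment of a $100 assessment stays at $50, while a fully paid $100 assessment scales with the inflation factor, which keeps the estimate from overstating additional collections.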
To the extent that taxpayers who paid the unadjusted penalty amount in full would not pay the adjusted penalty amount in full, our estimates would overstate additional collections. One reason for a partial collection is that it is all the taxpayer can afford. We did not include penalties that are percentage based but have a fixed maximum in our inflation adjustments. Two penalty categories in the ERIS data set that we received have fixed maximums and had total assessments of over $1 million for at least 1 year from 2000 to 2005. In both cases, we could not determine how much a penalty assessment made at the current maximum would have risen if the maximum had been higher. However, we estimated an upper bound for the potential increase in collections due to adjusting the maximums for inflation by assuming that penalties assessed at the current maximum would have increased by the full rate of inflation. As a result, we concluded that, at most, collections would have risen by approximately $196,000 over the years 2000 to 2005 if these maximums had been adjusted for inflation. We also did not include penalties that are based solely on a percentage of tax liability in our analysis because they are implicitly adjusted for inflation. The data contained in the ERIS database were reliable for our purposes, but some limitations exist. To assess the reliability of the data, we reviewed relevant documentation, interviewed IRS officials, and performed electronic data testing. One limitation of the ERIS data is that they do not include penalties that are self-assessed and paid at the time of filing. IRS officials estimated that these represent about 6 to 7 percent of all penalty assessments, but that a large majority of them are percentage based with no fixed dollar amount. For example, many people self-assess and pay the penalty for withdrawing money from their Individual Retirement Accounts early.
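The adjustment and collection rules described above amount to a simple computation. The following sketch illustrates them in Python; the CPI-U annual averages are published Bureau of Labor Statistics figures, but the penalty records and function names are hypothetical and are not drawn from our actual ERIS analysis.

```python
# Illustrative sketch of the inflation-adjustment rules described above.
# CPI-U values are BLS annual averages (1982-84 = 100); the penalty
# amounts and function names are hypothetical, for illustration only.

CPI_U = {1990: 130.7, 2005: 195.3}

def adjusted_assessment(amount, year_set, year_assessed):
    """Scale a fixed-dollar penalty by CPI-U growth since the amount was set."""
    return amount * CPI_U[year_assessed] / CPI_U[year_set]

def adjusted_collection(assessed, collected, adjusted):
    """A collection rises to the adjusted amount only if the unadjusted
    assessment was paid in full; partial payments are left unchanged."""
    return adjusted if collected >= assessed else collected

# A $100 penalty fixed in 1990 and assessed in 2005:
new_amount = adjusted_assessment(100.0, 1990, 2005)
print(round(new_amount, 2))                           # 149.43
print(adjusted_collection(100.0, 100.0, new_amount))  # paid in full: adjusted
print(adjusted_collection(100.0, 50.0, new_amount))   # partial: stays 50.0
```

In our actual analysis, assessment and collection records from ERIS took the place of these hypothetical figures.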
Further, IRS officials acknowledged that some penalties were incorrectly categorized in the database, making it impossible for us to determine which penalties were being assessed. We determined that 0.4 percent to 1.4 percent of assessments per year from 2000 to 2005 were incorrectly categorized. For example, in 2000, over $144 million in assessments and over $28 million in collections were incorrectly categorized. In 2005, over $343 million in assessments and over $86 million in collections were incorrectly categorized. These two limitations may bias our estimates downward. The federal government produces several broad measures of price changes, including the CPI-U and the Gross Domestic Product (GDP) price deflator. The CPI-U measures the average change over time in the prices paid by consumers for a fixed market basket of consumer goods and services. The GDP price deflator measures changes over time in the prices of broader expenditure categories than the CPI-U. We used the CPI-U for the purposes of this analysis because it is currently used in the tax code to make inflation adjustments to several provisions, such as the tax rate schedule, the amount of the standard deduction, and the value of exemptions. To determine the likely effect that regularly adjusting penalties for inflation would have on the administrative burden of IRS officials, we interviewed officials in offices across IRS who would be affected if regular adjustments of penalties occurred.
These offices are the Office of Penalties within the Small Business/Self Employed division (SB/SE); Learning and Education within SB/SE; Wage and Investment division (W&I); Tax Exempt/Government Entity division; Large and Mid-Size Business division; Research, Analysis and Statistics division; Legislative Analysis Tracking and Implementation Services; Office of Chief Counsel; Business Forms and Publications within W&I; Enforcement Revenue Data; Communications and Liaison; and Modernization and Information Technology Services, including officials who work on the Business Master File, the Financial Management Information System, the Automated Trust Fund Recovery system, Report Generation Software, Automated Offers in Compromise, Penalty and Interest Notice Explanation, Integrated Data Retrieval System, and the Payer Master File Processing System. To determine the likely effect that regularly adjusting penalties for inflation would have on the administrative burden of tax practitioners, we interviewed tax practitioners affiliated with the American Institute of Certified Public Accountants, the National Association of Enrolled Agents, the National Society of Tax Professionals, and the American Bar Association. In total, we spoke with 28 practitioners. Results from the nongeneralizable sample of practitioners we selected cannot be used to make inferences about the effect of regular adjustments of penalties on the work of all tax practitioners. Additionally, those we spoke with presented their personal views, not those of the professional associations through which they were contacted. We conducted our work from September 2006 through July 2007 in accordance with generally accepted government auditing standards. In addition to the contact named above, Jonda Van Pelt, Assistant Director; Benjamin Crawford; Evan Gilman; Edward Nannenhorn; Jasminee Persaud; Cheryl Peterson; and Ethan Wozniak made key contributions to this report.
Civil tax penalties are an important tool to encourage taxpayer compliance with the tax laws. A number of civil tax penalties have fixed dollar amounts (a specific dollar amount or a minimum or maximum amount) that are not indexed for inflation. Because of Congress's concerns that civil penalties are not effectively achieving their purposes, GAO agreed to (1) determine the potential effect of adjusting civil tax penalties for inflation on the Internal Revenue Service's (IRS) assessment and collection amounts and (2) describe the likely administrative impact of regularly adjusting civil tax penalties on IRS and tax practitioners. GAO examined IRS data on civil tax penalties and conducted interviews with IRS employees and tax practitioners. Adjusting civil tax penalties for inflation on a regular basis to maintain their real values over time may increase IRS collections by tens of millions of dollars per year. Further, the decline in the real value of the fixed dollar amounts of civil tax penalties may weaken the deterrent effect of these penalties and may result in the inconsistent treatment of taxpayers over time. If civil tax penalty fixed dollar amounts were adjusted for inflation, the estimated increase in IRS collections would have ranged from $38 million to $61 million per year from 2000 to 2005. Almost all of the estimated increase in collections was generated by four penalties. These increases result because some of the penalties were set decades ago and have decreased significantly in real value--by over one-half for some penalties. According to those interviewed, the likely administrative burden associated with adjusting the fixed dollar amounts of civil tax penalties for inflation on a regular basis would not be significant for IRS and would be low for tax practitioners.
However, officials from the Office of Penalties, a relatively small office that would be responsible for coordinating the required changes among multiple IRS divisions, said that the burden of such adjustments might be considerable depending on the number of penalties being adjusted and would require a reprioritization of work. IRS officials said that the work required would be easier to implement with each subsequent update.
OSD has not issued a policy, nor has DOD developed doctrine, to address exposures of U.S. troops to low levels of chemical warfare agents on the battlefield. DOD officials explained that low-level exposures were not addressed because there was no validated threat and no consensus on what constituted low-level exposures or whether they produced adverse performance or health effects in humans. Nevertheless, some entities within DOD are preparing chemical defense strategies and developing technologies that are expected to address low-level exposures. OSD has not issued a force protection policy regarding low-level chemical warfare agent exposures, and DOD has not developed doctrine that addresses low-level exposures to chemical warfare agents, either in isolation or in combination with other contaminants that would likely be found on the battlefield. DOD officials have characterized the primary intent of existing NBC doctrine for battlefield management as enabling mission accomplishment by ensuring force preservation rather than force protection. The operational concept that underlies NBC doctrine and drives chemical warfare defense research, development, and acquisition has been to “fight through” the chemical and biological threat and accomplish the mission, with the assumption that overwhelming conventional capabilities will enable U.S. forces to prevail on the battlefield. Thus, the focus on massive battlefield chemical weapon use has framed the concepts of the role of chemical and biological defense in warfare. In a battlefield scenario, the NBC defense goal is to ensure that chemical exposures to the troops result in less than 1 percent lethalities and less than 15 percent casualties, enabling the affected unit to remain operationally effective.
Nevertheless, DOD doctrine differentiates between possible high-level chemical warfare threats in foreign battlefield scenarios and low-level chemical exposures at domestic chemical weapon storage and destruction facilities. In a domestic chemical storage scenario, facilities and procedures are required to ensure that unprotected workers would receive no more than an 8-hour occupational exposure limit and that the adjacent civilian population would receive no more than a 72-hour general population limit, neither of which is expected to result in any adverse health effects. According to DOD, its doctrine does not address low-level exposures on the battlefield because there is no (1) validated threat, (2) definition of low-level exposures, or (3) consensus on the effects of such exposures. Moreover, DOD officials said that if low-level exposures were to be addressed, the cost implications could be significant. For example, increased costs could result from the need for more sensitive chemical detectors, more thorough decontamination systems, or more individual and collective protection systems. However, no studies have been done to evaluate the potential cost implications of expanding policy and doctrine to address low-level exposure concerns for force protection. OSD officials said that any future low-level requirements would need to compete for funds with an existing list of unfunded chemical and biological defense needs. In October 1997, the Presidential Advisory Committee on Gulf War Veterans’ Illnesses noted that existing DOD doctrine addresses only exposure to debilitating or lethal doses of nerve or mustard chemical warfare agents on the battlefield. The Committee subsequently recommended that DOD develop doctrine that addresses possible low-level subclinical exposure to chemical warfare agents.
Specifically, the Committee recommended that DOD’s doctrine establish requirements for preventing, monitoring, recording, reporting, and assessing possible low-level chemical warfare agent exposure incidents. In his February 1998 testimony before the House Committee on Veterans’ Affairs, the Special Assistant to the Deputy Secretary of Defense for Gulf War Illnesses stated that DOD does not believe there is a need for doctrine concerning low-level chemical exposures but that DOD would consider taking action if research indicates a need for such doctrine. DOD officials said that there is no validated low-level threat and that the probability of encountering low-level contaminated conditions on the battlefield is minimal. If low-level chemical exposures were to occur, the officials stated, the exposures would likely be inadvertent and momentary—resulting from residual contamination after the use of high-dose chemical munitions. DOD experts on the storage and release of chemical warfare agents have asserted that only in a laboratory could agent dosages exist at a low concentration more than momentarily. Nevertheless, DOD has studied how low doses of chemical warfare agents could be used intentionally to achieve terrorist and military objectives. DOD raised concerns over the intentional use of low-level chemical warfare agents in its 1997 study, Assessment of the Impact of Chemical and Biological Weapons on Joint Operations in 2010, which analyzed the impact of state-sponsored terrorist attacks using chemical warfare agents. The study’s threat scenario, which was not validated by any intelligence agency, entailed chemical warfare agents being spread thinly, avoiding lethal levels as much as possible, for the purpose of stopping U.S. military operations and complicating detection and cleanup. The study found that massive battlefield use of chemical and biological weapons is no longer the most likely threat and that U.S.
forces must be able to counter and cope with limited, localized chemical and biological attacks, including attacks delivered by asymmetrical means. This study exposed serious vulnerabilities in U.S. power projection capabilities that could be exploited by the asymmetrical employment of chemical and biological weapons, both in the United States and in foreign theaters of operation. The study also found that the U.S. intelligence capability to determine small-scale development and intent to use chemical or biological weapons, particularly for limited use, is inadequate. Shortfalls include insufficient ability to collect and assess indications and warnings of planned low-level chemical and biological attacks. The report concluded that OSD should significantly increase its level of attention to vulnerabilities posed by an enemy using asymmetrical and limited applications of chemical and biological weapons. The absence of an OSD policy or DOD doctrine on low-level exposures is partly attributable to the lack of a consensus within DOD on the meaning of low level. DOD officials responsible for medical chemical defense, nonmedical chemical defense, NBC doctrine, and NBC intelligence provided varying definitions of low-level exposure, including the Oxford Dictionary definition, no observable effects, sublethal, and 0.2 LD50 (one-fifth of the median lethal dose). Despite the differing responses, each one can be depicted as a location along the lower end of a chemical warfare agent exposure and effects continuum. (App. IV describes physiological effects from increasing levels of chemical warfare agent exposures.) Figure 1 shows that one end of the continuum is extremely high exposures that result in death, and the other end is no or minimal exposures that result in no performance or health effects. Between these extremes is a range of exposures and resulting effects.
In addition to a lack of consensus on the definition or meaning of low-level exposures, there is a lack of consensus within DOD and the research community on the extent and significance of low-level exposure effects. These differences result from several factors. First, chemical warfare agent dose-response curves can be quite steep, leading some DOD officials and researchers to question the concern over a very narrow range of sublethal dose levels. Second, the extrapolation of findings on the effects of chemical warfare agent exposures from animal studies to humans can be imprecise and unpredictable. For example, many of the effects attributable to chemical warfare agent exposure are subjective and either do not occur or cannot be measured in many animal species. Third, different methods of chemical warfare agent exposure, such as topical, injection, and inhalation, may result in varied manifestations and timings of effects, even with comparable concentrations and subject conditions. Fourth, information on the combined effects of low-level exposures is largely lacking. Nearly all research on low-level effects addresses single agents in isolation; defining low levels of an agent when present in combination with other battlefield contaminants has not been addressed. In addition, most research has involved single, acute exposures with observations made over several hours or days. Few studies have examined the possible long-term effects of continuous or repeated low-level exposures. Last, research is not yet conclusive as to what level of exposure is militarily or operationally significant. The impact of a specific symptom resulting from chemical warfare agent exposure may vary by the military task to be performed. For example, miosis (constriction of the eye’s pupil) may have a greater adverse impact on a pilot or a medical practitioner than on a logistician.
Nonetheless, the dose and effects data are only some of the many factors considered in risk analyses conducted by military commanders. DOD officials told us that trade-offs among competing factors are more often than not based on the professional judgment of persons with extensive military and technical education, training, and experience rather than on an algorithm with numerical input and output. Despite the lack of an OSD policy on low-level exposures, some elements within DOD have begun to address issues involving such exposures. In describing DOD’s NBC defense strategy for the future, the Chairman of the Joint Service Materiel Group noted that the presence of low levels of chemical warfare agents will be one of the factors to consider before sending U.S. troops to a contingency. Specifically, the future strategy will no longer be shaped primarily by the occurrence of mild physiological effects, such as miosis, but rather by the possible long-term health effects to U.S. forces. Lessons learned from the Gulf War are reflected in DOD’s NBC defense strategy, which focuses on the asymmetrical threat. Gulf War Syndrome and low-level threats are identified as two of the concerns to be addressed in the future NBC defense strategy. The Group Chairman added that traditionally the de facto low-level definition has been determined by DOD’s technical capability to detect the presence of an agent. However, the Chairman stated that the low-level concept in future chemical defense strategies will need to be defined by the medical community and to consider the long-term health effects of battlefield environments. The Joint Service Integration Group—an arm of the Joint NBC Defense Board that is responsible for requirements, priorities, training, and doctrine—is working with the services to create a joint NBC defense concept to guide the development of a coherent NBC defense program.
One of the central tenets of the proposed concept is to provide effective force protection against exposure threats at the lower end of the continuum, such as those from terrorism and industrial hazards. Also, the proposed concept envisions a single process for force protection to provide a seamless transition from peacetime to wartime. Even though the levels and types of threat can differ, a single overall process can meet all joint force protection needs. Thus, the NBC joint concept will address threats against DOD installations and forces for both peacetime and military conflicts. In addition, the joint concept will provide a conceptual framework for defense modernization through 2010, but the specific programs and system requirements necessary for the implementation of the concept will not be articulated. The services are concurrently identifying NBC defense joint future operational capabilities to implement the joint concept. Several of these capabilities relate to low-level exposure, such as (1) improving detection limits and capabilities for identifying standard chemical warfare agents by 50 percent, (2) lowering detection sensitivity limits and detection response times for identifying standard chemical warfare agents by 50 percent, and (3) lowering detection response time for standard biological agents by at least 50 percent. Even in the absence of adopted joint force operational capabilities, DOD is incorporating low-level capabilities in the design of new chemical defense equipment. For example, the Joint Chemical Agent Detector, currently under development, is expected to provide an initial indication that a chemical warfare attack has occurred and detect low-level concentrations of selected chemical warfare agents. The detector will replace currently fielded systems that have a limited ability to provide warning of low-dose hazards from chemical warfare agents. 
The operational requirements for the detector specify that it will be able to detect low-level concentrations of five nerve agents and two blister agents. However, the low-level requirement necessitates trade-offs between the breadth of agents that the detector can identify and its ability to monitor low-level concentrations for a select few agents. Thus, the next-generation chemical warfare agent detector is expected to have a capability to detect lower chemical warfare agent concentrations in more locations. In the absence of policy—or additional research on low-level effects—it cannot be known whether the current, less capable detectors would have the appropriate capabilities to meet the requirements of a low-level exposure doctrine. Research on animals and humans conducted by DOD and others has identified some adverse psychological, physiological, behavioral, and performance effects of low-level exposure to some chemical warfare agents. Nonetheless, researchers do not agree on the risk posed by low-level exposures and the potential military implications of their presence on the battlefield, whether in isolation or in combination with other battlefield contaminants. DOD has no research program to address the remaining uncertainties regarding the performance and health effects of low-level exposures to chemical warfare agents; however, two new research initiatives are currently under consideration. The majority of the chemical warfare agent research has been on organophosphate nerve agents and related pesticides. At low doses, nerve agents produce a wide range of effects on the central nervous system, beginning with anxiety and emotional instability. Psychological effects in humans from nerve agent VX on skin have been noted earlier than physical effects (e.g., nausea and vomiting) or appeared in the absence of physical effects. The psychological effects were characterized by difficulty in sustaining attention and slowing of intellectual and motor processes. 
Doses considerably below the LD50 can degrade performance and alter behavior. These performance and behavioral effects have clear military implications because affected service personnel exposed to chemical warfare agents might not only lose the motivation to fight but also lose the ability to defend themselves and carry out the complex tasks frequently required in the modern armed forces. Moreover, the detrimental effects of exposure to single doses of nerve agents may be prolonged. Concern about low-level chemical warfare agent effects predates Operation Desert Storm. In the 1980s, the Air Force conducted research on the bioeffects of single and repeated exposures to low levels of the nerve agent soman because of concerns about the effects of low-level chemical agent exposures on vulnerable personnel—such as bomb loaders, pilots, and medical personnel—who may be required to work in low-level contaminated environments. The Air Force found that the nerve agent degraded performance on specific behavioral tasks in the absence of obvious physical deficits in primates. Thus, even for extremely toxic compounds, such as organophosphate nerve agents, which have a steep dose-response curve, task performance deficits could be detected at low levels of exposure that did not cause any overt signs of physical toxicity. This research was unique because low-level exposures were thought at that time to be unlikely or unrealistic on the battlefield. Table 1 shows examples of research conducted or funded by DOD on the behavioral and performance effects of organophosphate nerve agents. The research examples reveal that sublethal exposures to an agent can have a variety of effects (depending on the species, exposure parameters, time, and combination of exposures) and produce measurable, adverse effects on physiology and behavior (both motor and cognitive performance).
In our prior report on Gulf War illnesses, we summarized research on the long-term health effects of chemical warfare agents, which were suspected of contributing to the health problems of Gulf War veterans. The report cited research suggesting that low-level exposure to some chemical warfare agents or chemically related compounds, such as certain pesticides, is associated with delayed or long-term health effects. Regarding delayed health effects of organophosphates, we noted evidence from animal experiments, studies of accidental human exposures, and epidemiological studies of humans that low-level exposures to certain organophosphorus compounds, including the nerve agent sarin, to which some U.S. troops may have been exposed, can cause delayed, chronic neurotoxic effects. We noted that, as early as the 1950s, studies demonstrated that repeated oral and subcutaneous exposures to neurotoxic organophosphates produced delayed neurotoxic effects in rats and mice. In addition, German personnel who were exposed to nerve agents during World War II displayed signs and symptoms of neurological problems even 5 to 10 years after their last exposure. Long-term abnormal neurological and psychiatric symptoms, as well as disturbed brain wave patterns, have also been seen in workers exposed to sarin in manufacturing plants. The same abnormal brain wave disturbances were produced experimentally in nonhuman primates by exposing them to low doses of sarin. Delayed, chronic neurotoxic effects have also been seen in animal experiments after the administration of organophosphates. In other experiments, animals given a low dosage of the nerve agent sarin for 10 days showed no signs of immediate illness but developed delayed chronic neurotoxicity after 2 weeks.
Nonetheless, some DOD representatives in the research community have expressed considerable doubt that low-level exposures to chemical warfare agents or organophosphates pose performance and long-term health risks—particularly in regard to the likelihood that low-level exposures are linked to Gulf War illnesses. These doubts stem from the lack of a realistic scenario, the lack of adverse long-term health effects observed in studies of controlled and accidental human exposure or animal studies, and results that are viewed as incompatible with the principles of biology and pharmacology. Researchers we interviewed did agree that the work done to date is lacking in several respects, including (1) the effects of exposure to low levels of chemical warfare agents in combination with other agents or contaminants likely to be found on future battlefields; (2) extrapolation of animal models to humans; (3) the breadth of agents tested, types of exposure routes, and length of exposure; and (4) the military or operational implications of identified or projected low-level exposure effects. Some researchers also note that small amounts of soman can be detoxified while the agent is in the blood and before it can affect the central nervous system; therefore, for each nerve agent there may be a threshold of exposure below which no effects will result. According to one DOD scientist, “Research can improve our understanding of the relationships among the many factors, such as effects, time of onset of effects, duration of effects, concentration, duration of exposure, dosage, and dose. Improved estimates of effects in humans resulting from exposure to chemical warfare agents are a requirement that has existed since World War I.” Consistent with that assessment, the Army’s Medical Research and Materiel Command is proposing a science and technology objective to establish a research program on the chronic effects of chemical warfare agent exposure.
Because previous research efforts have emphasized the acute effects of high (battlefield-level) exposures, there is little information on the repeated or chronic effects of low-dose exposures. The Command’s research effort is in response to this lack of information and joint service requirements for knowledge of the effects on personnel in sustained operations in areas that may be chemically contaminated, thus creating the possibility of a continuous low-level exposure. Additionally, the Joint Service Integration Group has tasked a panel of experts to determine an accepted definition for low-level chemical warfare agent exposure. The panel has proposed a series of research efforts to the Joint NBC Defense Board to analyze the relationships among dose, concentration, time, and effects for the purpose of determining safe exposure levels for sustained combat operations. DOD has funded two National Academy of Sciences studies to support the development of a long-term strategy for protecting U.S. military personnel deployed to unfamiliar environments. These studies will provide guidance for managing health and exposure issues, including infectious agents; vaccines; drug interactions; stress; and environmental and battlefield-related hazards, such as chemical and biological agents. One study is assessing approaches and technologies that have been or may be used by DOD in developing and evaluating equipment and clothing for physical protection and decontamination. The assessment is to address the efficacy of current policies, doctrine, and training as they relate to potential exposures to chemical warfare agents during deployments. The second study is assessing technology and methods for detection and tracking of exposures to a subset of harmful agents. This study will assess tools and methods to detect, monitor, and document exposures to deployed personnel. These studies do not address issues of risk management; those will be the focus of a third study. 
Although DOD and congressional interest concerning the effects of low-level chemical exposure increased after events in the 1991 Gulf War, relatively limited funding has actually been expended or programmed in DOD’s RDT&E programs in recent years to address issues associated with low-level chemical exposure on U.S. military personnel. However, DOD has developed proposals to fund two low-level research efforts, which are under consideration for implementation. For fiscal years 1996 through 2003, DOD has been appropriated in excess of $2.5 billion for chemical and biological defense RDT&E programs. (See app. V for general DOD chemical and biological program funding allocations and trends for fiscal years 1990 through 2003). Fiscal year 1996 was the first time that RDT&E funding for all of DOD’s chemical and biological defense programs was consolidated into six defensewide program element funding lines. These program elements are (1) basic research, (2) applied research, (3) advanced technology development, (4) demonstration and validation, (5) engineering and manufacturing development, and (6) management support. Table 2 shows total actual and projected research funding by RDT&E program element for fiscal years 1996 through 2003. Three low-level research efforts—totaling about $10 million—were included in DOD’s fiscal year 1997 and 1998 chemical and biological defense RDT&E programs. These research efforts represented about 1.5 percent of the approximately $646 million in combined obligational authority authorized for chemical and biological defense RDT&E for these 2 fiscal years. Funding for the largest of the three—an $8-million effort in the fiscal year 1998 program that dealt with chemical sensor enhancements—was provided by the Conference Committee on DOD Appropriations. Another fiscal year 1998 effort—costing almost $1.4 million—involved the development of sensitive biomarkers of low-dose exposure to chemical agents. 
The remaining effort, included in the fiscal year 1997 program, developed in vitro and in vivo model systems to evaluate the possible effects of low-dose or chronic exposures to chemical warfare agents. This project cost approximately $676,000. DOD officials told us that these projects were not part of a structured program to determine the performance and health effects of low-level exposures. However, two elements within DOD have proposed multiyear research programs on low-level issues. DOD has requested funding for the U.S. Army Medical Research and Materiel Command’s science and technology objective on the chronic effects of chemical warfare agent exposure. If approved, this research program is projected to receive an average of about $2.8 million annually in research funds for fiscal years 1999 through 2003. The purpose of this undertaking would be to investigate the effects of low-dose and chronic exposure to chemical agents to (1) gain a better understanding of the medical effects of such exposure, (2) provide tools for a medical assessment of personnel, and (3) develop protocols for subsequent protection and treatment. Figure 2 reflects DOD’s programmed RDT&E funding for fiscal years 1999 through 2003 and shows the proposed science and technology objective in relation to other research program efforts. Another research program involving low-level chemical exposures will be proposed in the near future to the NBC Defense Board for approval. A panel of experts, tasked by DOD to study the issue of defining low-level and chronic chemical exposure, has proposed a series of research efforts to be undertaken over the next several years to address the definitional dilemma surrounding this issue. Funding levels for this effort have not been established. DOD’s current NBC policy and doctrine do not address exposures of U.S. troops to low levels of chemical warfare agents on the battlefield. 
NBC defense doctrine is focused on ensuring mission accomplishment through the prevention of acute lethal and incapacitating effects of chemical weapons and is not designed to maximize force protection from exposure to clinical and subclinical doses. Moreover, DOD has no chemical defense research plan to evaluate the potential performance effects of low-level exposures or the implications they may have for force protection. Even though research funded by DOD and others has demonstrated adverse effects in animal studies, the literature does not adequately address the breadth of potential agents; the effects of agents either in isolation or in combination with battlefield contaminants; the chronic effects; animal-human extrapolation models; or the operational implications of the measured adverse impacts. We recommend that the Secretary of Defense develop an integrated strategy for comprehensively addressing force protection issues resulting from low-level chemical warfare agent exposures. The strategy should address, at a minimum,
- the desirability of an OSD policy on the protection of troops from low-level chemical warfare agent exposures;
- the appropriateness of addressing low-level chemical warfare agent exposures in doctrine;
- the need for enhanced low-level chemical warfare agent detection, identification, and protection capabilities;
- the research needed to fully understand the risks posed by exposures to low levels of chemical warfare agents, in isolation and in combination with other contaminants likely to be found on the battlefield; and
- the respective risks, costs, and benefits of addressing low-level chemical warfare agent exposures within DOD’s chemical and biological defense program.
In oral comments on a draft of this report, DOD concurred with our recommendation that the Secretary of Defense develop a “low-level” strategy but disagreed with the implied priority order. 
DOD stated that it is also concerned with force protection and the possible impact that low-level chemical agent exposures might have on a service member’s health and emphasized that a valid data-based risk assessment must serve as the foundation for any change in policy or doctrine. In addition, DOD provided us with updated plans and proposals to develop an overall requirements and program strategy for low-level chemical agent monitoring. DOD agreed that the absence of an OSD policy or a DOD doctrine on low-level exposures is partially attributable to the absence of a consensus within DOD on the meaning of low level. However, DOD expressed concern that we did not assert a working definition of low level as it might apply to a force projection or battlefield scenario. DOD disagreed with our selection of examples of low-level research illustrated in table 1, stating that the studies were more appropriately categorized as “low dose” rather than low level. Finally, DOD believed that we misinterpreted the report, Assessment of the Impact of Chemical and Biological Weapons on Joint Operation in 2010, by failing to understand that the asymmetrical application of chemical agents does not equate to “low level” for the purpose of producing casualties, but rather for the purpose of disrupting operations by the mere detectable presence of these agents at levels that may have no medical effects. In our recommendation, we listed a number of elements that should be addressed in developing such a strategy, but we purposely did not articulate a priority order beginning with research. Rather, we advocate that DOD develop a strategy to analyze policy, doctrine, and requirements based on existing information and to reassess policy, doctrine, and requirements as the results of a low-level research program are reported. 
We did not define low level in our report because the definition requires an interpretation of both exposure effects data and military risk and performance data—analyses best performed by DOD. Furthermore, because a consensus on the meaning or definition of low level is lacking, we find no basis for DOD’s characterization of the research examples in table 1 of the report as “low dose” rather than “low level.” Regarding the 2010 Study, we disagree with DOD’s statement that low-level chemical agents may have no medical effects. Rather, our work shows that low-level exposure can have medical effects that not only can result in casualties but can also disrupt operations. The plan of action and low-level toxicological and technical base efforts provided by DOD did not fully address the strategy that the report discusses. The strategy will require a plan of action incorporating medical and tactical analyses, as well as the nonmedical research and development projects described by DOD. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies to other congressional committees and the Secretary of Defense. We will also make copies available to others on request. If you have any questions concerning this report, please call me at (202) 512-3092. Major contributors to this report were Sushil Sharma, Jeffrey Harris, Foy Wicker, and Betty Ward-Zukerman. The scope of our study was limited to chemical defense and low-level exposures that may cause adverse effects on performance. To determine the extent to which low-level exposures are addressed in doctrine, we reviewed Department of Defense (DOD) documents and interviewed agency officials. 
We asked questions designed to elicit the treatment of low-level issues within the nuclear, biological, and chemical (NBC) doctrinal architecture (i.e., Joint Publication 3-11; field manuals; training circulars; and tactics, techniques, and procedures). After determining that low-level issues were not addressed in the war-fighting doctrine, we asked representatives of the doctrinal, intelligence, and research communities why low-level issues were not addressed and under what circumstances they would be addressed. To identify research on the performance effects of low-level exposure to chemical warfare agents, we reviewed relevant government and academic research (published and unpublished) and interviewed researchers within and outside of DOD. To identify relevant literature, we interviewed DOD officials currently responsible for prioritizing chemical and biological defense research needs. We also interviewed DOD researchers at the Army’s primary center of medical chemical defense research and development (the Army Medical Research Institute for Chemical Defense) and nonmedical chemical research and development (the Edgewood Research, Development, and Engineering Center at the Aberdeen Proving Ground). We interviewed staff at the laboratory the Air Force used to study low-level exposure effects in animals before the Army was designated as executive agent for chemical defense and the Air Force’s effort ceased. We sought historic programmatic information from the Naval Medical Research and Development Command, which funded portions of the Air Force’s low-level animal studies. We monitored ongoing DOD-funded Gulf War illnesses research that addresses potential long-term health effects from low-dose or chronic chemical exposures. Last, we discussed current research with leading academics in the field. 
We reviewed the compilation of relevant low-level research literature to characterize coverage (variety and combinations of agents or contaminants), methodologies employed, and effects observed. These observations were discussed and validated in our interviews with researchers in chemical defense, both within and outside of DOD. In addition, we employed a research consultant from academia to review the literature to substantiate both the comprehensiveness of our compilation and the validity of our conclusions. To determine what portion of the chemical defense budget specifically addresses low-level exposures, we reviewed DOD documents and interviewed DOD program officials. We examined DOD planning and budget documents, including the NBC defense annual reports to Congress and joint service chemical and biological defense program backup books for budget estimates. In addition, we analyzed chemical defense-related data for fiscal years 1991 through 1999 contained in DOD’s Future Years Defense Program—the most comprehensive and continuous source of current and historical defense resource data—to identify annual appropriation trends and ascertain the level of funds programmed and obligated for research, development, test, and evaluation (RDT&E), as well as procurement, and the destruction of chemical munitions. We interviewed DOD officials to verify our observations about low-level efforts and to obtain information about potential programs currently being developed to expand DOD’s efforts to understand the effects of chronic and low-level exposure of chemical warfare agents on military personnel. 
We contacted the following organizations: Armed Forces Radiobiological Research Institute, Bethesda, Maryland; Defense Intelligence Agency, Washington, D.C.; DOD Inspector General, Washington, D.C.; Department of Energy, Washington, D.C.; Edgewood Research, Development, and Engineering Center, Aberdeen Proving Ground, Maryland; Israel Institute for Biological Research, Ness-Ziona, Israel; Joint Program Office, Biological Defense, Falls Church, Virginia; National Ground Intelligence Center, Charlottesville, Virginia; National Research Council, Washington, D.C.; Office of the Secretary of Defense, Washington, D.C.; Oregon Health Sciences University, Portland, Oregon; University of Texas Health Center at San Antonio, San Antonio, Texas; University of Texas Southwest Medical Center, Dallas, Texas; Air Force Armstrong Laboratory, Brooks Air Force Base, Texas; Air Force Research Laboratory, Wright-Patterson Air Force Base, Ohio; Army Chemical School, Fort McClellan, Alabama; Army Medical Research and Materiel Command, Frederick, Maryland; Army Medical Research Institute of Chemical Defense, Aberdeen Proving Ground, Maryland; Navy Bureau of Medicine and Surgery, Washington, D.C.; and Walter Reed Army Institute of Research, Washington, D.C. We performed our review from September 1997 to May 1998 in accordance with generally accepted government auditing standards. The institutional structure and responsibilities for NBC defense research, requirements, and doctrine derive from provisions in the National Defense Authorization Act for Fiscal Year 1994. The act directed the Secretary of Defense to assign responsibility for overall coordination and integration of the chemical and biological program to a single office within the Office of the Secretary of Defense. The legislation also directed the Secretary of Defense to designate the Army as DOD’s executive agent to coordinate chemical and biological RDT&E across the services. 
The Joint NBC Defense Board, which is subordinate to the Under Secretary for Acquisition and Technology, provides oversight and management of the NBC defense program within DOD. The NBC Board approves joint NBC requirements; the joint NBC modernization plan; the consolidated NBC defense program objective memorandum; the joint NBC research, development, and acquisition plan; joint training and doctrine initiatives; and the joint NBC logistics plan. The Joint Service Integration Group and the Joint Service Materiel Group serve as subordinates to the NBC Board and execute several of its functions. Both groups are staffed with representatives from each of the services. The Joint Service Integration Group is responsible for joint NBC requirements, priorities, training, doctrine, and the joint modernization plan. The Joint Service Materiel Group is responsible for joint research, development, and acquisition; logistics; technical oversight; and sustainment. These two groups and the NBC Board are assisted by the Armed Forces Biomedical Research Evaluation Management Committee, which provides oversight of chemical and biological medical defense programs. The Committee is co-chaired by the Assistant Secretary of Defense for Health Affairs and the Director, Defense Research and Engineering. Figure III.1 illustrates the relationships among the various organizations responsible for NBC defense, from USD(A&T) through ATSD(NCB) and DATSD(CBM). [Severe exposure effects: loss of consciousness, convulsions, flaccid paralysis (lack of muscle tone and an inability to move), and apnea (transient cessation of respiration).] This appendix provides general information on the funding trends for DOD’s Chemical and Biological Defense Program for fiscal years 1990-97 and 1998-2003. Funding is shown in four categories: disposal, which includes the costs associated with the chemical stockpile disposal program; RDT&E; procurement; and operations and maintenance, including the costs for military personnel. 
After the end of the Cold War, DOD funding for chemical and biological programs increased from about $566 million in fiscal year 1990 to almost $1.5 billion in fiscal year 1997. These funds include all military services and the chemical munitions destruction program. Adjusted for inflation, the total program funding has more than doubled (see fig. V.1) over that period and is programmed to continue growing—peaking in fiscal year 2002 with a total obligational authority in excess of $2.3 billion (see fig. V.2).
Anticholinesterase agent: Agent that inhibits the enzyme acetylcholinesterase.
Apnea: Transient cessation of respiration.
Clinical: Symptoms as observed by a physician.
Cognition: Process based on perception, memory, and judgment.
Dose effects: Effects resulting from a specific unit of exposure.
Dyspnea: Difficult or labored respiration.
Effluent: Waste material discharged into the environment.
Flaccid paralysis: Lack of muscle tone and an inability to move.
Gray (Gy): Unit of radiation.
kg: Kilogram.
LD50: Median lethal dose.
mg: Milligram.
Miosis: Constriction of the pupil of the eye.
Neurotoxins: Toxins that exert direct effects on nervous system function.
Organophosphates: Family of chemical compounds that inhibit cholinesterase and can be formulated as pesticides and nerve agents.
Prophylaxis: Measures designed to preserve health and prevent the spread of disease.
Rhinorrhea: Nasal secretions.
Subclinical: Manifestations of an exposure that are so slight as to be unnoticeable or not demonstrable.
µg: Microgram.
Vesicant: Agent that produces vesicles or blisters.
Pursuant to a congressional request, GAO reviewed the Department of Defense's (DOD) approach for addressing U.S. troop exposures to low levels of chemical warfare agents during the Gulf War, focusing on: (1) the extent to which the DOD doctrine addresses exposures to low levels of chemical warfare agents; (2) the extent to which research addresses the performance and health effects of exposures to low levels of chemical warfare agents, either in isolation or combination with other agents and contaminants that would likely be found on the battlefield; and (3) the portion of resources in DOD's chemical and biological defense research, development, test, and evaluation (RDT&E) program explicitly directed at low-level chemical warfare agent exposures. GAO noted that: (1) DOD does not have an integrated strategy to address low-level exposures to chemical warfare agents; (2) it has not stated a policy or developed a doctrine on the protection of troops from low-level chemical exposures on the battlefield; (3) past research indicates that low-level exposures to some chemical warfare agents may result in adverse short-term performance and long-term health effects; (4) DOD has no chemical defense research program to determine the effects of low-level exposures; (5) less than 2 percent of the RDT&E funds in DOD's chemical and biological defense program have been allocated to low-level issues in the last 2 fiscal years; (6) DOD's nuclear, biological, and chemical (NBC) doctrine is focused on mission accomplishment by maximizing the effectiveness of troops in a lethal NBC environment; (7) it does not address protection of the force from low-level chemical warfare agent exposures on the battlefield; (8) according to officials, DOD does not have a doctrine that addresses low-level exposures because there is no: (a) validated low-level threat; (b) consensus on the definition or meaning of low-level exposures; or (c) consensus on the effects of low-level exposures; (9) past 
research by DOD and others indicates that single and repeated low-level exposures to some chemical warfare agents can result in adverse psychological, physiological, behavioral, and performance effects that may have military implications; (10) the research, however, does not fully address the effects of low-level exposures to a wide variety of agents, either in isolation or combination with other agents and battlefield contaminants; chronic effects; reliability and validity of animal-human extrapolation models; the operational implications of the measured adverse impacts; and delayed performance and health effects; (11) during the last 2 fiscal years, DOD has allocated nearly $10 million, or approximately 1.5 percent of its chemical and biological defense RDT&E budget of $646 million, to fund research and development projects on low-level chemical warfare agent exposure issues; (12) however, these projects were not part of a structured DOD research program focused on low-level effects; and (13) DOD does not have a chemical and biological defense research program designed to evaluate the potential effects of low-level chemical warfare agent exposures, but funding is under consideration for two multiyear research programs addressing low-level effects.
Studies have shown that insured children are more likely than uninsured children to get preventive and primary health care. Insured children are also more likely to have a relationship with a primary care physician and to receive required preventive services, such as well-child checkups. In contrast, lack of insurance can inhibit parents from trying to get health care for their children and can lead providers to offer less intensive services when families seek care. Several studies have found that low-income and uninsured children are more likely to be hospitalized for conditions that could have been managed with appropriate outpatient care. Most insured U.S. children under age 18 have health coverage through their parents’ employment—62 percent in 1996. Most other children with insurance have publicly funded coverage, usually the Medicaid program. Medicaid—a jointly funded federal-state entitlement program that provides health coverage for both children and adults—is administered through 56 separate programs, including the 50 states, the District of Columbia, Puerto Rico, and the U.S. territories. Historically, children and their parents were automatically covered if they received benefits under the Aid to Families With Dependent Children (AFDC) program. Children and adults may also be eligible for Medicaid if they are disabled and have low incomes or, at state discretion, if their medical expenses are extremely high relative to family income. Before 1989, coverage expansions for pregnant women and children based on family income and age were optional for states, although many states had expanded coverage. Starting in July 1989, states were required to cover pregnant women and infants (defined as children under 1 year of age) with family incomes at or below 75 percent of the federal poverty level. Two subsequent federal laws further expanded mandated eligibility for children. 
By July 1991, states were required to cover (1) infants and children up to 6 years old with family income at or below 133 percent of the federal poverty level and (2) children 6 years old and older born after September 30, 1983, with family income at or below 100 percent of the federal poverty level. Since 1989, states have also had the option of covering infants with family income between 133 percent and 185 percent of the poverty level. States may expand Medicaid eligibility for children by phasing in coverage of children up to 19 years old more quickly than required, by increasing eligibility income levels, or both. The demographic analysis in this report, however, focuses on the group of children for whom coverage is mandated. The Personal Responsibility and Work Opportunity Reconciliation Act of 1996 (P.L. 104-193), also known as the Welfare Reform Act, substantially altered AFDC and Supplemental Security Income (SSI) but made relatively few changes to the Medicaid program itself. The law replaced AFDC with a block grant that allowed states to set different income and resource (asset) eligibility standards for the new program—Temporary Assistance for Needy Families (TANF)—than for Medicaid. To ensure continued health coverage for low-income families, the law generally set Medicaid’s eligibility standards at AFDC levels in effect July 16, 1996, thereby ensuring that families who were eligible for Medicaid before welfare reform continued to qualify, regardless of their eligibility for states’ cash assistance programs. The law tightened the criteria for children to qualify for disability assistance through SSI, thus tightening eligibility for Medicaid. In addition, the law restricted aliens’ access to benefit programs, including SSI, and Medicaid benefits that were conditional on receipt of SSI. State and local governments were given some flexibility in designing policies that governed aliens’ eligibility for TANF, Medicaid, and social services. 
In a recently released report, we studied the Welfare Reform Act and its impact on Medicaid and found that most of the states we visited chose to continue to provide Medicaid coverage to previously covered groups. The Balanced Budget Act of 1997 (P.L. 105-33) restored SSI eligibility and the derivative Medicaid benefits to all aliens receiving SSI at the time welfare reform was enacted and to all aliens legally residing in the United States on the date of enactment who become disabled in the future. At the same time, states continued to have flexibility in implementing certain benefits policies for aliens. Current law allows states the option of providing Medicaid coverage to aliens who were legal permanent residents in the country before August 23, 1996. States also have the option of covering legal residents who arrived after August 22, 1996, once they have resided in the United States for 5 years. Illegal aliens are eligible only for emergency services under Medicaid. (See table 1.) The Balanced Budget Act also made two changes that directly affect children’s coverage in the Medicaid program. It gives states the option of providing 12 months of continuous eligibility to children without a redetermination of eligibility, thereby avoiding the problem of children frequently moving on and off Medicaid as their parents’ circumstances change. The act also allows states to extend Medicaid coverage to children on the basis of “presumptive eligibility” until a formal determination is made. Under this provision, certain qualified providers can make an initial determination, based on income, that an individual is eligible. The individual is then required to apply formally for the program by the last day of the month following the month in which the determination of presumptive eligibility was made. 
Finally, the Balanced Budget Act created the Children’s Health Insurance Program (CHIP), a grant program for uninsured children, through which $20.3 billion in new federal funds will be made available to states over the next 5 years. CHIP has a number of implications for Medicaid. If a state chooses to offer coverage through a separate program, the state must coordinate activities with the Medicaid program to ensure that Medicaid-eligible children are enrolled in Medicaid. The Congressional Budget Office estimated that the “outreach effect” of CHIP will result in an additional $2.4 billion in Medicaid spending over the same 5 years due to increased enrollment of 460,000 Medicaid-eligible children each year. States may also use the grant funds to expand coverage under their state Medicaid programs to reach additional low-income children, increasing the number of children potentially eligible for Medicaid. Uninsured Medicaid-eligible children differ somewhat from those currently enrolled in Medicaid, and these differences can be used by states to focus their outreach and enrollment efforts. Overall, about 23 percent—or 3.4 million—of the 15 million children who were eligible for Medicaid were uninsured in 1996. Slightly over half of the Medicaid-eligible children are insured solely by Medicaid, while about 7 percent have both Medicaid and private coverage. The remainder have coverage through other public programs, such as the Civilian Health and Medical Program of the Uniformed Services (CHAMPUS) or the Indian Health Service. (See fig. 1.) Medicaid-eligible children who are uninsured have characteristics closer to Medicaid-eligible children who are privately insured than to those with Medicaid. They are disproportionately children of the working poor, Hispanic, and U.S.-born children of foreign-born parents or foreign-born, and they are more likely to live in the West and the South. 
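The uninsured count above follows directly from the percentages cited; a quick check (figures from the report, with rounding assumed):

```python
# Check: 23 percent of the 15 million Medicaid-eligible children in 1996.
eligible_children = 15_000_000   # Medicaid-eligible children, 1996 (report figure)
uninsured_rate = 0.23            # share of eligible children with no coverage
uninsured = uninsured_rate * eligible_children
print(f"{uninsured/1e6:.2f} million uninsured")   # about 3.45 million, reported as 3.4 million
```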
Medicaid-eligible children are more likely to be uninsured if their parents work, if their parents are self-employed or employed by a small firm, or if they have a two-parent family. Children whose parents worked at all during the year—whether full-time, part-time, or for part of the year—are about twice as likely to be uninsured as those whose parents were unemployed. However, these children are more likely to be covered by employment-based insurance. The explanation for this apparent paradox is that children with employed parents are less likely to be covered by Medicaid, and employment-based coverage does not fully compensate for low rates of Medicaid participation. (See table 2.) Small firms are less likely than larger firms to offer health insurance; therefore, it is not surprising that children whose parents are self-employed or employed by small firms are less likely to be insured. This could suggest selective targeting of smaller firms in Medicaid outreach efforts, especially if it is known that they do not offer insurance. Half of all uninsured Medicaid-eligible children are in two-parent families, compared with only about 28 percent of those insured by Medicaid. The uninsured rate for Medicaid-eligible children is also higher in two-parent families than in single-parent families—30 percent compared with 18 percent. This again underscores that successful outreach efforts need to reach beyond the single unemployed mothers generally associated with both cash assistance programs and Medicaid. The proportion of Medicaid-eligible children who are uninsured, as well as the proportion enrolled in Medicaid, varies by racial and ethnic group. Among Medicaid-eligible children, Hispanics have the highest uninsured rate, while blacks are most likely to be enrolled in Medicaid. (See table 3 and table II.1 in app. II.) 
In 1996, almost 9 out of every 10 uninsured Medicaid-eligible children were U.S.-born, but many—over one-third—lived in immigrant families. In addition to the 11 percent who were immigrants, another one-quarter had at least one foreign-born parent. (See fig. 2.) The large number of children in immigrant families and the high proportion that are uninsured suggest that immigrant communities may be promising targets for outreach. (See table 4.) Over 70 percent of children in immigrant families are Hispanic, suggesting that outreach efforts target the Hispanic community and use Spanish-language outreach materials and applications. Medicaid eligibility criteria allow younger children with higher family income to enroll. Children under 6 years old in families with income at or below 133 percent of the federal poverty level are eligible for the program, according to federal mandate, as compared with older children whose families’ income must be at or below 100 percent of the federal poverty level. As a consequence, 54 percent of children who are Medicaid-eligible but uninsured are less than 6 years old. Nevertheless, outreach through schools could reach some of these younger children, since 42 percent have a school-age sibling aged 6 to 17. Counting the 46 percent of uninsured Medicaid-eligible children who are themselves school-age, plus the younger children with a school-age sibling, it could be possible to reach about 69 percent—or 2.4 million—of these children through schools. Other government programs could be used to reach families of uninsured Medicaid-eligible children, if they were using such programs. Use of government-subsidized services by these families might also indicate their willingness to access certain kinds of government-sponsored programs. While the CPS does not have information on use of public programs such as Head Start or the Department of Agriculture’s Special Supplemental Food Program for Women, Infants, and Children (WIC), it does have information on family use of the Food Stamp program. 
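The 69 percent figure above combines two groups: eligible children who are themselves school-age and younger children with a school-age sibling. A sketch of that arithmetic, assuming (as the text implies) that the 42 percent sibling share applies to the under-6 group:

```python
# Share of uninsured Medicaid-eligible children reachable through schools.
under_6 = 0.54                   # share under age 6 (report figure)
school_age = 1 - under_6         # share aged 6 to 17, reachable directly
sibling_rate = 0.42              # share of under-6 children with a school-age sibling
reachable = school_age + under_6 * sibling_rate   # about 0.687, reported as "about 69 percent"
uninsured = 3_450_000            # uninsured Medicaid-eligible children (23% of 15 million)
print(f"{reachable:.0%} of 3.4 million = {reachable*uninsured/1e6:.1f} million children")
```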
Medicaid-eligible but uninsured children were less likely than comparable children with employment-based insurance or Medicaid coverage to be in families that received food stamps. Some experts have argued that immigrant families have been less willing to apply for subsidized benefits because of their fear of the government or language and cultural barriers. However, we did not find any significant difference in use of food stamps among uninsured Medicaid-eligible children with U.S.-born parents, foreign-born naturalized citizen parents, and foreign-born noncitizen parents. The demographic makeup of the uninsured child population varies geographically and by community, meaning that national analyses can only suggest potential outreach targets and must be validated in light of local knowledge. Nonetheless, it is clear that the West and the South as a whole face particular challenges. A larger proportion of Medicaid-eligible children in these regions are uninsured—overall, the West and the South account for 73 percent of all uninsured Medicaid-eligible children nationwide. (See fig. 3.) Several reasons may explain the differences in proportions of uninsured Medicaid-eligible children in different regions. Some areas have higher proportions of workers with employment-based insurance because of the size of local firms, type of business, or degree of unionization. As a result, higher proportions of workers’ dependents are insured. Regions also differ in the number and percentage of immigrant families and various ethnic groups. All of these factors can affect insurance status and how states conduct outreach to the uninsured. Regions differ in the number of uninsured Medicaid-eligible children who are Hispanic or of Hispanic descent and in the number who are members of immigrant families. 
Both Hispanic and immigrant families are most prevalent in the West, particularly in California, where over 60 percent of uninsured Medicaid-eligible children are Hispanic and over 70 percent live in immigrant families. (See tables II.2 and II.3 in app. II.) Although some differences among states are due to states’ demographic characteristics, differences may also be due to states’ varying efforts to extend health insurance to those who have been unable to receive coverage and to inform these individuals of their eligibility. A larger, more visible program may itself help attract enrollment. Some states have expanded Medicaid eligibility for children further than others and may thereby have attracted more of their poorer Medicaid-eligible children to enroll. When asked why families do not enroll their Medicaid-eligible children in the program, state officials, beneficiary advocates, health care providers, and other experts report a variety of contributing factors. Some families do not know about the program or do not perceive a need for its benefits. Some families, especially those who have never enrolled in public benefit programs, may not even be aware that they are eligible. In addition, some parents may associate Medicaid with welfare and dependency, and therefore have an aversion to enrolling their children in the program. Cultural and language differences may limit awareness or understanding, and immigration status may also affect a family’s willingness to apply. Finally, the eligibility process can be difficult for working families because of the limits on where and when enrollment can take place, the lengthy application, and the documentation required. These enrollment barriers can be reduced, but in shortening the application and paring back the required documentation, a balance must be struck between maximizing enrollment and minimizing program abuse.
Most state officials, advocates, providers, and other experts whom we interviewed agreed that many families are unaware of Medicaid. Even families who know about the program may not realize that they could be eligible. With the long-standing link between Medicaid and AFDC, many families—both those who have never received welfare and those who have—assume that if they are not receiving cash assistance, they are not eligible for Medicaid. Two types of families tend to be unaware of their eligibility: working families who assume that Medicaid eligibility is tied to welfare eligibility and families who were previously on welfare and believe that, because of welfare reform, they are no longer eligible for Medicaid. These families are unlikely to understand that children with higher levels of family income may be eligible for Medicaid. Complex eligibility rules—which can result in younger children being eligible while their elder siblings are not—can simply add to families’ confusion. Several state officials, providers, and one expert told us that some families do not become concerned about health care access until their children become sick and, therefore, do not enroll them in Medicaid—especially if the children are relatively healthy. In addition, if families have successfully sought and received care for their children from clinics or emergency rooms in the past without enrolling in a health care program such as Medicaid, they are likely to continue to seek care from these providers. State officials, advocates, and other experts told us that some families are hesitant to enroll in Medicaid because of cultural differences, language barriers, and their understanding of U.S. immigration policies. Experts and a state official said that cultural differences may keep immigrant families from enrolling in Medicaid. Language was often mentioned as a barrier. 
Individuals who cannot read the Medicaid application and informational materials and cannot easily converse with eligibility workers by telephone or in person are at a distinct disadvantage. One expert told us that the degree of acculturation has a major impact on whether an immigrant will use public assistance of any kind. While time spent in the country is the main predictor of acculturation, some individuals may not participate in mainstream society and use its institutions even after living in the country for many years. According to some state officials, advocates, and experts, immigrant families may also hesitate to enroll in Medicaid because they are concerned that it will negatively affect their immigration status. Immigrants who are legal residents may be afraid that, if they receive benefits, they will be labeled a “public charge” and will have difficulties with the Immigration and Naturalization Service (INS) when applying for naturalization, visa renewal, or reentry into the United States. Although advocates question whether aliens receiving benefits may be considered public charges, in some instances actions have been taken against such individuals seeking visa renewals. Several advocates also told us about cases where individuals were prevented from reentering the United States unless they agreed to reimburse Medicaid for services paid for by the program on their behalf—particularly in border states such as California. Publicity about such cases in the immigrant community can deter immigrants from applying for Medicaid benefits for themselves or their children—even in cases where the children were born in the United States and are American citizens. In families where one or more adults are in the country illegally, the reluctance to seek Medicaid benefits for a child may be even greater. When applying for Medicaid for children, families in some states are asked about the immigration status of other members of the household.
Again, advocates told us this is a deterrent to enrollment for such families and reported that many immigrant families, both legal and illegal, seek medical assistance through county clinics and public hospitals because these institutions are viewed as more sympathetic and less likely to ask questions about immigration status. State officials and other experts told us that because of its long-standing ties with welfare and other benefit programs, many families associate Medicaid with a family that cannot provide for itself. Experts report that many working poor and near poor do not want to be labeled as welfare recipients, even if the law entitles their children to benefits. They often take the view that they never have received welfare and do not want to start. State officials, beneficiary advocates, providers, and other experts agree that Medicaid enrollment processes and requirements have often been barriers. However, to ensure that all recipients of Medicaid benefits meet income and other requirements, states have found it necessary to develop application processes that use lengthy application forms and require extensive documentation. State officials, beneficiary advocates, and other experts told us that lengthy enrollment forms and the associated documentation requirements create a barrier for families. Long forms are often used when a family is applying for a combination of programs, including Medicaid. Numerous questions relating to income, assets, citizenship, and family composition are used to determine eligibility and to ensure that only those who are entitled to benefits are enrolled in Medicaid. In addition to length, enrollment forms often require extensive documentation. Families are asked to provide paystubs, bank account statements, birth certificates, and other documents that verify the information they provide on the forms. Gathering such documents can be burdensome. 
For example, obtaining a birth certificate can involve going to a different office and then returning to the eligibility office. Obtaining certain documents can also require a family to pay a fee. A valid and reliable eligibility determination process is important to state officials to ensure program integrity. In addition, states can be assessed a financial penalty by the federal government if their error rates are too high. In an effort to balance these needs, most states have developed shorter forms for children who are applying exclusively for Medicaid, primarily by dropping the asset requirements. Some advocates, however, are still concerned with the length and complexity of application forms and the number of questions they contain. One advocate suggested that if applicants cannot understand the form, they are not going to fill it out. Another advocate pointed out that some questions may be well-intended, but they nonetheless lengthen the application. For example, as a way of identifying whether the family may be eligible for other benefit programs, some states’ applications ask questions related to disability. In addition, advocates pointed out that the documentation requirements are so stringent in some states that many applicants are denied enrollment because they cannot produce the documentation required. In an earlier report, we found that such requirements accounted for nearly half of all denials. In addition to limits that were developed as part of a legitimate effort to maximize the accuracy of eligibility determinations and monitor the eligibility process, other barriers exist. These include location of enrollment sites and enrollment hours; fluctuations in eligibility status, including the impact of welfare reform; and families’ transportation and communication problems.
Many of the state officials and other experts with whom we spoke said that the enrollment process used for welfare was difficult for working families because enrollment locations are limited and open only during typical work hours. This makes it difficult for working parents whose children may be eligible for Medicaid to apply. Such parents may not have the flexibility in their job to take time off to enroll through face-to-face interviews, according to one state official and one expert. States are required to provide for the receipt and initial processing of applications for pregnant women, infants, and children at sites not used for AFDC applications—such as federally qualified health centers and hospitals that serve a larger share of uninsured and publicly insured persons—but these efforts may have been limited. Experts also noted that the eligibility system does not accommodate the fluctuating eligibility status of many families. Low-income working families may have changes in their income if they work seasonally or change or lose jobs. A family eligible one month may not be eligible the next month because of an increase in family income, but children in that family may still be covered under other categories of eligibility. According to experts, some states’ eligibility processes do not automatically make redeterminations to see if children who lose their eligibility might be eligible in another category. If the family does not reapply, the child loses coverage. Advocates have also been concerned that welfare reform may make enrollment less likely. Families may be confused about their Medicaid eligibility because, prior to welfare reform, Medicaid and cash assistance had been so closely linked. For example, if TANF enrollment workers focus on job search strategies and not on benefits, families who come in may not be enrolled for Medicaid. In addition, some families may believe Medicaid is time limited, as TANF is.
According to experts, advocates, and one provider, limits on applicants’ ability to communicate and on the availability of transportation can be barriers. In addition to difficulties for non-English-speaking families, illiteracy may also limit a parent’s ability to enroll without substantial assistance. Experts also pointed out that lack of transportation to enrollment sites can be a barrier, primarily in rural areas, but also in some urban settings. A family may not have a car or may have limited time and money to make a long trip to the welfare office. To enroll eligible children in Medicaid, some states are using innovative strategies that are intended to increase knowledge and awareness of the program and its benefits, minimize the perceived social stigma, and simplify and streamline the eligibility process. Education and outreach programs are often targeted to families who have children potentially eligible for Medicaid. Visible support from state leadership and partnerships with community groups are viewed by state officials and advocates as essential to obtaining the necessary resources to implement outreach programs. Some states have even renamed the Medicaid program as a way to change its image. To improve the enrollment process, some states have adopted strategies to assist immigrant families or have simplified and streamlined the eligibility process by shortening forms and accepting applications at many new sites as well as by mail. However, this kind of simplification and streamlining has required state officials to make difficult trade-offs between the need for program integrity and higher Medicaid enrollment. The states that we contacted have developed multifaceted outreach programs to educate families on the availability of the Medicaid program and the importance of enrolling their children.
They generally agreed that a successful education and outreach program should target low-income working families with children, use nontraditional methods and locations, and work in collaboration with community groups, schools, providers, and advocates. These themes are broadly consistent with several findings from our demographic analysis: low-income working families with children have a high uninsured rate, and most uninsured Medicaid-eligible children are in school or have a sibling in school, which makes the schools an available avenue for reaching children and families. The states that we studied have employed a variety of methods to publicize Medicaid. For example, Massachusetts has placed outreach workers in health centers, hospitals, and other traditional locations; distributed literature in schools; sent material to the YMCA and other community groups; and worked with a supermarket chain to place notices about the program in grocery bags. The governor has held several press conferences around the state to publicize the program, and the state is working with workers in WIC clinics, who are already trained to do income-based eligibility assessments. The state has also used its enrollment data to target communities that have low levels of Medicaid enrollment and worked with local officials to address the problem. The state’s private contractor for managed care enrollment has also assisted with outreach through its presentations in the community. One advocacy group worked with the state to develop a campaign to target high-school athletes, who are required to have health insurance. This campaign involved sending posters and fold-out fliers—developed and produced with the donated time of professionals—to athletic directors in high schools throughout the state and establishing a pool of student athletes to go out and talk to their peers.
In another initiative, the state medical society is training its members’ staffs to assist in educating families about program eligibility and benefits. Finally, Massachusetts is making $600,000 available to help community groups conduct outreach and educate families of uninsured Medicaid-eligible children, with the money distributed as grants in amounts between $10,000 and $20,000. In Arkansas, as part of a large media campaign that included television and radio announcements, the state placed color inserts in Sunday newspapers during September 1997. These inserts provided information on program eligibility and benefits, a toll-free number for additional information, and a photograph of children with the governor endorsing the program. The state’s children’s hospital paid for the insert. Applications are available at schools, pharmacies, and churches, and brochures have also been placed in fast food bags. The state has also worked with its children’s hospital to place enrollment forms at affiliated clinics, which are located throughout the state. Georgia has made a major commitment to outreach by employing over 140 eligibility workers with the specific job of getting eligible children and families enrolled in Medicaid. These outreach workers are situated in numerous locations, including health departments, clinics, and hospitals. These workers also temporarily set up at nontraditional sites, such as schools, community agencies, and shopping malls. The outreach workers are often available during evening and weekend hours as a convenience to working families. Workers also make presentations regularly to community groups, medical providers, and employers. A flier targeted to employers was developed to inform them about benefit programs for which their employees may be eligible. Georgia is also trying to enroll former welfare recipients by emphasizing Medicaid enrollment as an important part of a successful transition to work.
The state’s outreach program has also established partnerships with numerous community groups—including local coordinating councils, local teen pregnancy task forces, and school boards—and has used these local partnerships to develop outreach tailored to needs and characteristics of the communities. The state’s private contractor for enrollment in managed care has also assisted with the outreach program through its contacts with the community. In view of its recent welfare reform initiatives, Wisconsin is making a concerted effort to ensure that Medicaid-eligible individuals enroll in Medicaid regardless of their eligibility for the state welfare program. As part of this outreach effort, the state has begun to target county eligibility workers, individual providers, and Medicaid-eligible individuals to communicate that people may still qualify for medical assistance apart from their eligibility for welfare. Additional resources have been made available for outreach, outstationing, and training materials for staff. To plan its outreach efforts, the state is working with outside groups, including the Primary Health Care Association, the state medical society, Milwaukee County, Children’s Hospital, and Marshfield Clinic. We found less targeting of immigrant communities than might have been expected from the demographic analysis, although this was in some measure due to the characteristics of the states that we selected for our study. However, advocates report concern within the immigrant communities that receiving benefits will compromise their immigration status. One expert told us that some states have attempted to assist eligible immigrant families in enrolling their children by providing enrollment information and applications in alternative languages, particularly Spanish, and by hiring bilingual enrollment workers. 
In general, their outreach approaches are similar to those tailored to other communities but with an emphasis on particular immigrant and ethnic cultures and languages. Massachusetts is working with local community groups that provide information and educate immigrants on the availability of Medicaid. Georgia’s outreach workers give presentations to employee groups within firms that have a large proportion of Hispanic immigrants among their workers. In their outreach efforts, states face challenges with the immigrant community because they have to take into account the recent changes made by the Welfare Reform Act and the Balanced Budget Act, which make benefits a state option for qualified immigrants who arrived before August 23, 1996, and bar immigrants from benefits for 5 years if they arrived after August 22, 1996. However, these limitations do not affect the eligibility of native-born children in immigrant families. States have tried to change the perception that Medicaid is tied to welfare and dependency in a variety of ways. The most direct method for changing the program’s image is changing the program’s name. In addition, states have advertised the program as one that is intended for working families, while some have included policies to avoid displacing private health insurance. They have also adopted alternative enrollment methods so that individuals do not have to go to the local welfare office to enroll. Changing the Medicaid program’s name is not new, but it has become more widespread. Massachusetts recently renamed its program MassHealth with the intent that it would be more appealing to beneficiaries. MassHealth fliers describe the six plan options available, referring to them by names such as “MassHealth Standard” and “MassHealth Basic”—names similar to commercial health plans. Arkansas named its Medicaid expansion program for children ARKids 1st. The logo for the program uses bright colors with the “1” in 1st represented by a crayon.
Georgia has not changed the name of Medicaid, but its outreach project is called “Right From the Start” to project a positive message. Advertisements and fliers for these programs emphasize that they are for a broad population, not just those on welfare. MassHealth fliers state, “There is no reason why a child or a teen in Massachusetts should go without health care.” Massachusetts fliers outline eligibility income levels, showing families with almost $2,400 a month in income and pregnant women with income up to $3,300 a month as eligible. Georgia has a flier entitled “Have you heard about benefits for working families?,” and the first program mentioned is Medicaid for children. Another flier targeted to families leaving welfare to work asks the question, “Did you know you could work full time and still receive some benefits?” (See fig. 4.) Some states have adopted policies to minimize the possibility of displacing private insurance, known as “crowd out.” The Medicaid program cannot refuse enrollment to any eligible individual based on the fact that he or she has insurance, although Medicaid is the payer of last resort. However, some states that have expanded eligibility through waivers of normal program rules have been allowed to limit eligibility if a family already has insurance. For example, in Arkansas, which received a waiver for its expansion, a child is not eligible for ARKids 1st unless he or she has been uninsured for a period of 12 months or the child lost insurance coverage during that period through no fault of the family. In Massachusetts (which also has a waiver for expansion) and Georgia, officials are cognizant of the potential dangers of crowd out. Massachusetts, as part of MassHealth, will subsidize the cost of insurance available to the family. Some states have developed a number of strategies to make the enrollment process easier for working families.
Several states, as part of their outreach effort, have outstationed eligibility workers in sites that families frequent as an alternative to enrolling at the welfare office. In addition, states have simplified and shortened their enrollment applications, allowed applications by mail, dropped asset requirements, and reduced documentation requirements. To help ensure continued coverage of children in families whose income fluctuates, states can provide continuous eligibility. Of the states we contacted, only Arkansas has adopted continuous eligibility for a year for children. Some states have adopted enrollment methods that do not require individuals to visit a welfare office, in part to minimize Medicaid’s association with welfare and welfare families. If families are seeking Medicaid enrollment only for children, Massachusetts and Arkansas allow families to ask questions and request an application by telephone. These two states also accept applications by mail. Completing applications with outreach workers at various nontraditional sites is another way the process is made easier for working families and those without transportation. Each of the states we spoke with had shortened and simplified its enrollment form. Massachusetts officials used focus groups to find out why families did not enroll their children and how barriers to enrollment could be removed. Suggestions from the focus groups—such as adding more space on the enrollment form—helped the state design a simplified form that is easier to read. States have had the option of dropping the asset tests for certain populations. When Arkansas dropped its asset test for the ARKids 1st program, it also dropped the related questions about assets and property, shortening the enrollment form to four pages. Georgia also shortened its enrollment form and dropped the asset test. States are concerned with maintaining program integrity and ensuring that benefits go only to qualified individuals.
However, 40 states have abolished the asset test for some or all children, primarily because the likelihood that these families have substantial assets is low. Table 5 shows the number of states that have made these changes. Few efforts have been made to address the problem of fluctuating family eligibility status, which causes children to be inappropriately disenrolled from Medicaid. As part of its ARKids 1st program, Arkansas is providing 12 months of continuous eligibility to children regardless of changes in family income, under waiver authority granted by HCFA. Until recently, states had to receive a waiver to pursue such a policy. The Balanced Budget Act, however, allows states to adopt 12 months of continuous eligibility. To date, welfare reform has not significantly affected the application process for Medicaid. In a recent report, we found that nine states we contacted have chosen to make few structural changes in their Medicaid programs in the first full year of implementing welfare reform. For example, while the Welfare Reform Act delinked eligibility for cash assistance and Medicaid, the states that we contacted had generally decided not to separate Medicaid and cash assistance program administration. In three of the states that we spoke with for this study, welfare applicants receive a combined form that permits families to apply for both cash assistance and Medicaid, but families applying only for Medicaid receive a shorter form with a subset of questions. Despite the importance of and large investment in providing health care to children in low-income families, difficulties in enrolling them in Medicaid leave more than 3 million children vulnerable. The states that we reviewed recognized that uninsured Medicaid-eligible children are generally in working two-parent families and have targeted their outreach accordingly.
Targeting working families raises the issue of crowd out—replacing employer-based insurance with Medicaid—but states that we contacted have not seen this as a major problem given the low income levels of these families. Only Arkansas has taken direct action to discourage employers from dropping health insurance coverage by enforcing a 12-month waiting period. We found less outreach targeted to Hispanics and immigrants, and experts whom we interviewed said this was generally true, even in states with large immigrant or Hispanic populations. Immigrants, particularly families in which the parents are not naturalized U.S. citizens, are likely to be a more difficult group to reach, both because of the complexities of the law, which makes some but not all immigrant children eligible for Medicaid, and because of the immigrants’ general wariness of government. Some immigrant families include children who—because they were born in this country—are citizens and fully eligible for Medicaid. The states that we studied are, for the most part, using outreach and enrollment strategies available for some time—but not necessarily used for enrolling uninsured children. However, other strategies provided for by the Balanced Budget Act—such as continuous enrollment and presumptive eligibility—have not been widely implemented. CHIP also has considerable potential for identifying uninsured Medicaid-eligible children. The law provides that any child who applies for CHIP and is determined to be Medicaid-eligible should be enrolled in Medicaid. The more that states publicize CHIP, the greater the number of uninsured Medicaid-eligible children they are likely to identify and enroll in Medicaid—particularly if the states’ screening and enrollment process effectively identifies Medicaid-eligible children and enrolls them in the Medicaid program. 
We sought comments on a draft of this report from HCFA; from state officials in Arkansas, Georgia, Massachusetts, and Wisconsin; and from experts on children’s health insurance issues with the Southern Institute on Children and Families and the Center on Budget and Policy Priorities. A number of these officials provided technical or clarifying comments, which we incorporated as appropriate. In addition, HCFA noted that it had sent a letter dated January 23, 1998, to state officials to encourage them to simplify enrollment and expand outreach to the Medicaid-eligible population. As arranged with your office, unless you announce its contents earlier, we plan no further distribution of this report until 30 days after its issuance date. At that time, we will send copies to the Secretary of Health and Human Services, the Administrator of HCFA, the directors of the state programs we spoke with, and interested congressional committees. Copies of the report will be made available to others upon request. If you or your staff have any questions about the information in this report, please call me or Phyllis Thorburn, Assistant Director, at (202) 512-7114. Other contributors to this report were Richard Jensen, Sheila Avruch, and Sarah Lamb. To examine the demographic characteristics of Medicaid-eligible uninsured children, we analyzed the Current Population Survey (CPS), which is used by some researchers to measure health insurance coverage in the United States. This technical appendix discusses the survey, how we measured insurance coverage and estimated Medicaid-eligible children, and how we determined parents’ work effort and immigration status. It also discusses some concerns about how well the CPS measures insurance coverage and compares our estimate of the number of Medicaid-eligible uninsured children with other analysts’ estimates.
The CPS, a monthly survey conducted by the Bureau of the Census, is the source of official government statistics on employment and unemployment. Although the main purpose of the survey is to collect information on employment, an important secondary purpose is to collect information on the demographic status of the population, such as age, sex, race, marital status, educational attainment, and family structure. The March supplement of the CPS survey collects additional data on work experience, income, noncash benefits, and health insurance coverage of each household member at any time during the previous year. The CPS sample is based on the civilian, noninstitutionalized population of the United States. About 48,000 households with approximately 94,000 persons 15 years old and older and approximately 28,000 children aged 0 to 14 years old are interviewed monthly. The sample also includes about 450 armed forces members living in households that include civilians and are either on or off a military base. For the March supplement, an additional 2,500 Hispanic households are interviewed. The households sampled by the CPS are scientifically selected on the basis of area of residence to represent the United States as a whole, individual states, and other specified areas. Children can have multiple sources of health insurance coverage in the same year. The CPS asks about all sources of health insurance coverage. It is impossible to tell, for example, if a child is reported as having both Medicaid and employment-based insurance, whether the child had duplicate coverage, had Medicaid coverage first and then employment-based coverage, or vice versa. For this report, children who had employment-based insurance were reported as having such coverage even if they also had other sources of coverage. Likewise, children who had Medicaid coverage were reported as having such coverage even if they had other sources of coverage. 
As a result, some children were reported as having both public and private coverage—usually Medicaid and employment-based insurance—for the same year. (See fig. 1.)

For this report, children who are uninsured are children for whom no source of coverage during the entire previous year is reported. CPS asks specific questions about whether any members of the household have coverage provided through an employer or union; purchased directly; or have Medicare, Medicaid, or other public coverage. However, it does not directly ask whether an individual is uninsured if no source of coverage is reported.

We defined Medicaid-eligible children in 1996 as children eligible by federal mandate based on age and poverty criteria—children from birth through 5 years old with family income at or below 133 percent of the federal poverty level and children 6 through 12 years old with family income at or below the poverty level. We used income in the immediate family rather than the household income to calculate poverty levels. We did this because states have specific rules on what income can be deemed available to the child to determine Medicaid eligibility, and it may not include income provided to the household by people not related to the child. In addition, employment-based health insurance is usually only available to immediate dependents; therefore, the income and work effort within the nuclear family are more relevant to whether or not the child is insured.

We matched children’s records with parents’ records to analyze family characteristics. CPS considers a family to be two or more persons residing together and related by birth, marriage, or adoption. The Census Bureau develops family records for the householder (a person in whose name the housing unit is owned, leased, or rented or, if no such person, an adult in the household); other relatives of the householder with their own subfamilies; and unrelated subfamilies.
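As a rough sketch, the federal-mandate age and poverty criteria described above can be expressed as a simple test. The function name is ours, and the poverty level is supplied by the caller; the figures in the usage note are illustrative placeholders, not the official 1996 poverty guidelines (which varied by family size).

```python
# Sketch of the 1996 federal-mandate eligibility test described above.
# poverty_level is the applicable federal poverty guideline for the
# child's family size, supplied by the caller.
def medicaid_mandate_eligible(age, family_income, poverty_level):
    """Children 0-5: income at or below 133 percent of poverty;
    children 6-12: income at or below poverty."""
    if 0 <= age <= 5:
        return family_income <= 1.33 * poverty_level
    if 6 <= age <= 12:
        return family_income <= poverty_level
    return False
```

For a hypothetical poverty level of $12,000, a 4-year-old in a family with $15,000 of income would meet the mandate (the 133 percent threshold is $15,960), while an 8-year-old in the same family would not.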
If the house is owned, leased, or rented jointly by a married couple, the householder may be either the husband or wife. We paired children’s records to their parents’ records or, lacking a parent, another adult relative (aged 18 through 64) in their immediate family whom we called a parent. After this pairing, we matched the adult family member’s record to his or her spouse’s record, if any, to get “parents” in our file. We were not able to match all children’s records with records of parents or other relatives in their households. For Medicaid-eligible children, we matched 96 percent of the children’s records. For Medicaid-eligible uninsured children, we matched 92 percent of the children’s records. Some of our tables and figures are based on the entire file of children’s records; others are based on the matched file and are so indicated.

Matching parents with children to analyze the association of workforce participation and insurance for children helped us develop a more accurate picture of uninsured and Medicaid-insured children with working parents. We analyzed parent work status on the basis of information about the parent who worked the most. (See table I.1.) This allowed us to more accurately portray the work status of parents in two-parent families. Where two parents were working in the same status—such as full-time—we matched to the first parent in that work status. The work status categories, in decreasing order of workforce participation, were as follows:

- Either parent worked full-time, full year.
- Neither parent worked full-time, full year, but at least one worked full-time part of the year.
- Neither parent worked full-time, but at least one parent worked part-time for the entire year.
- Neither parent worked either full-time or full year, but at least one parent worked part-time for part of the year.
- Neither parent worked at all during the entire year.

We used the parent with the greater workforce participation to determine children’s birth and immigration status relative to their parents’.
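A minimal sketch of the selection rule just described, assuming hypothetical status labels (the CPS uses coded work-experience variables rather than these strings): the parent with the greater workforce participation determines the family’s category, and ties go to the first parent encountered.

```python
# The table I.1 hierarchy, highest participation first. Labels are
# illustrative stand-ins for the CPS work-experience codes.
WORK_STATUS_RANK = [
    "full-time, full year",
    "full-time, part year",
    "part-time, full year",
    "part-time, part year",
    "did not work",
]

def family_work_status(parent_statuses):
    """Pick the highest-ranked status among a child's parents;
    min() keeps the first parent on ties, matching the text."""
    return min(parent_statuses, key=WORK_STATUS_RANK.index)
```

For example, a family with one parent working part-time for the full year and the other full-time for part of the year would be categorized by the second parent’s status.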
This could lead to a slight underestimate of children in immigrant families, since in some two-parent families, spouses do not have the same birth or citizenship status. However, spouses generally share similar birth and citizenship status. We examined the birth and citizenship status of one parent compared with the other in two-parent families and found that over 90 percent had the same birth and citizenship status as their spouse. Since only about half of Medicaid-eligible children live in two-parent families to begin with, matching to one parent would lead to over 95 percent of children being accurately categorized.

Some researchers who work with survey data to assess the health insurance status of the U.S. population are concerned that the currently used surveys, including CPS, may not accurately reflect health insurance coverage in the United States. CPS and the Survey of Income and Program Participation (SIPP)—another survey that is often used to assess health insurance coverage—report lower Medicaid coverage than HCFA data on Medicaid enrollment. Comparing CPS and SIPP data for similar periods of time, some researchers have concluded that although the CPS asks about insurance coverage for the entire previous year, respondents are reporting coverage based on a shorter time frame—perhaps 4 to 6 months. Researchers at the Urban Institute have concluded that some of the uninsured actually have coverage, probably Medicaid coverage, and adjust their estimates of the uninsured accordingly. Although health researchers are concerned that the CPS may not be ideal for analyzing health insurance coverage, neither is any other currently available survey. Therefore, many researchers continue to use it. GAO chose to use CPS data for its analysis of children’s health insurance coverage for several reasons.
- The CPS can be used to look at trends over time, although care must be taken when making comparisons between years because of questionnaire and methodological changes.
- It has a large sample, which gives estimates from the data more statistical power.
- It was designed so that it can be used for some state-level estimates.

Information from new health insurance surveys is or is becoming available. The National Health Interview Survey periodically asks questions about health insurance coverage, and the Agency for Health Care Policy and Research has released preliminary 1996 estimates of health insurance coverage from the Medical Expenditure Panel Survey. The Center for Studying Health System Change has surveyed health insurance coverage in 1996 and 1997 in its Community Tracking Study (CTS) and is beginning to publish its data. The Urban Institute has also developed and fielded its own health insurance survey. Comparisons of these surveys with the CPS and SIPP may help researchers more definitively agree on the number of uninsured Americans and trends in insurance over time.

Using either CPS or new CTS data, five different groups of researchers compared estimates of uninsured Medicaid-eligible children. (See table I.2.) While the number of Medicaid-eligible children and definition of Medicaid eligibility used by the researchers differed, all came up with a similar conclusion—many uninsured children are eligible for Medicaid. The researchers’ estimates ranged from 24 to 45 percent.

Since CPS estimates come from a sample, they may differ from figures from a complete census using the same questionnaires, instructions, and enumerators. A sample survey estimate has two possible types of errors: sampling and nonsampling. Each of the studies mentioned above—using either CPS or other sampling surveys—has the same possible errors. The accuracy of an estimate depends on both types of error, but the full extent of the nonsampling error is unknown.
Several sources of nonsampling errors include the following:

- inability to get information about all sample cases;
- definitional difficulties;
- differences in interpretation of questions;
- respondents’ inability or unwillingness to provide correct information;
- respondents’ inability to recall information;
- errors made in data collection, such as recording and coding data;
- errors made in processing data;
- errors made in estimating values for missing data; and
- failure to represent all units with the sample (undercoverage).

Tables II.1 through II.3 provide a demographic profile of Medicaid-eligible children in 1996.

Table II.1: Number and Percentage of Medicaid-Eligible Children Who Were Insured by Medicaid or Uninsured in 1996, by Race and Ethnicity [table data not reproduced]

Related GAO products:

Medicaid: Early Implications of Welfare Reform for Beneficiaries and States (GAO/HEHS-98-62, Feb. 24, 1998).
Health Insurance: Coverage Leads to Increased Health Care Access for Children (GAO/HEHS-98-14, Nov. 24, 1997).
Uninsured Children and Immigration, 1995 (GAO/HEHS-97-126R, May 27, 1997).
Health Insurance for Children: Declines in Employment-Based Coverage Leave Millions Uninsured; State and Private Programs Offer New Approaches (GAO/T-HEHS-97-105, Apr. 8, 1997).
Employment-Based Health Insurance: Costs Increase and Family Coverage Decreases (GAO/HEHS-97-35, Feb. 24, 1997).
Children’s Health Insurance, 1995 (GAO/HEHS-97-68R, Feb. 19, 1997).
Children’s Health Insurance Programs, 1996 (GAO/HEHS-97-40R, Dec. 3, 1996).
Private Health Insurance: Millions Relying on Individual Market Face Cost and Coverage Trade-Offs (GAO/HEHS-97-8, Nov. 25, 1996).
Medicaid and Uninsured Children, 1994 (GAO/HEHS-96-174R, July 9, 1996).
Health Insurance for Children: Private Insurance Coverage Continues to Deteriorate (GAO/HEHS-96-129, June 17, 1996).
Health Insurance for Children: State and Private Programs Create New Strategies to Insure Children (GAO/HEHS-96-35, Jan. 18, 1996).
Health Insurance for Children: Many Remain Uninsured Despite Medicaid Expansion (GAO/HEHS-95-175, July 19, 1995).
Medicaid: Spending Pressures Drive States Toward Program Reinvention (GAO/HEHS-95-122, Apr. 4, 1995).
Medicaid: Restructuring Approaches Leave Many Questions (GAO/HEHS-95-103, Apr. 4, 1995).
Health Care Reform: Potential Difficulties in Determining Eligibility for Low-Income People (GAO/HEHS-94-176, July 11, 1994).

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 37050
Washington, DC 20013

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000, by using fax number (202) 512-6061, or by TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reported on children who are eligible for Medicaid but are not enrolled, focusing on: (1) the demographic and socioeconomic characteristics of children who qualify for Medicaid, and identifying groups in which uninsured children are concentrated and to whom outreach efforts might be directed; (2) the reasons these children are not enrolled in Medicaid; and (3) strategies that states and communities are using to increase enrollment. GAO noted that: (1) the demographic and socioeconomic characteristics of uninsured Medicaid-eligible children suggest that outreach strategies could be targeted to specific groups; (2) in 1996, 3.4 million Medicaid-eligible children--23 percent of those eligible under the federal mandate--were uninsured; (3) the majority were children of working poor or near poor, and their parents were often employed by small firms and were themselves uninsured; (4) uninsured children who are eligible for Medicaid are more likely to be in working families, Hispanic, and either U.S.-born to foreign-born parents or foreign born; (5) state officials, beneficiary advocates, and health care providers whom GAO contacted cited several reasons that families do not enroll their children in Medicaid; (6) lower income working families may not realize that their children qualify for Medicaid, or they may think their children do not need coverage if they are not currently sick; (7) under welfare reform, the delinking of Medicaid and cash assistance may cause some confusion for families, although GAO found that states were making efforts to retain a single application and eligibility determination process to avoid this problem; (8) in addition, many low-income families believe that Medicaid carries the same negative image of dependency that they attach to welfare; (9) immigrant families, many of whom are Hispanic, face additional barriers, including language and cultural separateness, fear of dealing with the government, and
changing eligibility rules; (10) the enrollment process for Medicaid can involve long forms and extensive documentation, which are intended to ensure program integrity but often are a major deterrent to enrollment; (11) recognizing these impediments, some states have undertaken education and outreach initiatives and have tried to change the image of the program and simplify enrollment to acquire only necessary information; (12) these efforts include mass media campaigns and coordination of effort with community organizations and provider groups; (13) some states have made the enrollment process more accessible for working families, using mail-in applications or enrollment at sites chosen for their convenience; (14) several states have changed the name of the program to minimize its identification with welfare and other assistance programs; (15) many states provide Spanish-language applications and some are working with community groups; and (16) some states have also simplified the enrollment procedure by shortening the enrollment form and reducing the documentation requirements.
The federal government receives amounts from numerous sources in addition to tax revenues, including user fees, fines, penalties, and intragovernmental fees. Whether these collections are dedicated to a particular purpose and available for agency use without further appropriation depends on the type of collection and its specific authority.

User fees: User fees are fees assessed to users for goods or services provided by the federal government. They are an approach to financing federal programs or activities that, in general, are related to some voluntary transaction or request for government services above and beyond what is normally available to the public. User fees are a broad category of collections, whose boundaries are not clearly defined. They encompass charges for goods and services provided to the public, such as fees to enter a national park, as well as regulatory user fees, such as fees charged by the Food and Drug Administration for prescription drug applications. Unless Congress has provided specific statutory authority for an agency to use (i.e., obligate and spend) fee collections, fees are deposited to the Treasury as miscellaneous receipts and are generally not available to the agency.

Fines, penalties, and settlement proceeds: Criminal fines and penalty payments are imposed by courts as punishment for criminal violations. Civil monetary penalties are not a result of criminal proceedings but are employed by courts and federal agencies to enforce federal laws and regulations. Settlement proceeds result from an agreement ending a dispute or lawsuit. As with user fees, unless Congress has provided specific statutory authority for an agency to use fines, penalties, and settlements, those collections are deposited as miscellaneous receipts and are generally not available to the agency.

Intragovernmental fees are charged by one federal agency to another for goods and services such as renting space in a building or cybersecurity services.
Unlike user fees, fines, and penalties, unless Congress has specified otherwise, agencies generally have authority to use intragovernmental fees without further appropriation.

In 2013, we identified six key fee design decisions related to how fees are set, used, and reviewed that, in the aggregate, enable Congress to design fees that strike its desired balance between agency flexibility and congressional control. Four of the six key design decisions relate to how the fee collections are used, and in 2015 we reported that they are applicable to fines and penalties (see figure 1). Congress determines the availability of collections by defining the extent to which an agency may use (i.e., obligate and spend) them, including the availability of the funds, the period of time the collections are available for obligation, the purposes for which they may be used, and the amount of collections that are available to the agency.

Availability. Congressional decisions about the use of a fee, fine, or penalty will determine how the funds will be considered within the context of all federal budgetary resources. Collections are classified into three major categories: offsetting collections, offsetting receipts, or governmental receipts. Funds classified as offsetting collections can provide agencies with more flexibility because they are generally available for agency obligation without further legislative action. In contrast, offsetting receipts and governmental receipts offer greater congressional control because, generally, additional congressional action is needed before the collections are available for agency obligation.

Time. When Congress provides that an agency’s collections are available until they are expended, agencies have greater flexibility and can carry over unobligated amounts to future fiscal years. This enables agencies to align collections and costs over a longer time period and to better prepare for, and adjust to, fluctuations in collections and costs.
Funds set aside or reserved can sustain operations in the event of a sharp downturn in collections or increase in costs. Carrying over unobligated balances from year to year, if an agency has multi- or no-year collections, is one way agencies can establish a reserve.

Purpose. Congress sets limits on the activities or purposes for which an agency may use collections. Congress has granted some agencies broad authority to use some of their collections for any program purpose, but has limited the use of other collections to specific sets of activities. Narrower restrictions may benefit stakeholders and increase congressional control. On the other hand, statutes that too narrowly limit how collections can be used reduce both Congress’s flexibility to make resource decisions and an agency’s flexibility to reallocate resources. This can make it more difficult to pursue public policy goals or respond to changing program needs, such as when the activities intended to achieve the purposes of the related program change.

Amount. Congress determines the specific level of budget authority provided for a program’s activities by limiting the amount of collections that can be collected or used by the agency; however, these limits can also pose challenges for the agency. For example, when a fee-funded agency is not authorized to retain or use all of its fee collections and no other funding sources are provided, the agency may not have the funds available to produce the goods or services that it has promised or that it is required to provide by law. Our design guides can help Congress consider the implications and tradeoffs of various design alternatives.
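The carryover flexibility described under Time can be illustrated with a toy simulation (all figures are hypothetical, not drawn from any program discussed here): when collections are available until expended, unobligated balances from strong years cushion years when costs exceed collections.

```python
# Hypothetical multiyear authority: unobligated balances carry forward,
# so a dip in collections (year 3) does not interrupt steady program
# costs. Dollar amounts are illustrative.
collections = [100, 90, 70, 110]  # annual collections, $ millions
annual_cost = 85                  # steady program costs, $ millions
balance = 0
for year, c in enumerate(collections, start=1):
    balance += c - annual_cost
    print(f"Year {year}: unobligated balance = {balance}")
```

In this sketch the balance stays positive throughout (15, 20, 5, then 30), even though year 3 collections fall 15 below costs; without carryover authority, that year’s shortfall would have to be covered some other way.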
One key design element is whether the funds will be (1) deposited to the Treasury as miscellaneous receipts for general support of federal government activities, (2) dedicated to the related program with availability subject to further appropriation, (3) dedicated to the related program and available without further congressional action, or (4) available based on a combination of these authorities.

Some authorities to collect fees, fines, and penalties specify that the funds will be deposited to the Treasury as miscellaneous receipts. These funds are not dedicated to the agency or program under which they were collected; they are used for the general support of federal government activities. For example:

Penalties from financial institutions: Civil monetary penalty payments collected from financial institutions by certain financial regulators, including the Office of the Comptroller of the Currency and the Federal Deposit Insurance Corporation, are deposited to the Treasury as miscellaneous receipts. In March 2016, we reported that, from January 2009 through December 2015, financial regulators and components within the Department of the Treasury deposited $2.7 billion to the Treasury as miscellaneous receipts from enforcement actions assessed against financial institutions for violations related to anti-money laundering, anti-corruption, and U.S. sanctions programs requirements.

Federal Communications Commission (FCC) Application Fees: The FCC regulates interstate and international communications by radio, television, wire, satellite, and cable, and telecommunications services for all people of the United States. FCC collects application fees from companies for activities such as license applications, renewals, or requests for modification. As we reported in September 2013, these fees are deposited to the Treasury as miscellaneous receipts.

Some fees, fines, and penalties cannot be used by an agency without being further appropriated to the agency.
For example:

Customs and Border Protection’s (CBP) Merchandise Processing Fee: Importers of cargo pay a fee to offset the costs of “customs revenue functions” as defined in statute, and the automation of customs systems. CBP deposits merchandise processing fees as offsetting receipts to the Customs User Fee Account, with availability subject to appropriation. In July 2016, we reported that in fiscal year 2014 merchandise processing fee collections totaled approximately $2.3 billion.

Requiring an appropriation to make the funds available to an agency increases opportunities for congressional oversight on a regular basis. When the amount of collections exceeds the amount of the appropriation, however, unobligated collection balances that are not available to the agency may accumulate. For example:

Securities and Exchange Commission (SEC) Fees: When SEC collects more in Section 31 fees than its annual appropriation, the excess collections are not available for obligation without additional congressional action. In September 2015, we reported that at the end of fiscal year 2014, the SEC had a $6.6 billion unavailable balance in its Salaries and Expenses account because the fee collections exceeded appropriations.

Environmental Protection Agency (EPA) Motor Vehicle and Engine Compliance Program (MVECP) Fees: MVECP fee collections are deposited into EPA’s Environmental Services Special Fund. As we reported in September 2015, according to officials, Congress had not appropriated money to EPA from this fund for MVECP purposes. EPA instead received annual appropriations which may be used for MVECP purposes. As a result, the unavailable balance of this fund steadily increased and totaled about $370 million at the end of fiscal year 2014.

U.S.
Army Corps of Engineers Harbor Maintenance Fee: The authorizing legislation generally designates that the purpose for the fee collections is harbor maintenance activities but, as we reported in February 2008, fee collections have substantially exceeded spending on harbor maintenance. In July 2016, we reported that the Harbor Maintenance Trust Fund had a balance of over $8 billion at the end of fiscal year 2014.

U.S. Patent and Trademark Office (USPTO) Fees: In September 2013, we reported that in some years Congress chose not to make available to USPTO the full amount of its collections which, according to USPTO officials, contributed to USPTO’s inability to hire sufficient examiners to keep up with USPTO’s workload and invest in technology systems needed to modernize the USPTO. According to USPTO officials, patent fee collections can only be used for patent processes, and trademark fee collections can only be used for trademark processes, as well as to cover each process’s proportionate share of the administrative costs of the agency. USPTO officials stated that patent and trademark customers are typically two distinct groups and this division helps to assure stakeholders that their fees are supporting the activities that affect them directly.

Some programs include mechanisms to link the amount of collections with the amount of collections appropriated to the program, over time. For example:

Food and Drug Administration (FDA) Prescription Drug User Fees: If FDA prescription drug user fee collections are higher than the amount of the collections appropriated for the fiscal year, FDA must adjust fee rates in a subsequent year to reduce its anticipated fee collections by the excess amount. In March 2012, we reported that in fiscal year 2010, Prescription Drug User Fee Act user fees collected by FDA—including application, establishment, and product fees—totaled more than $529 million, including over $172 million in application fees.
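The adjustment mechanism described for FDA’s prescription drug user fees can be sketched as simple arithmetic. The function name and the dollar figures in the usage note are ours, not FDA’s, and the actual statutory adjustment involves additional rate-setting details.

```python
# Sketch: if this year's collections exceed the amount appropriated,
# a subsequent year's anticipated collections are reduced by the
# excess; otherwise the anticipated amount is unchanged.
def adjust_anticipated_collections(anticipated, collected, appropriated):
    excess = max(0, collected - appropriated)
    return anticipated - excess
```

For instance, with $550 million collected against a $500 million appropriation, a hypothetical $600 million target for a subsequent year would be reduced to $550 million; if collections had fallen short of the appropriation, the target would be left at $600 million.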
Legislation authorizing a fee, fine, or penalty may give the agency authority to use collections without additional congressional action. We refer to the legal authorities that provide agencies with permanent authority to both collect and obligate funds from sources such as fees, fines, and penalties as “permanent funding authorities.” Agencies with these permanent funding authorities have varying degrees of autonomy, depending in part on the extent to which the statute limits when, how much, and for what purpose funds may be obligated. Some examples include the following:

National Park Service (NPS) Fees: NPS fees include recreation fees—primarily entrance and amenity fees—and commercial service fees paid by private companies that provide services, such as operating lodges and retail stores in park units. In December 2015, we reported that in fiscal year 2014 the NPS collected about $186 million in recreation fees and about $95 million in commercial service fees.

U.S. Department of Agriculture Animal and Plant Health Inspection Service (APHIS) Agricultural Quarantine Inspection (AQI) Fees: The AQI program provides for inspections of imported agricultural goods, products, passenger baggage, and vehicles to prevent the introduction of harmful agricultural pests and diseases. APHIS is authorized to set and collect user fees sufficient to cover the cost of providing and administering AQI services in connection with the arrival of commercial vessels, trucks, railcars, and aircraft, and international passengers. AQI fee collections are available without fiscal year limitation and may be used for any AQI-related purpose without further appropriation. In March 2013, we reported that in fiscal year 2012, AQI fee collections totaled about $548 million.

Environmental Protection Agency (EPA) Superfund Settlements: Under the Superfund program, EPA has the authority to clean up hazardous waste sites and then seek reimbursement from potentially responsible parties.
EPA is authorized to retain and use funds received from certain types of settlements with these parties in interest-earning, site-specific special accounts within the Hazardous Substance Superfund Trust Fund. EPA generally uses these funds for future cleanup actions at the sites associated with a specific settlement or to reimburse appropriated funds that EPA had previously used for response activities at these sites. In January 2012, we reported that as of October 2010 EPA held nearly $1.8 billion in unobligated funds in 947 open special accounts for 769 Superfund sites.

Tennessee Valley Authority (TVA) Collections: The TVA, the nation’s largest public power provider, has authority to use payments it receives from selling power to the public without further appropriation. In October 2011, we reported that TVA had annual revenues of about $11 billion.

Presidio Trust Collections: The Presidio Trust, a congressionally chartered organization, manages The Presidio, an urban park in San Francisco, and sustains its operations in part by rental income from residential and commercial buildings on its grounds.

Agencies can also be authorized to retain intragovernmental fees charged to other agencies in exchange for a good or service. Some agencies are fully supported by intragovernmental fees; for others, intragovernmental fees are one of their sources of funds.

Federal Protective Service (FPS) Fees: The FPS is a fully fee-funded organization authorized to charge customer agencies fees for security services at federal facilities and to use those offsetting collections for all agency operations. In July 2016, we reported that, at the end of fiscal year 2014, FPS had an unobligated balance of approximately $193 million and that FPS had not established targets to determine the extent to which that balance was appropriate to fund its operations.
Federal Aviation Administration (FAA) Franchise Fund Customer Fees: FAA’s Administrative Services Franchise Fund provides goods and services—including training and specialized aircraft maintenance—to customer agencies on a fee-for-service basis.

National Park Service (NPS) Fees: NPS collections include intragovernmental fees, as well as user fees and appropriations. For example, in October 2016, we reported that NPS received funding from the Department of the Army to contract with the National Symphony Orchestra for holiday concerts on the U.S. Capitol Grounds.

Even when an agency has a permanent authority to use collections, collections remain subject to congressional oversight at any point in time, and Congress can place limitations on obligations for any given year. For example:

U.S. Citizenship and Immigration Services (USCIS) Fees: USCIS is authorized to charge fees for adjudication and naturalization services, including a premium-processing fee for employment-based petitioners and applicants. The House Report to the fiscal year 2008 Department of Homeland Security Appropriations Bill, H.R. 2638, directed USCIS to allocate all premium-processing fee collections to information technology and business-systems transformation. In January 2009, we reported that, consistent with this directive, USCIS’s 2007 fee review stated that the agency intended to use all premium processing collections to fund infrastructure improvements to transform USCIS’s paper-based data systems into a modern, digital processing resource. In July 2016, we reported that USCIS estimated that the unobligated carryover balance for the premium processing fee could grow to $1.1 billion by fiscal year 2020, as fee collections are expected to exceed Transformation initiative funding requirements in fiscal years 2015 through 2020.
Department of Justice’s (DOJ) Crime Victims Fund (CVF) Fines and Penalties: Criminal fines and penalties collected from offenders, among other sources, are deposited in the CVF and can be used without further appropriation to fund victims’ assistance programs and directly compensate crime victims. In February 2015, we reported that in fiscal years 2009 through 2013, annual appropriations acts limited the CVF amounts the DOJ’s Office of Justice Programs may obligate for these purposes.

In some cases, Congress has provided agencies with permanent authority to use a portion of collections and designated other portions of the collections for another use or to be deposited to the Treasury as miscellaneous receipts.

Bureau of Land Management (BLM) Grazing Fees: Since the early 1900s, the federal government has required ranchers to pay a fee for grazing their livestock on millions of acres of federal land located primarily in western states. The relevant authorities designate a portion of the grazing fees collected by the BLM for range improvement, a portion to states, and a portion to be deposited to the Treasury as miscellaneous receipts. For example, in September 2005, we reported that in fiscal year 2004 the BLM collected about $11.8 million in grazing fees, half of which was deposited to a special fund receipt account in the Treasury for range rehabilitation, protection, and improvements. Of the other half of the collections, about $2.2 million was distributed to states and counties and about $3.7 million was deposited to the Treasury as miscellaneous receipts.

Department of Housing and Urban Development (HUD) Mutual Mortgage Insurance Fund Settlement: HUD’s Mutual Mortgage Insurance Fund receives payments resulting from violations related to single-family programs. The primary purpose of the Mutual Mortgage Insurance Fund is to pay lenders in cases where borrowers default on their loan and the lender makes a claim for mortgage insurance benefits.
In November 2016, we reported on a case involving False Claims Act violations and loans backed by HUD’s Federal Housing Administration (FHA) in which a portion of the settlement was paid to the company that had filed a False Claims Act complaint on behalf of the government. The other FHA-related settlement proceeds were divided among, and deposited to, the Mutual Mortgage Insurance Fund, the Treasury as miscellaneous receipts, and DOJ’s Three Percent Fund.

DOJ Drug Enforcement Administration (DEA) Diversion Control Fees: The first $15 million of fees collected each year from DEA registrants such as manufacturers, distributors, dispensers, importers, and exporters of controlled substances (such as narcotics and stimulants) and certain listed chemicals (such as ephedrine) is deposited to the Treasury as miscellaneous receipts. As we reported in February 2015, fees collected beyond $15 million are available to the agency and are obligated to recover the full costs of DEA’s diversion control program.

DOJ Three Percent Fund Penalties: Most civil penalties resulting from DOJ litigation are subject to a fee of up to 3 percent, which is disbursed to DOJ’s Three Percent Fund—which is primarily used to offset DOJ expenses related to civil debt collection. The remainder of the civil penalty amount collected may be deposited to the Treasury as miscellaneous receipts or to another account. For example, in February 2015, we reported on a civil settlement involving fraud against the U.S. Postal Service. Of the $13 million that was awarded to the U.S. Postal Service, DOJ deposited $390,000 (3 percent) into the Three Percent Fund.

Chairmen Meadows and Jordan, Ranking Members Connolly and Cartwright, and Members of the Subcommittees, this concludes our prepared statement. We would be pleased to respond to any questions you may have at this time.
If you or your staff members have any questions about this testimony, please contact Heather Krause, Acting Director, Strategic Issues, at (202) 512-6806 or krauseh@gao.gov, or Edda Emmanuelli Perez, Managing Associate General Counsel, Office of General Counsel, at (202) 512-2853 or EmmanuelliPerezE@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. GAO staff who made key contributions to this testimony are Susan J. Irving, Director; Julia Matta, Assistant General Counsel for Appropriations Law; Susan E. Murphy, Assistant Director; Laurel Plume, Analyst-in-Charge; and Amanda Postiglione, Senior Attorney. Allison Abrams, Dawn Bidne, Elizabeth Erdmann, Chris Falcone, Valerie Kasindi, and Jeremy Manion also contributed.

Principles of Federal Appropriations Law, Chapter 2, The Legal Framework, Fourth Edition, 2016 Revision. GAO-16-464SP. Washington, D.C.: March 10, 2016.
Federal User Fees: Key Considerations for Designing and Implementing Regulatory Fees. GAO-15-718. Washington, D.C.: September 16, 2015.
Federal User Fees: Fee Design Options and Implications for Managing Revenue Instability. GAO-13-820. Washington, D.C.: September 30, 2013.
Congressionally Chartered Organizations: Key Principles for Leveraging Nonfederal Resources. GAO-13-549. Washington, D.C.: June 7, 2013.
Federal User Fees: A Design Guide. GAO-08-386SP. Washington, D.C.: May 29, 2008.
Principles of Federal Appropriations Law, Third Edition, Volume II. GAO-06-382SP. Washington, D.C.: February 1, 2006.
A Glossary of Terms Used in the Federal Budget Process. GAO-05-734SP. Washington, D.C.: September 1, 2005.
Federal Trust and Other Earmarked Funds: Answers to Frequently Asked Questions. GAO-01-199SP. Washington, D.C.: January 1, 2001.
Budget Issues: Inventory of Accounts With Spending Authority and Permanent Appropriations, 1997. OGC-98-23. Washington, D.C.: January 19, 1998.
Budget Issues: Inventory of Accounts With Spending Authority and Permanent Appropriations, 1996. AIMD-96-79. Washington, D.C.: May 31, 1996.
Financial Institutions: Penalty and Settlement Payments for Mortgage-Related Violations in Selected Cases. GAO-17-11R. Washington, D.C.: November 10, 2016.
U.S. Capitol Grounds Concerts: Improvements Needed in Management Approval Controls over Certain Payments. GAO-17-44. Washington, D.C.: October 25, 2016.
DHS Management: Enhanced Oversight Could Better Ensure Programs Receiving Fees and Other Collections Use Funds Efficiently. GAO-16-443. Washington, D.C.: July 21, 2016.
Revolving Funds: Additional Pricing and Performance Information for FAA and Treasury Funds Could Enhance Agency Decisions on Shared Services. GAO-16-477. Washington, D.C.: May 10, 2016.
Financial Institutions: Fines, Penalties, and Forfeitures for Violations of Financial Crimes and Sanctions Requirements. GAO-16-297. Washington, D.C.: March 22, 2016.
National Park Service: Revenues from Fees and Donations Increased, but Some Enhancements Are Needed to Continue This Trend. GAO-16-166. Washington, D.C.: December 15, 2015.
Department of Justice: Alternative Sources of Funding Are a Key Source of Budgetary Resources and Could Be Better Managed. GAO-15-48. Washington, D.C.: February 19, 2015.
Agricultural Quarantine Inspection Fees: Major Changes Needed to Align Fee Revenues with Program Costs. GAO-13-268. Washington, D.C.: March 1, 2013.
Patent and Trademark Office: New User Fee Design Presents Opportunities to Build on Transparency and Communication Success. GAO-12-514R. Washington, D.C.: April 25, 2012.
Prescription Drugs: FDA Has Met Most Performance Goals for Reviewing Applications. GAO-12-500. Washington, D.C.: March 30, 2012.
Superfund: Status of EPA’s Efforts to Improve Its Management and Oversight of Special Accounts. GAO-12-109. Washington, D.C.: January 18, 2012.
Tennessee Valley Authority: Full Consideration of Energy Efficiency and Better Capital Expenditures Planning Are Needed. GAO-12-107. Washington, D.C.: October 31, 2011.
Budget Issues: Better Fee Design Would Improve Federal Protective Service’s and Federal Agencies’ Planning and Budgeting for Security. GAO-11-492. Washington, D.C.: May 20, 2011.
Federal User Fees: Additional Analyses and Timely Reviews Could Improve Immigration and Naturalization User Fee Design and USCIS Operations. GAO-09-180. Washington, D.C.: January 23, 2009.
Federal User Fees: Substantive Reviews Needed to Align Port-Related Fees with the Programs They Support. GAO-08-321. Washington, D.C.: February 22, 2008.
Livestock Grazing: Federal Expenditures and Receipts Vary, Depending on the Agency and the Purpose of the Fee Charged. GAO-05-869. Washington, D.C.: September 30, 2005.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Congress exercises its constitutional power of the purse by appropriating funds and prescribing conditions governing their use. Through annual appropriations and other laws that constitute permanent appropriations, Congress provides agencies with authority to incur obligations for specified purposes. The federal government receives funds from a variety of sources, including tax revenues, fees, fines, penalties, and settlements. Collections from fees, fines, penalties, and settlements involve billions of dollars and fund a wide variety of programs. The design and structure—and corresponding agency flexibility and congressional control—of these statutory authorities can vary widely. In many cases, Congress has provided agencies with permanent authority to collect and obligate funds from fees, fines, and penalties without further congressional action. This authority is a form of appropriations and is subject to the fiscal laws governing appropriated funds. In addition, annual appropriation acts may limit the availability of those funds for obligation. Given the nation's fiscal condition, it is critical that every funding source and spending decision be carefully considered and applied to its best use.

This testimony provides an overview of key design decisions related to the use of federal collections outlined in prior GAO reports, with examples of specific fees, fines, and penalties from GAO reports issued between September 2005 and November 2016.

GAO's prior work has identified four key design decisions related to how fee, fine, and penalty collections are used that help Congress balance agency flexibility and congressional control. One of these key design decisions is the congressional action that triggers the use of collections. The table below outlines the range of structures that establish an agency's use of collections and examples of fees, fines, and penalties for each structure.
Source: GAO analysis of applicable laws. | GAO-17-268T

As GAO has previously reported, these designs involve different tradeoffs and implications. For example, requiring collections to be annually appropriated before an agency can use them increases opportunities for congressional oversight on a regular basis. Conversely, if Congress grants an agency authority to use collections without further congressional action, the agency may be able to respond more quickly to customers or changing conditions. Even when an agency has permanent authority to use collections, the funds remain subject to congressional oversight at any point in time, and Congress can place limitations on obligations for any given year.
Emerging infectious diseases pose a growing health threat to people everywhere. Some emerging infections result from deforestation, increased development, and other environmental changes that bring people into contact with animals or insects that harbor diseases only rarely encountered before. However, others are familiar diseases that have developed resistance to the antibiotics that brought them under control just a generation ago. Infectious diseases account for considerable health care costs and lost productivity. In this country, about one-fourth of all doctor visits involve infectious diseases. The number of pathogens resistant to one or more previously effective antibiotics is increasing rapidly, reducing treatment options and adding to health care costs.

Surveillance is public health officials’ most important tool for detecting and monitoring both existing and emerging infections. Without adequate surveillance, local, state, and federal officials cannot know the true scope of existing health problems and may not recognize new diseases until many people have been affected. Health officials also use surveillance data to allocate their staff and dollar resources and to monitor and evaluate the effectiveness of prevention and control programs.

The states have principal responsibility for protecting the public’s health and, therefore, take the lead role in surveillance efforts. Each state decides for itself which diseases physicians, hospitals, and others should report to its health department and which information it will then pass on to the Centers for Disease Control and Prevention (CDC). Most state surveillance programs include infections from the list of “nationally notifiable” diseases, which the Council of State and Territorial Epidemiologists (CSTE), in consultation with CDC, reviews annually. Nationally notifiable diseases are those considered important enough to the nation as a whole that they should be routinely reported to CDC.
However, states are under no obligation to include nationally notifiable diseases in their own surveillance programs, and state reporting to CDC is voluntary. The methods for detecting emerging infections are the same as those used to monitor infectious diseases generally. These methods can be characterized as passive or active. Passive surveillance relies on laboratory and hospital staff, physicians, and other relevant sources to take the initiative to provide data to the health department, where officials analyze and interpret the information as it comes in. Under active surveillance, public health officials contact people directly to gather data. For example, health department staff could call clinical laboratories each week to ask if any samples of S. pneumoniae tested positive for resistance to penicillin. Active surveillance produces more complete information than passive surveillance, but it takes more time and costs more. Infectious diseases surveillance in the United States depends largely on passive methods of collecting disease reports and laboratory test results. Consequently, the surveillance network relies on the participation of health care providers, private laboratories, and state and local health departments across the nation. Even when states require reporting of specific diseases, experts acknowledge that the completeness of reporting varies by disease and type of provider. Surveillance usually begins when a person with a reportable disease seeks care and the physician—in an effort to determine the cause of the illness—runs a laboratory test, which could be performed in the physician’s office, a hospital, an independent clinical laboratory, or a public health laboratory. 
Reports of infectious diseases generated by such tests are often sent first to local health departments, where staff check the reports for completeness, contact health care professionals to obtain missing information or clarify unclear responses, and forward the reports to state health agencies. At the state level, state epidemiologists analyze data collected through the disease reporting network, decide when and how to supplement passive reporting with active surveillance methods, conduct outbreak and other disease investigations, and design and evaluate disease prevention and control efforts. They also transmit state data to CDC, providing routine reporting on selected diseases. Many state epidemiologists and laboratory directors provide the medical community with information obtained through surveillance, such as rates of disease incidence or prevailing patterns of antimicrobial resistance. Federal participation in the infectious diseases surveillance network focuses on CDC activities—particularly those of the National Center for Infectious Diseases (NCID), which operates CDC’s infectious diseases laboratories. CDC analyzes the data furnished by states to (1) monitor national health trends, (2) formulate and implement prevention strategies, and (3) evaluate state and federal disease prevention efforts. CDC routinely provides public health officials, medical personnel, and others information on disease trends and analyses of outbreaks. CDC also offers an array of scientific and financial support for state infectious diseases surveillance, prevention, and control programs. Public health and private laboratories are a vital part of the surveillance network because only laboratory test results can definitively identify pathogens. In addition, test results are often an essential complement to a physician’s clinical impressions. 
According to public health officials, the nation’s 158,000 laboratories are consistent sources of passively reported information for infectious diseases surveillance. Every state has at least one state public health laboratory that conducts testing for routine surveillance or as part of special clinical or epidemiologic studies. State public health laboratories also provide specialized testing for low-incidence, high-risk diseases, such as tuberculosis and botulism. Testing they provide during an outbreak contributes greatly to tracing the spread of the outbreak, identifying the source, and developing appropriate control measures. Epidemiologists rely on state public health laboratories to document trends and identify events that may indicate an emerging problem. Many state laboratories also provide licensing and quality assurance oversight of commercial laboratories. State public health laboratories are increasingly using advanced technology to identify pathogens at the molecular level. These tests provide information that can enable epidemiologists to tell whether individual cases of illness are caused by the same strain of pathogen—information that is not available from clinical records or other epidemiologic methods. Public health officials have used advanced molecular technology to trace the movement of diseases in ways that would not have been possible 5 years ago. For example, DNA fingerprints developed by laboratories in a CDC-sponsored network showed that drug-resistant strains of tuberculosis first found in New York City have spread to other parts of the country. The fingerprints also showed that tuberculosis can be transmitted during brief contact among people—an important discovery that improved treatment and control programs. CDC laboratories provide highly specialized tests not always available in state public health or commercial laboratories and assist states with testing during outbreaks. 
Specifically, CDC laboratories help diagnose life-threatening, unusual, or exotic infectious diseases; confirm public or private laboratory test results that are difficult to interpret; and conduct research to improve diagnostic methods.

While state surveillance and laboratory testing programs are extensive, not all include every significant emerging infection, leaving gaps in the nation’s surveillance network. Our surveys found that almost all states conducted surveillance of tuberculosis, pertussis, hepatitis C, and virulent strains of E. coli; slightly fewer collected information on cryptosporidiosis. About two-thirds collected information on penicillin-resistant S. pneumoniae. Similarly, state public health laboratories commonly performed tests to support state surveillance of tuberculosis, pertussis, cryptosporidiosis, and virulent strains of E. coli. However, over half of the laboratories did not test for hepatitis C, and about two-thirds did not test for penicillin-resistant S. pneumoniae. Over three-quarters of the responding epidemiologists told us that their surveillance programs either leave out or do not focus sufficient attention on important infectious diseases. Antibiotic-resistant diseases, including penicillin-resistant S. pneumoniae, and hepatitis C were among the diseases they cited most often as deserving greater attention. Moreover, our surveys found that about half of the state laboratories used a molecular technology called pulsed-field gel electrophoresis (PFGE) to support state surveillance of the diseases we asked about. State and CDC officials believe that most, and possibly all, states should have PFGE because it can be used to study many diseases and greatly improves the ability to detect outbreaks.

As part of our surveys and field interviews, we asked state officials to identify the problems they considered most important in conducting surveillance of emerging infectious diseases.
The problems they cited fell principally into two categories: staffing and information sharing. State epidemiologists and laboratory directors told us that staffing constraints prevent them from undertaking surveillance and testing for diseases they consider important. Furthermore, laboratory officials noted that advances in scientific knowledge and the proliferation of molecular testing methods have created a need for training to update the skills of current staff. They reported that such training was often either unavailable or inaccessible because of funding or administrative constraints.

We found considerable variability among states in laboratory and epidemiology staffing. During fiscal year 1997, states devoted a median of 8 staff years per 1 million population to laboratory testing of infectious diseases, with individual states reporting from 1.3 to 89 staff per 1 million population. The variation in epidemiology staffing was even greater, ranging from 2.1 to 321 in individual states, with a median of 14 staff years per 1 million population.

Epidemiologists and laboratory officials alike said that public health departments often lack either basic equipment, such as computers and fax machines, or integrated data systems that would allow them to rapidly share surveillance-related information with public and private partners. For health crises that need an immediate response—as when a serious and highly contagious disease appears in a school or among restaurant staff—rapid sharing of surveillance information is critical. Officials most often attributed the lack of computer equipment and integrated data systems to insufficient funding. Without such equipment, some tasks that could be automated must be done by hand. In some cases, the lack of equipment has required data in electronic form to be converted back to paper form.
For example, representatives from two large, multistate private clinical laboratories told us that data stored electronically in their information systems had to be converted to paper so it could be reported to local health departments. Our survey responses indicate that state laboratory directors use electronic communications systems much less often than do state epidemiologists. Although most laboratory directors use electronic systems to communicate within their laboratories, they often do not use them to communicate with others. For example, almost 40 percent reported rarely using computerized systems to receive surveillance-related data, and 21 percent used them very little to transmit such data. Even with adequate computer equipment, the difficulty of creating integrated information systems can be formidable. Not only does technology change rapidly, but computerized public health data are stored in thousands of isolated locations, including the record and information systems of public health agencies and health care institutions, individual case files, and data files of surveys and surveillance systems. These independent systems have differing hardware and software structures and considerable variation in how the data are coded, particularly for laboratory test results. CDC alone operates over 100 data systems to monitor over 200 health events, such as diagnoses of specific infectious diseases. Many of these systems collect data from state surveillance programs. CDC’s patchwork of data systems arose, in part, to meet federal and state needs for more detailed information for particular diseases than was usually reported. Public health officials told us that the multitude of databases and data systems, software, and reporting mechanisms burdens staff at state and local health agencies and leads to duplication of effort when staff must enter the same data into multiple systems that do not communicate with one another. 
Further, the lack of integrated data management systems can hinder laboratory and epidemiologic efforts to control outbreaks. For example, in 1993, the lack of integrated systems impeded efforts to control the hantavirus outbreak in the Southwest. Data were locked into separate databases that could not be analyzed or merged with others, causing public health investigators to analyze paper printouts by hand. Although many state officials are concerned about their staffing and technology resources, public health officials have not developed a consensus definition of the minimum capabilities that state and local health departments need to conduct infectious diseases surveillance. For example, according to CDC and state health officials, there are no standards for the types of tests state public health laboratories should be able to perform; nor are there widely accepted standards for the epidemiological capabilities state public health departments need. Public health officials have identified a number of elements that might be included in a consensus definition, such as the number and qualifications of laboratory and epidemiology staff; the pathogens that each state laboratory should be able to identify and, where relevant, test for antibiotic resistance; and laboratory and information-sharing technology each state should have. CSTE, the Association of Public Health Laboratories, and CDC have begun collaborating to define the staff and equipment components of a national surveillance system for infectious diseases and other conditions. They plan to develop agreements about the laboratory and epidemiology resources needed to conduct surveillance, diseases that should be under surveillance, and the information systems needed to share surveillance data. According to state and federal officials, this consensus would give state and local health agencies the basis for setting priorities for their surveillance efforts and determining the resources needed to implement them. 
CDC provides state and local health departments with a wide range of technical, financial, and staff resources. Many state laboratory directors and epidemiologists said such assistance has been essential to their ability to conduct infectious diseases surveillance and to take advantage of new laboratory technology; however, a small number of laboratory directors and epidemiologists believe CDC’s assistance has not significantly increased their ability to conduct surveillance of emerging infections. Yet many state officials indicated that improvements are needed, particularly in the area of information-sharing systems. Many state laboratory directors and epidemiologists told us that CDC’s testing, consultation, and training services are critical to their surveillance efforts. More than half of those responding to our surveys indicated that these three services greatly or significantly improved their state’s ability to conduct surveillance. State officials indicated that CDC’s testing for rare pathogens and the ability to consult with experienced CDC staff are important, particularly for investigating cases of unusual diseases, and that CDC’s training was even more significant for improving their ability to conduct surveillance of emerging infections. Over 70 percent of epidemiologists responding to our survey said that when they need assistance, knowledgeable staff at CDC are easy to locate, but many noted that help with matters involving more than one CDC unit is difficult to obtain. Many state officials said that this problem arose when staff in different units did not communicate well with one another. One official described CDC’s units as separate towers that do not interact. State officials and survey respondents also said they would like CDC to provide more timely test results in non-urgent situations and additional training in new laboratory techniques. 
Most survey respondents said that NCID’s disease-specific grants and epidemiology and laboratory capacity grants had made great or significant improvements in their ability to conduct surveillance of emerging infectious diseases. For example, after state laboratories began receiving funds from CDC’s tuberculosis grant program—which go to programs in all states and selected localities—they markedly improved their ability to rapidly identify the disease and indicate which, if any, antibiotics could be used effectively in treatment. State laboratory officials attributed this improvement to the funding and training they received from CDC. In contrast, only eight states receive CDC funding for active surveillance and testing for penicillin-resistant S. pneumoniae. Whereas almost all states and most state laboratories reported that they monitor antibiotic resistance in tuberculosis, far fewer reported monitoring penicillin-resistant S. pneumoniae. Moreover, while all but one state require health care providers to submit tuberculosis reports, fewer than half require reporting of penicillin-resistant S. pneumoniae.

Over the past two decades, CDC has developed and made available to states several general and disease-specific information management and reporting programs. State and federal officials we spoke with said CDC’s systems have limited flexibility for adapting to state program needs—one reason states have developed their own information management systems. Officials told us that two systems used by most laboratory directors and epidemiologists often cannot share data with each other or with other CDC- or state-developed systems. CDC officials responsible for these programs said that the most recent versions can share data more readily with other systems, but the lack of training in how to use the programs and high staff turnover at state agencies may limit the number of state staff able to use the full range of program capabilities.
Many state officials complained about a substantial drain on scarce staff time to enter and reconcile data into multiple systems, such as their own system plus one or more CDC-developed systems. The inability to share data between systems also hinders identifying multiple records on one case and undermines efforts to improve reporting by providers. In response to state and local requests for greater integration of systems, CDC established a board to formulate and enact policy for integrating public health information and surveillance systems. The board brings together federal and state public health officials to focus on issues such as data standards and security, assessing hardware and software used by states, and identifying gaps in CDC databases. CDC and the states have made progress in developing more efficient information-sharing systems through one of CDC’s grant programs: the Information Network for Public Health Officials (INPHO). INPHO is designed to foster communication between public and private partners, make information more accessible, and allow for rapid and secure exchange of data. By 1997, 14 states had begun INPHO projects. Some had combined these funds with other CDC grant moneys to build statewide networks linking state and local health departments and, in some cases, private laboratories. Integrated systems can dramatically improve communication. For example, in Washington, electronic information sharing systems reduced passive reporting time from 35 days to 1 day and gave local authorities access to health data for analysis. Mr. Chairman, this concludes my prepared statement. I will be happy to answer any questions you or other members of the Subcommittee may have.
Pursuant to a congressional request, GAO discussed public health surveillance of emerging infectious diseases, focusing on the role of state laboratories. GAO noted that: (1) surveillance of and testing for important emerging infectious diseases are not comprehensive in all states; (2) GAO found that most states conduct surveillance of five of the six emerging infections GAO asked about, and state public health laboratories conduct tests to support state surveillance of four of the six; (3) however, over half of state laboratories do not conduct tests for surveillance of penicillin-resistant S. pneumoniae and hepatitis C; (4) also, most state epidemiologists believe their surveillance programs do not sufficiently study antibiotic-resistant and other diseases they consider important; (5) many state laboratory directors and epidemiologists reported that inadequate staffing and information-sharing problems hinder their ability to generate and use laboratory data in their surveillance; (6) however, public health officials have not agreed on a consensus definition of the minimum capabilities that state and local health departments need to conduct infectious diseases surveillance; (7) this lack of consensus makes it difficult for policymakers to assess the adequacy of existing resources or to evaluate where investments are needed most; (8) most state officials said the Centers for Disease Control and Prevention's (CDC) testing and consulting services, training, and grant funding support are critical to their efforts to detect and respond to emerging infections; (9) however, both laboratory directors and epidemiologists were frustrated by the lack of integrated systems within CDC and the lack of integrated systems linking them with other public and private surveillance partners; and (10) CDC's continued commitment to integrating its own data systems and to helping states and localities build integrated electronic data and communication systems could give state and local 
public health agencies vital assistance in carrying out their infectious diseases surveillance and reporting responsibilities.
Strategic plans developed by regional organizations can be effective tools to focus resources and efforts to address problems. Effective plans often contain such features as goals and objectives that are measurable and quantifiable. These goals and objectives allow problems and planned steps to be defined specifically and progress to be measured. By specifying goals and objectives, plans can also give planners and decision makers a structure for allocating funding to those goals and objectives. A well-defined, comprehensive strategic plan for the NCR is essential for assuring that the region is prepared for the risks it faces. The Homeland Security Act established the Office of National Capital Region Coordination (ONCRC) within the Department of Homeland Security. The ONCRC is responsible for overseeing and coordinating federal programs for and relationships with state, local, and regional authorities in the NCR and for assessing, and advocating for, the resources needed by state, local, and regional authorities in the NCR to implement efforts to secure the homeland. One of the ONCRC mandates is to coordinate with federal, state, local, and regional agencies and the private sector in the NCR on terrorism preparedness to ensure adequate planning, information sharing, training, and execution of domestic preparedness activities among these agencies and entities. In our earlier work, we reported that the ONCRC and the NCR faced three interrelated challenges in managing federal funds in a way that maximizes the increase in first responder capacities and preparedness while minimizing inefficiency and unnecessary duplication of expenditures. 
These challenges included the lack of a set of accepted benchmarks (best practices) and performance goals that could be used to identify desired goals and determine whether first responders have the ability to respond to threats and emergencies with well-planned, well-coordinated, and effective efforts that involve police, fire, emergency medical, public health, and other personnel from multiple jurisdictions; a coordinated regionwide plan for establishing first responder performance goals, needs, and priorities, and assessing the benefits of expenditures in enhancing first responder capabilities; and a readily available, reliable source of data on the funds available to first responders in the NCR and their use. Without the standards, a regionwide plan, and data on spending, we observed that it would be extremely difficult to determine whether NCR first responders were prepared to respond effectively to threats and emergencies. Regional coordination means the use of governmental resources in a complementary way toward goals and objectives that are mutually agreed upon by various stakeholders in a region. Regional coordination can also help to overcome the fragmented nature of federal programs and grants available to state and local entities. Successful coordination occurs not only vertically among federal, state, and local governments, but also horizontally within regions. The effective alignment of resources for the security of communities could require planning across jurisdictional boundaries. Neighboring jurisdictions may be affected by an emergency situation in many ways, including major traffic or environmental disruptions, activation and implementation of mutual aid agreements, acceptance of evacuated residents, and treatment of casualties in local hospitals. 
Although work has continued on a NCR strategic plan for the past 2 years, a completed plan is not yet available to guide decision making, such as assessing the NCR’s strategic priorities and funding needs and helping NCR jurisdictions ascertain how the NCR strategic plan complements their individual or combined efforts. In May 2004, we recommended that the Secretary of DHS work with the NCR jurisdictions to develop a coordinated strategic plan to establish goals and priorities to enhance first responder capacities that can be used to guide the use of federal emergency preparedness funds, and the department agreed to implement this recommendation. A related recommendation—that DHS monitor the plan’s implementation to ensure that funds are used in a way that promotes effective expenditures that are not unnecessarily duplicative—could not be implemented until the final strategic plan was in place. In July 2005, we testified that, according to a DHS ONCRC official, a final draft for review had been completed and circulated to key stakeholders. The plan was to feature measurable goals, objectives, and performance measures. ONCRC officials said that past references to a NCR strategic plan reflect availability of the core elements of the NCR strategic plan—the mission, vision, guiding principles, long-term goals, and objectives, but not a complete plan. They told us that these core elements, along with other information, will need to be compiled into a strategic planning document. ONCRC officials said that NCR leadership had elected to make the core elements available but to concentrate on preparing other planning and justification documents required for the fiscal year 2006 DHS grant process. NCR planning timelines indicate this decision was made in September 2005. Because a strategic plan was not available, ONCRC officials provided us with several documents that, they said, taken as a whole constitute the basic elements of NCR’s strategic plan. 
These documents include a November 18, 2005, NCR Plenary Session PowerPoint presentation containing information on NCR strategic goals, objectives, and initiatives; a February 1, 2006, National Capital Region Target Capabilities and NCR Projects Work Book; the March 2, 2006, District of Columbia and National Capital Region Fiscal Year 2006 Homeland Security Grant Application Program and Capability Enhancement Plan; the March 2, 2006, National Capital Region Initiatives; and the Fiscal Year 2006 NCR Homeland Security Grant Program Funding Request Investment Justification, submitted to DHS in March 2006. According to ONCRC officials, a complete strategic plan is awaiting integration of additional information that in some cases is not yet complete. This information includes an Emergency Management Accreditation Program (EMAP) assessment of all local jurisdictions in the NCR and regional-level activities, which, according to the ONCRC, is completed but will not be available until sometime in April; the peer review of the status of state and urban area emergency operations plans after Hurricane Katrina, whose completion is anticipated in April 2006; and the fiscal year 2006 homeland security program grant enhancement plan for funding, which was completed in early March 2006. ONCRC officials estimate that after April 2006, it will take approximately 90 more days to integrate these documents and the core framework of the strategic plan, plus approximately 60 days for final review and coordination by the NCR leadership. Thus, an initial strategic plan will not be available until at least September or October 2006. NCR strategic planning should reflect both national and regional priorities and needs. ONCRC officials have said that the November 18, 2005, NCR plenary session PowerPoint presentation represents the vision, mission, and core goals and objectives of the NCR’s strategic plan. 
If the NCR’s homeland security grant program funding documents prepared for DHS are used extensively in NCR strategic planning, a NCR strategic plan might primarily reflect DHS priorities and grant funding—national priorities—and not regionally developed strategic goals and priorities. NCR’s current goals and objectives are shown in table 1. The other four documents that ONCRC represents as constituting the NCR strategic plan were developed in response to federal requirements under the National Preparedness Goal and to support the NCR’s federal funding application. Required by Homeland Security Presidential Directive 8, the National Preparedness Goal is a national domestic all-hazards preparedness goal intended to establish measurable readiness priorities and targets. The fiscal year 2006 Homeland Security Grant Program (HSGP) integrates the State Homeland Security Program, the Urban Areas Security Initiative, the Law Enforcement Terrorism Prevention Program, the Metropolitan Medical Response System, and the Citizen Corps Program. For the first time, starting with the fiscal year 2006 HSGP, DHS is using the National Preparedness Goal to shape national priorities and focus expenditures for the HSGP. According to DHS, the combined fiscal year 2006 HSGP Program Guidance and Application Kit streamlines efforts for states and urban areas in obtaining resources that are critical to building and sustaining capabilities to achieve the National Preparedness Goal and implement state and urban area homeland security strategies. All states and urban areas were required to align existing preparedness strategies within the National Preparedness Goal’s eight national priorities. States and urban areas were required to assess their preparedness needs by reviewing their existing programs and capabilities and use those findings to develop a plan and formal investment justification outlining major statewide, substate, or interstate initiatives for which they will seek funding. 
According to DHS, these initiatives are to focus efforts on how to build and sustain programs and capabilities within and across state boundaries while aligning with the National Preparedness Goal and national priorities. It is, of course, important and necessary that the ONCRC, and other regional and local jurisdictions, incorporate the DHS’s National Preparedness Goal and related target capabilities into their strategic planning. The target capabilities are intended to serve as a benchmark against which states, regions, and localities can measure their own capabilities. However, these national requirements are but one part of developing regional preparedness, response, and recovery assessments and funding priorities specific to the NCR. The NCR’s strategic plan should provide the framework for guiding the integration of DHS requirements into the NCR’s overall efforts. While the NCR strategic plan is not complete, our preliminary review of the NCR initiatives developed to implement NCR’s strategic goals and objectives presented in ONCRC documents indicates they are not completely addressed in the DHS HSGP documents. Using the November 18, 2005, PowerPoint presentation as our primary framework, we identified whether the NCR’s 39 individual regional initiatives were specifically supported in whole or in part by programs or investments in the fiscal year 2006 HSGP documents (enhancement plan and investment justification) prepared for DHS. Our preliminary analysis indicates that regional initiatives defined under NCR strategic goals and objectives have some coverage—individual programs or projects—in the NCR documents prepared for DHS HSGP funding, but not complete coverage. We found that of the NCR’s 16 priority initiatives, 10 were partially addressed in the enhancement plan and 12 were partially addressed in the investment justification. 
Of the other 23 NCR initiatives, 8 were partially addressed in the enhancement plan and 12 were partially addressed in the investment justification. Implementation of regional initiatives not covered by HSGP funding likely would require NCR jurisdictions acting individually or in combination with others. Our preliminary work did not include an assessment of individual jurisdictional efforts to implement the NCR initiatives to determine if uncovered initiatives, particularly those considered priority initiatives, might be addressed by one or more of the NCR jurisdictions. Further work would be required to determine to what extent, if any, the NCR initiatives are addressed in other federal funding applications or individual NCR jurisdictional homeland security initiatives. As I stated earlier, ONCRC officials told us a complete NCR strategic plan is awaiting information from the EMAP assessment, DHS’s peer review of the status of emergency operations plans in the aftermath of Hurricane Katrina, and the fiscal year 2006 homeland security grant program enhancement plan for funding. This information may further emphasize federal priorities in the regional planning process. However, information from these sources should complement the region’s own assessment of preparedness gaps and the development of strategic goals, objectives, and initiatives. Officials from the District of Columbia, Virginia, and Maryland emphasized this point when they testified before this committee in July 2005. At that time, they said that the regional strategic plan would be a comprehensive document that defined priorities and objectives for the entire region without regard to any specific jurisdiction, discipline, or funding mechanisms. In our view, a NCR plan should complement the plans of the various jurisdictions within NCR. 
In the aftermath of the September 11, 2001, terrorist attacks and the creation of the ONCRC, we would have expected the vast majority of this assessment work to have been completed. The NCR is considered a prime target for terrorist events, and other major events requiring a regional response can be anticipated, such as large, dangerous chemical spills. A complete NCR strategic plan based on the November 18 PowerPoint presentation could be strengthened in several ways. In earlier work we have identified characteristics that we consider to be desirable for a national strategy that may be useful for a regional approach to homeland security strategic planning. The desirable characteristics, adjusted for a regional strategy, are purpose, scope, and methodology that address why the strategy was produced, the scope of its coverage, and the process by which it was developed; problem definition and risk assessment that address the particular regional problems and threats the strategy is directed towards; goals, subordinate objectives, activities, and performance measures that address what the strategy is trying to achieve, steps to achieve those results, as well as the priorities, milestones, and performance measures to gauge results; resources, investments, and risk management that address what the strategy will cost, the sources and types of resources and investments needed, and where resources and investments should be targeted by balancing risk reductions and costs; organizational roles, responsibilities, and coordination that address who will be implementing the strategy, what their roles will be compared to those of others, and mechanisms for them to coordinate their efforts; and integration and implementation that address how a regional strategy relates to other strategies’ goals, objectives, and activities, and to state and local governments within their region and their plans to implement the strategy. 
According to the ONCRC, the November 18 PowerPoint presentation contains the core elements of the NCR’s strategic plan—the mission, vision, guiding principles, long-term goals, and objectives. Our preliminary review of the presentation indicates it reflects many of the characteristics we have defined as desirable for a strategy. The presentation includes some material on the purpose, scope, and methodology underlying the presentation; what it covers; and how it was developed. For example, the presentation contains a detailed timeline of key activities in the execution of the strategic plan and how initiatives were prioritized. Particular regional problems and performance gaps are described, including a section on regionwide weaknesses and gaps such as the lack of a regionwide risk assessment framework and inadequate response and recovery for special needs populations. These gaps are cross-referenced to priority initiatives. Specific goals, objectives, and initiatives are in the presentation, cross-referenced to the regional gaps. Some initiative descriptions indicate whether a cost is high, medium, or low, with more detailed cost information summarized elsewhere. Our preliminary review indicates that as the ONCRC fleshes out the November 18 PowerPoint presentation into an initial, complete strategic plan, improvements might be made in (1) initiatives that will accomplish objectives under the strategic goals, (2) performance measures and targets that indicate how the initiatives will accomplish identified strategic goals, (3) milestones or time frames for initiative accomplishment, (4) information on the resources and investment for each initiative, and (5) organizational roles, responsibilities, and coordination, and integration and implementation plans. A discussion of how these elements could be strengthened follows. A NCR strategic plan could more fully develop initiatives to accomplish objectives under the strategic goals. 
For example, the presentation contains several objectives that have only one initiative. A single initiative may not ensure that objectives are accomplished, and it may merely be restating the objective itself. For example, there is only one initiative (regional strategic planning and decision making process enhancements) for Goal 1’s first objective (enhancing and adapting the framework for strategic planning and decision making to achieve an optimal balance of capabilities across the NCR). The initiative in large part restates the objective. This initiative might be replaced by more specific initiatives, or the objective restated and additional initiatives proposed. Other objectives in the November 18 PowerPoint presentation provide a more complete picture of initiatives intended to meet the objective. For any future plan, these initiatives should be reviewed to determine if the current initiatives will fully meet the results expected of the objectives. A NCR strategic plan could more fully measure initiative expectations by improving performance measures and targets. First, in some cases, the performance measures will not readily lend themselves to actual quantitative or qualitative measurement through a tabulation, a calculation, a recording of activity or effort, or an assessment of results that is compared to an intended purpose. Additional measures might be necessary. For example, Goal 1, Objective 1, Initiative 1 (regional strategic planning and decision-making process) includes measures such as (1) the decision-making system is well understood by all stakeholders based on changed behaviors and (2) time and resources required of stakeholders in the region to participate in the decision-making process is more efficient. These could be refined for more direct measurement, or additional measures could be posed, such as specifying behaviors for assessment or what parts of the process might be assessed for efficiency. 
Other measures in the document might serve as examples of more direct measurement, such as those that assess accomplishments using percentages in Goal 2, Objective 4, Initiative 1 (increasing civic involvement in all phases of disaster preparedness). Second, a strategic plan could be improved by (1) expanding the use of outcome measures and targets in the plan to reflect the results of its activities and (2) limiting the use of other types of measures. ONCRC officials said that the performance measures in the November 18 PowerPoint presentation had a greater emphasis on tracking outcomes, rather than inputs. They stated that as programs and projects are funded and implemented, a more thorough effort to develop associated measures for each will be undertaken. With regard to revising measures to reflect funded programs and projects, we would suggest NCR officials focus on measuring outcomes of programs and projects to meet strategic goals and objectives. Our preliminary analysis indicates that several measures are outcome- oriented, such as those for Goal 2, Objective 4, Initiative 1 (increase civic involvement in all phases of disaster preparedness) that has outcome measures such as the percentage of the population that has taken steps to develop personal preparedness and the percentage of the population familiar with workplace, school, and community emergency plans. However, the majority of the presentation’s performance measures and targets are process- or output-oriented and may not match the desired result of the initiative. For example, the Goal 1, Objective 4, Initiative 2 (facilitating practitioner priorities into the program development process) desired outcomes are (1) an easily understood process for participation and feedback of the practitioner stakeholder communities to influence programmatic initiatives and priorities defined in Goal Groups 2, 3, and 4 and (2) an awareness and increased participation in the range of resource opportunities. 
Measures for this initiative include whether communication across Emergency Support Functions (ESFs), an accountability chart, and a governance guidance document show the feedback loop between ESFs, the Senior Policy Group/Chief Administrative Officer (SPG/CAO), and the Regional Working Groups. Such measures identify completed activities or tasks, not how well understood the process is. A fourth measure for this initiative—understanding/agreeing on roles, responsibility, and accountability—might come closer to measuring the desired outcome. Third, many initiatives do not have performance targets. For example, targets are missing for all or some measures for initiatives under Goal 1, Objectives 1, 3, 4, and 5. Other targets are unclear. For example, one measure for both Goal 1, Objective 3, Initiative 1 (tasks and capabilities for the NCR) and Goal 1, Objective 3, Initiative 2 (gap analysis, recommendations, and appropriate actions) is the progress toward closing the gap between baseline and target capabilities. The target is “what we think we need to accomplish in HSPD 7/8.” Any targets such as this would require clarification if progress toward results is to be assessed. A future NCR strategic plan could also be strengthened by including more complete time frames for initiative accomplishment, including specific milestones. In some cases, the time frame description is missing or is inconsistent with time frames provided within performance measure descriptions that generally cover activities or tasks. For example, Goal 3, Objective 1, Initiative 1 (region prevention and mitigation framework) has a time frame of fall 2006, but measures include targets in 2007. In several instances, measures of tasks or activities include milestones, but an overall time frame is not indicated. 
For example, Goal 3, Objective 3, Initiative 1 (critical infrastructure and high-risk targets risk assessments) and Goal 4, Objective 1, Initiative 1 (corrective action program for gaps) do not have time frames identified, but measures have dates extending into 2007 and 2009, respectively. Time frames should also match the initiative. In some cases, it is unclear whether the initiative description should be expanded to encompass activities that appear to fall outside the scope of the initiative as written but that drive the time frame for the overall initiative. For example, Goal 3, Objective 1, Initiative 3 (health surveillance, detection, and mitigation functions plan) has an overall time frame of December 2010, but the 2010 date reflects implementation of a patient tracking system. In the list of measures, the plan itself is targeted for December 2008. Either the initiative description could be changed to include the system, or the patient tracking system measure could be removed or revised. A future NCR strategic plan could provide fuller information on the resources and investments associated with each initiative. For example, each initiative in the November 18 PowerPoint presentation has a section for cost and cost factors. However, the document does not explain what the cost categories of high, medium, and low mean in terms of dollar ranges. ONCRC officials told us that these descriptions should be considered more notional in nature, with a low rating usually meaning well under $1 million and a high rating in the tens of millions. In many cases, the categorization of cost for an initiative is missing from the November 18 PowerPoint presentation initiative sections. More specific cost information by initiative, such as the funded and unfunded grant information that is provided in a summary format, would facilitate decision making in comparing trade-offs as options are considered. 
A plan also could be improved by including the sources of funding for the anticipated costs, whether federal, state, or local, or a combination of multiple sources. Last, any future NCR strategic plan could expand on organizational roles, responsibilities, coordination, and integration and implementation plans. Organizational roles, responsibilities, and coordination for each initiative would clarify accountability and leadership for completion of the initiative. The plan might also include information on how it will be integrated with the strategic plans of the NCR jurisdictions and of the ONCRC, as well as plans to implement the regional strategy. There is no more important element in results-oriented management than strategic planning. It is the starting point and foundation for defining what an organization seeks to accomplish, identifying the strategies it will use to achieve desired results, and then determining how well it succeeds in reaching results-oriented goals and achieving objectives. Establishing clear goals, objectives, and milestones; setting performance goals; assessing performance against goals to set priorities; and monitoring the effectiveness of actions taken to achieve the designated performance goals are all part of the planning process. If done well, strategic planning is not a static or occasional event, but rather a dynamic and inclusive process. Continuous strategic planning provides the foundation for the most important things an organization does each day, and fosters informed communication between the organization and those affected by or interested in the organization’s activities. We appreciate the fact that strategic plans, once issued, are living documents that require continual assessment. 
In an ongoing strategic planning process, there is an understandable temptation to delay issuing a strategic plan until the plan is considered perfect and all information has been collected, analyzed, and incorporated into the plan. However, failure to complete an initial strategic plan makes it difficult for decision makers to identify and assess NCR’s first strategic goals, objectives, priorities, measures, and funding needs, and how resources can be leveraged across the region as events warrant. We continue to recommend that the Secretary of DHS work with the NCR jurisdictions to quickly complete a coordinated strategic plan to establish regional goals and priorities. - - - - - That concludes my statement, Mr. Chairman. I would be pleased to respond to any questions you or other members of the Committee may have. For questions regarding this testimony, please contact William O. Jenkins, Jr. at (202) 512-8757, email jenkinswo@gao.gov. Sharon L. Caudle also made key contributions to this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Congress asked GAO to provide comments on the National Capital Region's (NCR) strategic plan. GAO reported on NCR strategic planning, among other issues, in May 2004 and September 2004, testified before the House Committee on Government Reform in June 2004, and testified before the Subcommittee on Oversight of Government Management, the Federal Workforce, and the District of Columbia in July 2005. In this testimony, we addressed completion of the NCR strategic plan, national and regional priorities, and strengthening any plan that is developed. Among its other statutory responsibilities, the Office of National Capital Region Coordination (ONCRC) is charged with coordinating with NCR agencies and other entities to ensure adequate planning, information sharing, training, and execution of domestic preparedness activities among these agencies and entities. In May 2004 and again in July 2005, we recommended that the ONCRC complete a regional strategic plan to establish goals and priorities for enhancing first responder capacities that could be used to guide the effective use of federal funds. Although work has continued on a NCR strategic plan for the past 2 years, a completed plan is not yet available. According to NCR officials, completion of the plan requires integrating information and analyses from other documents completed or nearly completed, and a plan may not be available before September or October of 2006. The NCR's strategic planning should reflect both national and regional priorities and needs. The majority of the individual documents ONCRC provided to us as representing components for its strategic plan were developed in response to Department of Homeland Security fiscal year 2006 grant guidance to support the NCR's fiscal year 2006 grant application. It is appropriate and necessary that the NCR address national priorities, but the NCR's strategic plan should not be primarily driven by these requirements. 
It should integrate national and regional priorities and needs. A well-defined, comprehensive strategic plan for the NCR is essential for assuring that the region is prepared for the risks it faces. A November 18, 2005, NCR PowerPoint presentation describes the NCR's vision, mission, goals, objectives, and priority initiatives. That presentation includes some elements of a good strategic plan, including some performance measures, target dates, and cost estimates. A completed NCR strategic plan should build on the current elements that the NCR has developed and strengthen others based on the desirable characteristics of a national strategy that may also be useful for a regional approach to homeland security strategic planning. As it completes its strategic plan, the NCR could focus on strengthening (1) initiatives that will accomplish objectives under the NCR strategic goals, (2) performance measures and targets that indicate how the initiatives will accomplish identified strategic goals, (3) milestones or timeframes for initiative accomplishment, (4) information on the resources and investments for each initiative, and (5) organizational roles, responsibilities, and coordination, and integration and implementation plans.
When Social Security was enacted in 1935, the nation was in the midst of the Great Depression. About half of the elderly depended on others for their livelihood, and roughly one-sixth received public charity. Many had lost their savings. Social Security was created to help ensure that the elderly would have adequate retirement incomes and would not have to depend on welfare. It would provide benefits that workers had earned because of their contributions and those of their employers. When Social Security started paying benefits, it responded to an immediate need to bolster the income of the elderly. The Social Security benefits that early beneficiaries received significantly exceeded their contributions, but even the very first beneficiaries had made some contributions. Initially, funding Social Security benefits required relatively low payroll taxes because very few of the elderly had earned benefits under the new system. Increases in payroll taxes were always anticipated to keep up with the benefit payments as the system matured and more retirees received benefits. Virtually from the beginning, Social Security was financed on this type of pay-as-you-go basis, with any single year’s revenues collected primarily to fund that year’s benefits. The Congress had rejected the idea of advance funding for the program, or collecting enough revenues to cover future benefit rights as workers accrued them. Many expressed concern that if the federal government amassed huge reserve funds, it would find a way to spend them. Over the years, both the size and scope of the program have changed, and periodic adjustments have been necessary. In 1939, coverage was extended to dependents and survivors. In the 1950s, state and local governments were given the option of covering their employees. The Disability Insurance program was added in 1956. 
Beginning in 1975, benefits were automatically tied to the Consumer Price Index to ensure that the purchasing power of benefits was not eroded by inflation. These benefit expansions led to higher payroll tax rates in addition to the increases stemming from the maturing of the system. Moreover, the long-term solvency of the program has been reassessed annually. Changes in demographic and economic projections have required benefit and revenue adjustments to maintain solvency, such as the amendments enacted in 1977 and 1983. Profound demographic trends are now contributing to Social Security’s long-term financing shortfall. As a share of the total U.S. population, the elderly population grew from 7 percent in 1940 to 13 percent in 1996; this share is expected to increase further to 20 percent by 2050. As it ages, the baby-boom generation will increase the size of the elderly population. However, other demographic trends are at least as important. Life expectancy has increased continually since the 1930s, and further improvements are expected. Moreover, the fertility rate has declined from 3.6 children per woman in 1960 to around 2 children per woman today and is expected to level off at about 1.9 by 2020. Combined, increasing life expectancy and falling fertility rates mean that fewer workers will be contributing to Social Security for each aged, disabled, dependent, or surviving beneficiary. While 3.3 workers support each Social Security beneficiary today, only 2 workers are expected to be supporting each beneficiary by 2030. As a result of these demographic trends, Social Security revenues are expected to be about 14 percent less than expenditures over the next 75-year period, and demographic trends suggest that this imbalance will grow over time. By 2030, the Social Security trust funds are projected to be depleted. 
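The pressure these demographic trends place on a pay-as-you-go system can be illustrated with a simple identity: if each year's payroll taxes fund that year's benefits, the required tax rate is roughly the average benefit (as a share of the average wage) divided by the number of workers per beneficiary. The sketch below is purely illustrative — the 40 percent replacement rate is a hypothetical assumption, not an SSA projection — but it shows mechanically why a drop from 3.3 to 2 workers per beneficiary widens the financing gap.

```python
# Illustrative pay-as-you-go arithmetic (hypothetical numbers, not SSA projections).
# Under pure pay-as-you-go financing, each year's payroll taxes fund that
# year's benefits, so the required tax rate is approximately:
#   tax_rate = (average benefit / average wage) / (workers per beneficiary)

def payg_tax_rate(replacement_rate: float, workers_per_beneficiary: float) -> float:
    """Payroll tax rate needed to fund benefits on a pay-as-you-go basis."""
    return replacement_rate / workers_per_beneficiary

REPLACEMENT_RATE = 0.40  # hypothetical: benefits average 40% of wages

rate_today = payg_tax_rate(REPLACEMENT_RATE, 3.3)  # 3.3 workers per beneficiary
rate_2030 = payg_tax_rate(REPLACEMENT_RATE, 2.0)   # 2 workers per beneficiary

print(f"Required rate at 3.3 workers/beneficiary: {rate_today:.1%}")
print(f"Required rate at 2.0 workers/beneficiary: {rate_2030:.1%}")
```

Holding benefits and wages constant, the same benefit levels that about 12 percent of payroll could fund at 3.3 workers per beneficiary would require about 20 percent at 2 workers per beneficiary — the mechanism underlying the projected shortfall.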
From then on, Social Security revenues are expected to be sufficient to pay only about 70 to 75 percent of currently promised benefits, given currently scheduled tax rates and the Social Security Administration’s (SSA) intermediate assumptions about demographic and economic trends. In 2031, the last members of the baby-boom generation will reach age 67, when they will be eligible for full retirement benefits under current law. Restoring Social Security’s long-term solvency will require some combination of increased revenues and reduced expenditures. A variety of options are available within the current structure of the program, such as raising the retirement age, reducing inflation adjustments, increasing payroll tax rates, and investing trust fund reserves in higher-yielding securities. However, some proposals would go beyond restoring long-term solvency and would fundamentally alter the program structure by setting up individual retirement savings accounts and requiring workers to contribute to them. Retirement income from these accounts would usually replace a portion of Social Security benefits. Some proposals would attempt to produce a net gain in retirement income. The combination of mandated savings deposits and revised Social Security taxes would be greater than current Social Security taxes, in most cases. Helping ensure adequate retirement income has been a fundamental goal of Social Security. While Social Security was never intended to guarantee an adequate income, it provides an income base upon which to build. Virtually all reform proposals also pay some attention to “income adequacy,” but some place a different emphasis on it relative to the goal of “individual equity,” which seeks to ensure that benefits bear some relationship to contributions. Some proponents of reform believe that increasing the role of individual retirement savings could improve individual equity without diminishing income adequacy. 
The current Social Security program seeks to ensure adequate retirement income in various ways. First, it makes participation mandatory, which guards against the possibility that some people would not otherwise save enough to have even a minimal retirement income. Reform proposals also generally make participation mandatory. Second, the current Social Security benefit formula redistributes income from high earners to low earners to help keep low earners out of poverty. It accomplishes this by replacing a larger share of lifetime earnings for low earners and a smaller share for high earners. In addition, Social Security helps ensure adequate income by providing benefits for dependent and surviving spouses and children who may not have the work history required to earn adequate benefits. Also, it automatically ensures that the purchasing power of benefits keeps pace with inflation, unlike most employer pension plans or individually purchased annuities. While the Social Security benefit formula seeks to ensure adequacy by redistributing income, it also promotes some degree of individual equity by ensuring that benefits are at least somewhat higher for workers with higher lifetime earnings. In helping ensure adequate retirement income, Social Security has contributed to reducing poverty among the elderly. (See fig. 1.) Since 1959, poverty rates for the elderly have dropped by two-thirds, from 35 percent to less than 11 percent in 1996. While poverty rates for the elderly were once higher than those for children and for working-age adults (aged 18 to 64), they are now lower than the rates for either group. For more than half the elderly, income other than Social Security was less than the poverty threshold in 1994. While Social Security provides a strong foundation for retirement income, it is only a foundation. In 1994, it provided an average of roughly $9,200 to all elderly households. Median Social Security benefits have historically been very close to the poverty threshold.
Elderly households with below-average income rely heavily on Social Security, which provided 80 percent of income for 40 percent of elderly households in 1994. (See fig. 2.) One in seven elderly Americans has no income other than Social Security. Pockets of poverty remain. Women, minorities, and persons aged 75 and older are much more likely to be poor than other elderly persons. For example, compared with 11 percent for all elderly persons (aged 65 and older) in 1996, poverty rates were 23 percent for all elderly women living alone, roughly 25 percent for elderly blacks and Hispanics, and 31 percent for black women older than 75. Unmarried women make up more than 70 percent of poor elderly households, although they account for only 45 percent of all elderly households. Proposals that would increase the extent to which workers save for their own retirement would reduce income redistribution because any contributions to individual accounts that would otherwise go to Social Security would not be available for redistribution. Still, proponents of individual accounts assert that virtually all retirees would be at least as well off as they are now and that such reforms would improve individual equity. Citing historical investment returns, they argue that the rates of return that workers could earn on their individual retirement savings would be much higher than the returns they implicitly earn under the current system and that their retirement incomes could be higher as a result. Nevertheless, earning such higher returns would require investing in riskier assets such as stocks. Income adequacy under such reforms would depend on how workers invest their savings and whether they actually earn higher returns. It would also depend on what degree of Social Security coverage and its income redistribution would remain after reform. 
In addition to examining the effects of reform proposals on all retirees generally, attention should be paid to how they affect specific subpopulations, especially those that are most vulnerable to poverty, including women, widows, minorities, and the very old. Reform proposals vary considerably in their effects on such subpopulations. For example, since men and women typically have different earnings histories, life expectancies, and investment behaviors, reforms could exacerbate differences in benefits that already exist. An individual savings approach that permits little redistribution would on average generate smaller savings balances at retirement for women, who tend to have lower earnings from both employment and investments, and these smaller balances would need to last longer because women have longer life expectancies. The balance between income adequacy and individual equity also influences how much risk and responsibility are borne by individuals and the government. Workers face a variety of risks regarding their retirement income security. These include individually based risks, such as how long they will be able to work, how long they will live, whether they will be survived by a spouse or other dependents, how much they will earn and save over their lifetimes, and how much they will earn on retirement savings. Workers also face some collective risks, such as the performance of the economy and the extent of inflation. Different types of retirement income embody different ways of assigning responsibility for these risks. Social Security was based on a social insurance model in which the society as a whole through the government largely takes responsibility for all these risks to help ensure adequate income. This tends to minimize risks to the individuals and in the process lowers the rate of return they implicitly earn on their retirement contributions. 
Social Security provides income to workers who become disabled and to workers who reach retirement, for as long as they live, and to their spouses and dependents. The government takes responsibility for collecting and managing the revenues needed to pay benefits. By redistributing income, Social Security helps protect workers against low retirement income that stems from low lifetime earnings. Social Security pays a pension benefit that is determined by a formula that takes lifetime earnings into account. This type of pension is called a defined benefit pension. Many employer pensions are also defined benefit pensions. These pensions help smooth out variations in benefit amounts that can arise from year to year because of economic fluctuations. Defined benefit pension providers assume investment risks and some of the economic risks and take responsibility for investing and managing pension funds and ensuring that contributions are adequate to fund promised benefits. In contrast, defined contribution pensions, such as 401(k) accounts, base retirement income solely on the amount of contributions made and interest earned. Such pensions resemble individual savings. Retirement savings by individuals place virtually all the risk and responsibility on individuals but give them greater freedom and control over their income. Under reform proposals that increase the role of individual savings, the government role would primarily be to make sure that workers contribute to their retirement accounts and to regulate the management of those accounts. Workers would be responsible for choosing how to invest their savings and would assume the investment and economic risks. Some proposals would allow workers to invest only in a limited number of “indexed” investment funds, which, like some mutual funds, are managed so that they mirror the performance of market indexes such as the Standard & Poor's 500.
Some proposals would require workers to buy an annuity at retirement, while others would place few restrictions on how workers use their funds in retirement. Social Security places relatively greater emphasis on adequacy and less on individual equity by providing a way for all members of society to share all the risks. An individual retirement savings approach places relatively less emphasis on adequacy and more on individual equity by making retirement income depend more directly on each person’s contributions and management of the funds. Reform proposals that would increase the role of individual savings would change the overall mix of different types of retirement income and with it the relative emphasis on adequacy and individual equity embodied by that mix. In addition to changing the relative roles of Social Security and individual savings, such Social Security reform could indirectly affect other sources of retirement income and related public policies. For example, raising Social Security’s retirement age or cutting its benefit amounts could affect employer pensions. Some employers pay supplements to their pensions until retirees start to receive Social Security income, or they set their pension benefits relative to Social Security’s. Employers might terminate their pension plans rather than pay increased costs. Reforms would also interact with other income support programs such as Social Security’s Disability Insurance or the Supplemental Security Income public assistance program. For example, raising the retirement age could lead more older workers to apply for Social Security’s disability benefits, if they qualify, because those benefits would be greater than retirement benefits. No matter what shape Social Security reform takes, restoring long-term solvency will require some combination of benefit reductions and revenue increases.
Within the current program structure, examples of possible benefit reductions include modifying the benefit formula, raising the retirement age, and reducing cost-of-living adjustments. Revenue increases might take the form of increases in the payroll tax rate, expanding coverage to include the relatively few workers who are still not covered under Social Security, or allowing the trust funds to be invested in potentially higher-yielding securities such as stocks. Reforms that increase the role of individual retirement savings would also involve Social Security benefit reductions or revenue increases, which might take slightly different forms. For example, such reforms might include Social Security benefit reductions to offset any contributions that are diverted from the current program or permitting workers to invest their retirement savings in stocks. The choice among various benefit reductions and revenue increases will affect the balance between income adequacy and individual equity. Benefit reductions could pose the risk of diminishing adequacy, especially for specific subpopulations. Both benefit reductions and tax increases that have been proposed could diminish individual equity by reducing the implicit rates of return the workers earn on their contributions to the system. In contrast, increasing revenues by investing retirement funds in the stock market could improve rates of return. The choice among various benefit reductions and revenue increases—for example, raising the retirement age—will ultimately determine not just how much income retirees will have but also how long they will be expected to continue working and how long their retirements will be. Reforms will determine how much consumption workers will give up during their working years to provide for more consumption during retirement. Reform proposals have also raised the issue of increasing the degree to which the nation sets aside funds to pay for future Social Security benefits. 
Advance funding could reduce payroll tax rates in the long term and improve intergenerational equity but would involve significant transition costs. As noted earlier, Social Security is largely financed on a pay-as-you-go basis. In a pure pay-as-you-go arrangement, virtually all revenues come from payroll taxes since trust funds are kept to a relatively small contingency reserve that earns relatively little interest compared with the interest that a fully funded system would earn. In contrast, defined benefit employer pensions are generally fully advance funded. As workers accrue future pension benefit rights, employers make pension fund contributions that are projected to cover them. The pension funds accumulate substantial assets that contribute a large share of national saving. The investment earnings on these funds contribute considerable revenues and reduce the size of pension fund contributions that would otherwise be required to pay pension benefits. Defined contribution pensions and individual retirement savings are fully funded by definition, and investment earnings on these retirement accounts also help provide retirement income. Similarly, Social Security reform proposals that increase the role of individual retirement savings would generally increase advance funding. Advance funding is possible in the public sector simply by collecting more revenue than is necessary to pay current benefits. However, advance funding in the public sector raises issues that prompted the Congress to reject advance funding in designing Social Security. A fully funded Social Security program would have trust funds worth trillions of dollars. If the trust funds were invested in private securities, some people would be concerned about the influence that government could have on the private sector. 
If these funds were invested only in federal government securities, as is required under current law, taxpayers would eventually pay both interest and principal to the trust funds and ultimately cover the full cost of Social Security benefits. Moreover, the effect of advance funding in the public sector fundamentally depends on whether the government as a whole is increasing national saving, as discussed further below. If Social Security reforms increase the balances in privately held retirement funds, interest on those funds could eventually help finance retirement income and reduce the system’s reliance on Social Security payroll contributions, which in turn would improve individual equity. At the same time, the relatively larger generation of current workers could finance some of their future benefits now rather than leaving a relatively smaller future generation of workers with the entire financing responsibility. In effect, advance funding shifts responsibility for retirement income from the children of one generation of retirees to that retiree generation itself. However, larger payroll contributions would be required in the short term to build up those fund balances. Social Security would still need revenues to pay benefits that retirees and current workers have already been promised. The contributions needed to fund both current and future retirement liabilities would clearly be higher than those currently collected. Thus, increasing advance funding in any form involves substantial transition costs as workers are expected to cover some portion of both the existing unfunded liability and the liability for their own future benefits. Reform proposals handle this transition in a variety of ways, and the transition costs can be spread out across one or several generations. The nature of specific reform proposals will determine the pace at which advance funding is increased. 
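The contrast between the two financing approaches can be made concrete with a stylized calculation. All of the numbers below are hypothetical — the benefit amount, career length, retirement length, and interest rate are illustrative assumptions, not program figures — but the sketch shows how investment earnings under advance funding reduce the contribution needed to finance the same benefit, which is why the transition (paying for current retirees while also pre-funding one's own benefits) is costly.

```python
# Illustrative comparison of pay-as-you-go vs. advance-funded contributions
# (all amounts and rates are hypothetical assumptions).

def funded_contribution(benefit: float, work_years: int,
                        retire_years: int, r: float) -> float:
    """Level annual contribution, invested at rate r over a career, needed
    to pre-fund a fixed annual benefit paid throughout retirement."""
    # Fund needed at retirement: present value of the benefit stream.
    needed_at_retirement = benefit * (1 - (1 + r) ** -retire_years) / r
    # Level contribution whose accumulated value at retirement equals that fund.
    return needed_at_retirement * r / ((1 + r) ** work_years - 1)

BENEFIT = 10_000       # hypothetical annual benefit per retiree
WORK, RETIRE = 40, 20  # hypothetical career and retirement lengths

# Pay-as-you-go with 2 workers per beneficiary: each worker's taxes
# cover half of one retiree's benefit each year, with no interest earned.
payg = BENEFIT / 2

funded = funded_contribution(BENEFIT, WORK, RETIRE, r=0.05)
print(f"Pay-as-you-go contribution per worker:  ${payg:,.0f}")
print(f"Advance-funded contribution per worker: ${funded:,.0f}")
```

Under these assumptions the advance-funded contribution is a fraction of the pay-as-you-go amount because compound interest does most of the work; during a transition, however, workers must cover something close to both figures at once — the existing unfunded liability plus their own pre-funded benefits.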
For example, one proposal would increase payroll taxes by 1.52 percent for 72 years to fund the transition and would involve borrowing $2 trillion from the public during the first 40 years of the transition to help cover the unfunded liability. Ideally, Social Security reforms would help address the fundamental economic implications of the demographic trends that underlie Social Security’s financing problems. Although people are living longer and healthier lives, they have also been retiring earlier and have been having smaller families. Unless these patterns change, relatively fewer workers will be producing goods and services for a society with relatively more retirees. Economic growth, and more specifically growth in labor productivity, could help ease the strains of providing for a larger elderly population. Increased investment in physical and human capital should generally increase productivity and economic growth, but investment depends on national saving, which has been at historically low levels. Recognizing these economic fundamentals, proponents of increasing the role of individual retirement savings generally observe that a pay-as-you-go financing structure does little to help national saving, and they argue that the advance funding through individual accounts would increase saving. However, reforms would not produce notable increases in national saving to the extent that workers reduce their other saving in the belief that their new accounts can take its place. Social Security reforms might also increase national saving within the current program structure. Advance funding would increase saving, and it could be applied to government-controlled trust funds as well as to individual accounts. Any additional Social Security savings in the federal budget could add to national saving but only if not offset by deficits in the rest of the federal budget. 
More broadly, overall federal budget surpluses or deficits affect national saving since they represent saving or dissaving by the government. To the extent that reforms attempt to increase national saving, they will vary by how much emphasis they place on doing so through individual or government saving. That emphasis will reflect not only judgments about which is likely to be more effective but also values regarding the responsibilities of individuals and governments and attitudes toward the national debt. While these points will be much debated, few dispute the need to be aware of the effect of increasing national saving, although it may be hard to achieve. In some form and to varying degrees, every generation of children has supported its parents’ generation in old age. In economic terms, those who do work ultimately produce the goods and services consumed by those who do not. The Social Security system and, more broadly, the nation’s retirement income policies, whatever shape they take, ultimately determine how and to what extent the nation supports the well-being of the elderly. Restoring Social Security’s long-term solvency presents complex and important choices. These choices include how reforms will balance income adequacy and individual equity; how risks are shared as a community or assumed by individuals; how reforms assign roles and responsibilities among government, employers, and individuals; whether retirements will start earlier or later and how large retirement incomes will be; and how much the nation saves and invests in its capacity to produce goods and services. Whatever reforms are adopted will reflect these fundamental choices implicitly, if not explicitly. This concludes my testimony. I would be happy to answer any questions.
Pursuant to a congressional request, GAO discussed the goals of the social security program and the difficult choices that restoring its long-term solvency will require, focusing on: (1) balancing income adequacy and individual equity; (2) determining who bears risks and responsibilities; (3) choosing among various benefit reductions and revenue increases; (4) using pay-as-you-go or advance funding; and (5) deciding how much to save and invest in the nation's productive capacity. GAO noted that: (1) helping ensure adequate retirement income has been a fundamental goal of social security; (2) virtually all reform proposals also pay some attention to income adequacy, but some place a different emphasis on it relative to the goal of individual equity, which seeks to ensure that benefits bear some relationship to contributions; (3) some proponents of reform believe that increasing the role of individual retirement savings could improve individual equity without diminishing income adequacy; (4) the balance between income adequacy and individual equity also influences how much risk and responsibility are borne by individuals and the government; (5) workers face a variety of risks regarding their retirement income security; (6) no matter what shape social security reform takes, restoring long-term solvency will require some combination of benefit reductions and revenue increases; (7) revenue increases might take the form of increases in payroll tax rate, expanding coverage to include the relatively few workers who are still not covered under social security, or allowing the trust funds to be invested in potentially higher-yielding securities such as stocks; (8) reforms that increase the role of individual retirement savings would also involve social security benefit reductions or revenue increases, which might take slightly different forms; (9) reform proposals have also raised the issue of increasing the degree to which the nation sets aside funds to pay for future 
social security benefits; (10) advance funding could reduce payroll tax rates in the long term and improve intergenerational equity but would involve significant transition costs; (11) in a pure pay-as-you-go arrangement, virtually all revenues come from payroll taxes since trust funds are kept to a relatively small contingency reserve that earns relatively little interest compared with the interest that a fully funded system would earn; (12) in contrast, defined benefit employer pensions are generally fully advance funded; (13) ideally, social security reforms would help address the fundamental economic implications of the demographic trends that underlie social security's financing problems; (14) economic growth, and more specifically growth in labor productivity, could help ease the strains of providing for a larger elderly population; and (15) increased investment in physical and human capital should generally increase productivity and economic growth, but investment depends on national saving, which has been at historically low levels.
AMC, located at Scott Air Force Base, Illinois, is responsible for providing strategic airlift, including air refueling, special air missions, and aeromedical evacuation. As part of that mission, AMC is responsible for tasking 67 C-5 aircraft: 35 stationed at Travis Air Force Base in California and 32 stationed at Dover Air Force Base in Delaware. Unlike other Air Force aircraft, the C-5 is rarely deployed for more than 30 days, since it is primarily used to move cargo from the United States to locations worldwide. As a result, C-5 aircrews are deployed away from home for several weeks and then return to their home station. Other Air Force aircraft, such as the KC-10, can carry cargo but are primarily used to refuel other aircraft and can be deployed to locations around the world for extended periods of time. Since September 11, 2001, C-5 aircrews have been deployed for periods of less than 30 days, generally ranging from 7 to 24 days. Known for its ability to carry oversized and heavy loads, the C-5 can transport a wide variety of cargo, including helicopters and Abrams M1A1 tanks, to destinations worldwide. Recently, C-5s have been used for a variety of missions, including support of presidential travel, contracted movement of materials by other government organizations, training missions, and support of operations Enduring and Iraqi Freedom. In addition, the C-5 can also transport about 70 passengers. The aircrew for a C-5 is composed of two pilots, a flight engineer, and two loadmasters. At Travis Air Force Base there are 439 active duty and 383 reserve aircrew members. At Dover Air Force Base there are 650 active duty and 344 reserve aircrew members. Within the Office of the Secretary of Defense (OSD), the Under Secretary of Defense (Personnel and Readiness) is responsible for DOD personnel policy, including oversight of military compensation.
The Under Secretary of Defense (Personnel and Readiness) leads the Unified Legislation and Budgeting process, established in 1994 to develop and review personnel compensation proposals. As part of this process, the Under Secretary of Defense (Personnel and Readiness) chairs biannual meetings, attended by the principal voting members from the Office of the Under Secretary of Defense (Personnel and Readiness), including the Principal Deputy Under Secretary of Defense (Personnel and Readiness), the Assistant Secretary of Defense (Reserve Affairs), the Assistant Secretary of Defense (Health Affairs), the Office of the Under Secretary of Defense (Comptroller), the Joint Staff, and the services’ Assistant Secretaries for Manpower and Reserve Affairs. In 1963, Congress established the $30-per-month family separation allowance to help offset the additional expenses incurred by the dependents of servicemembers who are away from their permanent duty station for more than 30 consecutive days. According to statements made by members of Congress during consideration of the legislation establishing the family separation allowance, additional expenses could stem from costs associated with home repairs, automobile maintenance, and childcare that could not be performed by the deployed servicemember. Over the years, the eligibility requirements for the family separation allowance have changed. For example, while the family separation allowance was initially authorized for enlisted members in pay grades E-5 and above as well as for enlisted members in pay grade E-4 with 4 years of service, today the family separation allowance is authorized for servicemembers in all pay grades at a flat rate of $250 per month. Servicemembers must apply for the family separation allowance, certifying their eligibility to receive the allowance. The rationale for establishing the 30-day threshold is unknown.
However, DOD officials noted that servicemembers deployed for more than 30 days do not have the same opportunities to minimize household expenses as those who are deployed for less than 30 days. For example, servicemembers who are able to return to their permanent duty locations may perform home repairs and do not have to pay someone to do these tasks for them. The 1963 family separation allowance legislation was divided into two subsections, one associated with overseas duty and one associated with any travel away from the servicemember's home station. The first subsection was intended to compensate servicemembers who were permanently stationed overseas and were not authorized to bring dependents. The second subsection was intended to compensate servicemembers for added expenses associated with their absence from their dependents and permanent duty station for extended periods of time, regardless of whether the members were deployed domestically or overseas. Originally, this aspect of family separation compensation was also to be based on the allowance for living quarters. At that time, members would receive one-third the allowance for living quarters or a flat rate of $30 per month, whichever amount was larger. In July 1963, the Senate heard testimony from DOD officials who generally agreed with the proposed legislation but raised concerns about using the allowance for living quarters as a baseline. Their concerns were related to the complexity of determining the payments and the inequities associated with tying payment to rank. Ultimately, DOD proposed and the Congress accepted a flat rate of $30 per month for eligible personnel. DOD has not identified frequent short-term deployments of less than 30 days as a family separation allowance issue.
No proposals seeking modifications to the family separation allowance because of frequent short-term deployments have been provided to DOD for consideration as part of DOD’s Unified Legislation and Budgeting process, which reviews personnel compensation proposals. Since 1994, a few proposals have been made seeking changes to allowance amounts and eligibility requirements. None of the proposals sought to change the 30-day eligibility threshold. Further, our discussions with OSD, service, and reserve officials did not reveal any concerns related to frequent short-term deployments and the family separation allowance. To analyze concerns that might be raised by those experiencing frequent short-term deployments, we conducted group discussions with Air Force strategic C-5 airlift aircrews at Travis Air Force Base, which we identified as an example of servicemembers who generally deploy for periods less than 30 days. We did not identify any specific concerns regarding compensation received as a result of short-term deployments. We found that the C-5 aircrews were generally more concerned about the high pace of operations and associated unpredictability of their schedules, due to the negative impact on their quality of life, than about qualifying for the family separation allowance. DOD has proposed few changes to the amount of the family separation allowance and no proposals have been submitted to alter the 30-day eligibility threshold. Our review of proposals submitted through DOD’s Unified Legislation and Budgeting process revealed that DOD has considered one proposal to change the amount of the monthly family separation allowance since 1994. In 1997, an increase in the family separation allowance from $75 to $120 was proposed. This provision was not approved by DOD. Since 1994, three modifications to the eligibility criteria have also been proposed. 
In 1994, a proposal was made to allow payment of the family separation allowance for members embarked on board a ship or on temporary duty for 30 consecutive days, whose family members were authorized to accompany the member but voluntarily chose not to do so. The proposal was endorsed by DOD and accepted by Congress. In 2001, DOD considered but ultimately rejected a similar proposal that would have applied to all members who elect to serve an unaccompanied tour of duty. The third proposal sought to modify the use of family separation allowance for joint military couples (i.e. one military member married to another military member). According to a DOD official, while this proposal was not endorsed by DOD, Congress ultimately passed legislation that clarified the use of family separation allowance for joint military couples. The family separation allowance is now payable to joint military couples, provided the members were residing together immediately before being separated by reason of their military orders. Although both may qualify for the allowance, only one monthly allowance may be paid to a joint military couple during a given month. If both members were to receive orders requiring departure on the same day, then payment would be made to the senior member. Overall, C-5 aircrew members and aircrew leadership with whom we met noted that the unpredictability of missions was having more of an adverse impact on crewmembers’ quality of life than the compensation they receive as a result of their deployments. For example, several aircrew members at Travis Air Force Base indicated that over the past two years, they have been called up on very short advance notice, as little as 12 hours, and sent on missions lasting several weeks, making it difficult to conduct personal business or make plans with their families. 
According to the aircrew members and both officer and enlisted leadership with whom we met, the unpredictability of their missions is expected to continue for the foreseeable future due to the global war on terrorism. Officials informed us that the average number of days per month that aircrew members have been deployed has increased since September 11, 2001, with periods of higher activity, or surges. For example, as shown in figure 1, the average number of days in September 2001 that AMC C-5 co-pilots were deployed was 9. Since then, the average number of days per month that C-5 co-pilots were deployed has fluctuated between 12 and 19. Prior to September 2001, available data show a low monthly average of 5 days in January 2001. While the average number of days deployed has fluctuated, aircrew members expressed concern about the intermittent suspension of pre- and post-mission crew rest periods that has coincided with increased operations. Generally, these periods have been intended to ensure that aircrew members have enough rest prior to flying another mission. However, aircrew members noted that crew rest periods also allow them to perform other assigned duties and spend time with their families. During our discussion-group meetings, aircrew members indicated that the rest period after a mission had been reduced from as much as 4 days to as little as 12 hours due to operational needs. In addition to basic compensation, DOD has several special pays and allowances available to further compensate servicemembers deployed for less than 30 days. Servicemembers who are deployed domestically or overseas for less than 30 days may be eligible to receive regular per diem. The per diem amount varies depending upon location. Servicemembers also may be eligible to receive other pays and allowances, such as hazardous duty pay, mission-oriented hardship duty pay, and combat-zone tax exclusions.
However, DOD has not implemented one special allowance designed, in part, to compensate those frequently deployed for short periods. Congress supported DOD's legislative proposal to authorize a monthly high deployment allowance with passage of the National Defense Authorization Act for Fiscal Year 2004. This provision allows the services to compensate their members for lengthy deployments as well as frequent shorter deployments. However, DOD has not set a timetable for establishing criteria to implement this allowance. In addition to basic military pay, servicemembers who are deployed for less than 30 days may also be eligible to receive regular per diem, other special pays and allowances, and tax exclusions (see table 1). When servicemembers are performing temporary duty away from their permanent duty station, they are entitled to per diem, which provides reimbursement for meals, incidental expenses, and lodging. To be eligible, servicemembers must perform temporary duty for more than 12 hours at a location to receive any portion of the per diem rate for that location. The per diem rates are established by the General Services Administration, the State Department, and DOD's Per Diem, Travel, and Transportation Allowance Committee. The rates range from $86 to $284 per day within the continental United States and from $20 to $533 per day when outside the continental United States, depending on whether government meals and lodging are provided. Aircrews can earn various per diem rates during the course of their travel. For example, a typical two-week mission for Travis C-5 aircrew members would take them to Dover Air Force Base, then to Moron, Spain, and then to Baghdad, Iraq. At each of these locations, the aircrews can spend a night, allowing them to accrue the applicable per diem rate for that location.
According to the Air Force, per diem rates for a typical C-5 mission are as follows: $157 for Dover Air Force Base; $235 for Moron, Spain; and $154 for Baghdad, Iraq. In some cases, aircrews may receive only the standard $3.50 per day for incidental expenses; this is the standard per diem rate for servicemembers traveling outside the continental United States when the government can provide meals and lodging. Hostile fire pay and imminent danger pay provide additional compensation for duty performed in designated areas where servicemembers are subject to hostile fire or imminent danger. Both pays are derived from the same statute and cannot be collected simultaneously. Servicemembers are entitled to hostile fire pay, an event-based pay, if they are (1) subjected to hostile fire or the explosion of hostile mines; (2) on duty in an area close to a hostile fire incident and in danger of being exposed to the same dangers actually experienced by other servicemembers subjected to hostile fire or the explosion of hostile mines; or (3) killed, injured, or wounded by hostile fire, the explosion of a hostile mine, or any other hostile action. Imminent danger pay is a threat-based pay intended to compensate servicemembers in specifically designated locations that pose a threat of physical harm or imminent danger due to civil insurrection, civil war, terrorism, or wartime conditions. To be eligible for this pay in a given month, servicemembers must have served some time, even a day or less, in one of the designated zones during that month. The authorized amount for hostile fire and imminent danger pay is $150 per month, although the fiscal year 2003 Emergency Wartime Supplemental Appropriations Act temporarily increased the amount to $225 per month. If Congress takes no further action, the rate will revert to $150 per month in January 2005.
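As a rough illustration of how location-based per diem accrues over a multi-leg mission, the sketch below sums nightly rates using the example figures cited above for Dover, Moron, and Baghdad. The function name, itinerary, and night counts are hypothetical assumptions; the actual per diem rules (partial days, proration, government-provided meals) are considerably more involved.

```python
# Illustrative only: sums location-based per diem for a multi-leg mission.
# Rates are the report's example figures, in dollars per night.

PER_DIEM_RATES = {
    "Dover AFB": 157.0,
    "Moron, Spain": 235.0,
    "Baghdad, Iraq": 154.0,
}
INCIDENTALS_ONLY = 3.50  # standard rate when the government provides meals and lodging

def mission_per_diem(itinerary):
    """Total per diem for a list of (location, nights, government_provided) legs."""
    total = 0.0
    for location, nights, gov_provided in itinerary:
        rate = INCIDENTALS_ONLY if gov_provided else PER_DIEM_RATES[location]
        total += rate * nights
    return total

# A hypothetical mission: 2 nights at Dover, 3 in Moron, 2 in Baghdad
# (government quarters and meals assumed available in Baghdad).
trip = [("Dover AFB", 2, False), ("Moron, Spain", 3, False), ("Baghdad, Iraq", 2, True)]
print(mission_per_diem(trip))  # 157*2 + 235*3 + 3.50*2 = 1026.0
```

The point of the sketch is simply that total per diem depends on where each night is spent and whether government meals and lodging are available there, not on the length of the deployment.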
Mission-oriented hardship duty pay compensates military personnel for duties designated by the Secretary of Defense as hardship duty due to the arduousness of the mission. Mission-oriented hardship duty pay is payable at a monthly rate of up to $300, without prorating or reduction, when the member performs the specified mission during any part of the month. DOD has set this pay at a flat rate of $150 per month. Active and Reserve component members who qualify at any time during the month receive the full monthly mission-oriented hardship duty pay, regardless of the period of time on active duty or the number of days they receive basic pay during the month. This pay is currently available only to servicemembers assigned to, on temporary duty with, or otherwise under the Defense Prisoner of War/Missing Personnel Office, the Joint Task Force-Full Accounting, or the Central Identification Lab-Hawaii. Hardship duty includes missions such as locating and recovering the remains of U.S. servicemembers from remote, isolated areas including, but not limited to, areas in Laos, Cambodia, Vietnam, and North Korea. The combat-zone tax exclusion provides exclusion from federal income tax, as well as income tax in many states, for servicemembers serving in a presidentially designated combat zone or in a statutorily established hazardous duty area for any period of time. For example, although the C-5 aircrews at Travis and Dover Air Force Bases do not serve in a designated combat zone for an extended period of time, many of the missions that they fly may be within areas designated for combat-zone tax exclusion eligibility. Enlisted personnel and warrant officers may exclude from federal income tax all military compensation earned in any month in which they perform active military service in a combat zone or qualified hazardous duty area.
For commissioned officers, compensation is free of federal income tax up to the maximum amount of enlisted basic pay plus any imminent danger pay received. DOD has not established criteria defining what constitutes frequent deployments, nor has it determined eligibility requirements, in order to implement the high deployment allowance. DOD sought significant modifications to high deployment compensation through a legislative proposal to the National Defense Authorization Act for Fiscal Year 2004. Congress had established a high deployment per diem as part of the National Defense Authorization Act for Fiscal Year 2000. Pursuant to statutorily granted authority, on October 8, 2001, DOD waived application of the high deployment compensation in light of the ongoing military response to the terrorist attacks of September 11, 2001. After implementing the waiver authority, DOD sought legislative changes to the high deployment compensation in an effort to better manage deployments. DOD's proposal sought, among other things, to (1) change high deployment compensation from a per diem rate to a monthly allowance, (2) reduce the dollar amount paid so that it was more in line with other special pays (e.g., hostile fire pay), and (3) allow DOD to recognize lengthy deployments as well as frequent deployments. The National Defense Authorization Act for Fiscal Year 2004 reflects many of DOD's proposed changes. The act changed the $100 per diem payment into an allowance not to exceed $1,000 per month. To help compensate those servicemembers who are frequently deployed, the act established a cumulative 2-year eligibility threshold not to exceed 401 days. Also, the act provided the Secretary of Defense with the authority to prescribe a cumulative threshold lower than 401 days.
Depending upon where the Secretary of Defense establishes the cumulative threshold, servicemembers, such as the C-5 aircrews, serving multiple short-term deployments may be compensated through the high deployment allowance. Once a servicemember's total days deployed exceed the established cumulative threshold, the member is to be paid a monthly allowance, not to exceed $1,000, beginning the following month. From that point forward, the servicemember will continue to qualify for the allowance as long as the total number of days deployed during the previous 2-year period exceeds the cumulative threshold established by the Secretary of Defense. The high deployment allowance is in addition to per diem and any other special pays and allowances for which the servicemember might qualify. Moreover, the servicemember does not have to apply for the allowance, as the act mandated that DOD track and monitor days deployed and make payment accordingly. Finally, DOD may exclude specified duty assignments from eligibility for the high deployment allowance (e.g., sports teams or senior officers). According to DOD officials, this provision also provides additional flexibility in targeting the allowance to selected occupational specialties by allowing DOD to exclude all occupations except those that it wishes to target for additional compensation because of retention concerns. The Senate report accompanying the bill that amended the high deployment provision encouraged DOD to promptly implement these changes. However, DOD officials told us that a timetable for establishing the criteria necessary to implement the high deployment allowance has not been set. Although we could not ascertain exactly why DOD had not taken action to implement the high deployment allowance, OSD officials informed us that the services had difficulty reaching agreement on what constitutes a deployment for purposes of the high deployment payment.
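The cumulative-threshold rule described above can be expressed as a simple check. The sketch below is a hypothetical illustration, assuming the Secretary of Defense sets a threshold at or below the 401-day statutory ceiling and that days deployed over the prior 2-year window are already tallied (as the act requires DOD to track); the function name and error handling are assumptions, not DOD's implementation.

```python
# Hedged sketch of the high deployment allowance eligibility rule: a member
# qualifies whenever days deployed over the preceding 2-year window EXCEED
# the cumulative threshold set by the Secretary of Defense (capped at 401).

STATUTORY_CAP_DAYS = 401      # cumulative 2-year threshold may not exceed this
MAX_MONTHLY_ALLOWANCE = 1000  # allowance may not exceed $1,000 per month

def qualifies_for_allowance(days_deployed_last_2_years, threshold=STATUTORY_CAP_DAYS):
    """True once cumulative days deployed exceed the prescribed threshold."""
    if not 0 < threshold <= STATUTORY_CAP_DAYS:
        raise ValueError("threshold must be between 1 and the 401-day statutory cap")
    return days_deployed_last_2_years > threshold

# Many short deployments can cross the threshold just as one long tour can:
print(qualifies_for_allowance(401))                 # False: must exceed, not merely meet
print(qualifies_for_allowance(420))                 # True at the statutory cap
print(qualifies_for_allowance(150, threshold=120))  # True under a lower SecDef threshold
```

The last case shows why the lower-threshold authority matters for short-term deployers: under a 120-day threshold, a crew accumulating many sub-30-day missions could qualify even though no single deployment is lengthy.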
The family separation allowance is directed at enlisted servicemembers and officers whose dependents incur extra expenses when the servicemember is deployed for more than 30 consecutive days. We found no reason to question the eligibility requirements that have been established for DOD's family separation allowance. We believe that no basis exists to change the 30-day threshold, as no problem has been identified with the family separation allowance. Further, servicemembers who deploy for less than 30 days may be eligible to receive additional forms of compensation resulting from their deployment, such as per diem, other special pays and allowances, and tax exclusions. Since the terrorist attacks of September 11, 2001, some servicemembers have experienced more short-term deployments. Given the long-term nature of the global war on terrorism, this increase in the frequency of short-term deployments is expected to continue for the foreseeable future. DOD will need to ensure adequate compensation for servicemembers using all available special pays and allowances in addition to basic pay. While the aircrews with whom we met did not express specific concerns about compensation, they, like other servicemembers, are concerned about quality-of-life issues. The high deployment allowance could help to address such issues for servicemembers, while helping to mitigate DOD's possible long-term retention concerns. Also, unlike the family separation allowance, the high deployment allowance could be used to compensate servicemembers regardless of whether or not they have dependents. Although the Senate report accompanying the bill that amended the high deployment provision encouraged DOD to promptly implement these changes, the Secretary of Defense has not taken action to implement the high deployment allowance.
We recommend that the Secretary of Defense direct the Deputy Under Secretary of Defense (Personnel and Readiness), in concert with the Service Secretaries and the Commandant of the Marine Corps, to take the following three actions: set a timetable for establishing criteria to implement the high deployment allowance; define, as part of the criteria, what constitutes frequent short-term deployments within the context of the cumulative day requirement as stated in the high deployment allowance legislation; and determine, as part of the criteria, eligibility requirements targeting the high deployment allowance to selected occupational specialties. In written comments on a draft of this report, DOD partially concurred with our recommendations that it set a timetable for establishing criteria to implement the high deployment allowance and define what the criteria should include. While DOD agreed that servicemembers should be recognized with additional pay for excessive deployments, it stated that it has not implemented the high deployment allowance because it views the allowance as a peacetime authority. Further, DOD stated that since we are in a wartime posture, it is more difficult to control the pace of deployments than during peacetime. DOD's response noted that it has elected to exercise the waiver given to it by Congress to suspend the entitlement for reasons of national security. DOD also noted that it has encouraged the use of other flexible pay authorities to compensate servicemembers who are away from home for inordinate periods. Finally, DOD stated that it would reassess the use of the high deployment allowance at some point in the future. We do not believe that the nation's current wartime situation prevents DOD from taking our recommended actions, the first of which is to set a timetable for establishing criteria to implement the high deployment allowance.
We did recognize in our report that, pursuant to statutorily granted authority, on October 8, 2001, DOD waived application of the high deployment allowance in light of the ongoing military response to the terrorist attacks of September 11, 2001. However, since then, DOD has sought legislative modifications, reflected in the National Defense Authorization Act for Fiscal Year 2004, to gain more flexibility to better manage high deployment compensation. These additional flexibilities include providing DOD with the opportunity to tailor the allowance to meet current or expected needs. Since the purpose of special pays and allowances is primarily to help retain more servicemembers, the high deployment allowance could be used as another compensation tool to help retain servicemembers during a time of war. As our report clearly states, given the expectations for a long-term commitment to the war on terrorism, developing the criteria for implementing the high deployment allowance would provide DOD with an additional option for compensating those military personnel who are frequently deployed for short periods of time. Regarding DOD's use of other flexible pay authorities to compensate servicemembers who are away from home for inordinate periods, the examples DOD cites for lengthy or protracted deployments in Iraq, Afghanistan, and Korea are not applicable to those servicemembers deployed for less than 30 days, the focus of this review. Finally, the vagueness of when and how the high deployment allowance will be implemented runs contrary to the congressional direction, which encouraged DOD to promptly implement the new high deployment allowance authority. Based on DOD's response, it is not clear when DOD intends to develop criteria to implement the high deployment allowance. We recommended that DOD set a timetable for establishing criteria to implement the high deployment allowance, not that DOD implement the allowance immediately.
We believe that this recommendation is warranted, since establishing the criteria will make it possible for DOD to implement the high deployment allowance quickly, whenever it is deemed appropriate and necessary. DOD’s comments are reprinted in their entirety in appendix I. DOD also provided technical comments, which we have incorporated as appropriate. To assess the rationale for family separation allowance eligibility requirements, including the rationale for the 30-day threshold, we reviewed the legislative history concerning the family separation allowance and analyzed DOD policies implementing this pay. We also interviewed officials in the offices of the Under Secretary of Defense (Personnel and Readiness); the Secretaries of the Army, Navy, and Air Force; the Commandant of the Marine Corps; the Air National Guard; and the Air Force Reserve. To determine the extent to which DOD had identified frequent short-term deployments as a family separation allowance issue, we reviewed proposals submitted through DOD’s Unified Legislation and Budgeting process. We met with compensation representatives from the Office of the Under Secretary of Defense (Personnel and Readiness) and each of the services. We interviewed officials with the Defense Manpower Data Center and the Defense Finance and Accounting Service. We sought to use DOD’s database for tracking and monitoring deployments to determine the extent of servicemembers experiencing frequent deployments lasting less than 30 days. We were not able to use the database for the purposes of our report to discern the number of deployments by location lasting less than 30 days, since more than 40 percent of the data for location was not included in the database. In addition, the database did not contain information related to some types of non-deployment activities (e.g. training), which we deemed important to our review. 
We focused our study on the Air Force since the fiscal year 2003 Secretary of Defense Annual Report to the President and Congress showed that the Air Force was the only service whose members were deployed less than 30 days on average in fiscal year 2002. Further, through discussions with Air Force officials, we identified strategic aircrews managed by the AMC as examples of those who would most likely be experiencing short-term deployments. We visited AMC, where we met with officials from personnel, operations, finance, and the tactical airlift command center. To understand the views of one group of short-term deployers, we visited Travis Air Force Base in California, where we met with officer and enlisted leadership for the C-5 and KC-10 aircraft. We held discussion groups with 12 officers and 12 enlisted aircrew members from each aircraft, for a total of 48 aircrew members. We visited Dover Air Force Base in Delaware, where we met with C-5 officer and enlisted leadership. We also met with officials representing the personnel, operations, and finance offices at both Travis and Dover Air Force Bases. We assessed the reliability of AMC C-5 copilot deployment data, as well as data contained in the fiscal year 2003 Secretary of Defense Annual Report to the President and Congress. GAO's assessment consisted of (1) reviewing existing information about the data and the systems that produced them, (2) examining the electronic data for completeness, and (3) interviewing agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purposes of this report. To assess what special pays and allowances are available, in addition to basic compensation, to further compensate servicemembers deployed for less than 30 days, we identified special pays and allowances that do not have a time eligibility factor through DOD's Military Compensation Background Papers, legislative research, and discussions with OSD officials.
We reviewed the legislative history regarding recent legislative changes to special pays and allowances and how DOD has implemented these changes. We are sending copies of this report to the Secretary of Defense; the Under Secretary of Defense (Personnel and Readiness); the Secretaries of the Army, the Air Force, and the Navy; the Commandant of the Marine Corps; and the Director, Office of Management and Budget. We will also make copies available to appropriate congressional committees and to other interested parties on request. In addition, the report will be available at no charge at the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please call me at (202) 512-5559 or Brenda S. Farrell at (202) 512-3604. Major contributors to this report were Aaron M. Adams, Kurt A. Burgeson, Ann M. Dubois, Kenya R. Jones, and Ronald La Due Lake. 1. The purpose of our congressionally directed review was to assess the special pays and allowances available to DOD that could be used to compensate servicemembers who are frequently deployed for less than 30 days. Consequently, our scope did not include an assessment of compensation for servicemembers serving lengthy, or protracted, deployments of 30 days or more. We found that DOD has available and is using several special pays and allowances, in addition to basic compensation, to further compensate servicemembers deployed for less than 30 days. However, we also found that DOD has one special allowance, the high deployment allowance, that is not available to provide further compensation to servicemembers who frequently deploy for less than 30 days and that DOD has not set a timetable to establish criteria to implement the allowance. During our review, we could not ascertain exactly why DOD had not taken action to develop criteria for implementing the high deployment allowance. 
During several discussions, OSD officials stated that the services had difficulty reaching agreement on what constitutes a deployment for the purposes of the high deployment payment. DOD’s response to our draft report noted that it has elected to exercise the waiver given to it by Congress to suspend the high deployment allowance for reasons of national security. We recognized this waiver in our report. We also noted that after DOD waived application of the high deployment payment on October 8, 2001, DOD sought legislative modifications of the high deployment payment that would give it more flexibilities to better manage this type of compensation. Congress granted DOD these flexibilities and encouraged DOD to promptly implement these changes. As noted in our report, given the expectations for a long-term commitment to the war on terrorism, developing the criteria for implementing the new high deployment allowance would provide DOD with an additional option for compensating those military personnel who are frequently deployed for short periods of time. Also, the high deployment allowance, unlike the family separation allowance, could be used to compensate servicemembers regardless of whether or not they have dependents and thus would reach more servicemembers. Military Personnel: Active Duty Compensation and Its Tax Treatment. GAO-04-721R. Washington, D.C.: May 7, 2004. Military Personnel: Observations Related to Reserve Compensation, Selective Reenlistment Bonuses, and Mail Delivery to Deployed Troops. GAO-04-582T. Washington, D.C.: March 24, 2004. Military Personnel: Bankruptcy Filings among Active Duty Service Members. GAO-04-465R. Washington, D.C.: February 27, 2004. Military Pay: Army National Guard Personnel Mobilized to Active Duty Experienced Significant Pay Problems. GAO-04-89. Washington, D.C.: November 13, 2003. Military Personnel: DOD Needs More Effective Controls to Better Assess the Progress of the Selective Reenlistment Bonus Program. GAO-04-86. 
Washington, D.C.: November 13, 2003. Military Personnel: DFAS Has Not Met All Information Technology Requirements for Its New Pay System. GAO-04-149R. Washington, D.C.: October 20, 2003. Military Personnel: DOD Needs More Data to Address Financial and Health Care Issues Affecting Reservists. GAO-03-1004. Washington, D.C.: September 10, 2003. Military Personnel: DOD Needs to Assess Certain Factors in Determining Whether Hazardous Duty Pay Is Warranted for Duty in the Polar Regions. GAO-03-554. Washington, D.C.: April 29, 2003. Military Personnel: Preliminary Observations Related to Income, Benefits, and Employer Support for Reservists During Mobilizations. GAO-03-573T. Washington, D.C.: March 19, 2003. Military Personnel: Oversight Process Needed to Help Maintain Momentum of DOD's Strategic Human Capital Planning. GAO-03-237. Washington, D.C.: December 5, 2002. Military Personnel: Management and Oversight of Selective Reenlistment Bonus Program Needs Improvement. GAO-03-149. Washington, D.C.: November 25, 2002. Military Personnel: Active Duty Benefits Reflect Changing Demographics, but Opportunities Exist to Improve. GAO-02-935. Washington, D.C.: September 18, 2002.
GAO’s Web site (www.gao.gov) contains abstracts and full- text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to e-mail alerts” under the “Order GAO Products” heading.
The fiscal year 2004 National Defense Authorization Act directed GAO to assess the special pays and allowances for servicemembers who are frequently deployed for less than 30 days, and to specifically review the family separation allowance. GAO's objectives were to assess (1) the rationale for the family separation allowance eligibility requirements, including the required duration of more than 30 consecutive days away from a member's duty station; (2) the extent to which DOD has identified short-term deployments as a family separation allowance issue; and (3) what special pays and allowances, in addition to basic military compensation, are available to compensate members deployed for less than 30 days. In 1963, Congress established the family separation allowance to help offset the additional expenses that may be incurred by the dependents of servicemembers who are away from their permanent duty station for more than 30 consecutive days. Additional expenses may include the costs associated with home repairs, automobile maintenance, and childcare that could have been performed by the deployed servicemember. Over the years, the eligibility requirements for the family separation allowance have changed. Today, the family separation allowance is authorized for officers and enlisted in all pay grades at a flat rate. The rationale for establishing the 30-day threshold is unknown. DOD has not identified frequent short-term deployments as a family separation allowance issue. No proposals seeking modifications to the family separation allowance because of frequent short-term deployments have been provided to DOD for consideration as part of DOD's Unified Legislation and Budgeting process, which reviews personnel pay proposals. Further, DOD officials were not aware of any specific concerns that have been raised by frequently deployed servicemembers about their eligibility to receive the family separation allowance. 
Based on group discussions with Air Force strategic airlift aircrews, who were identified as examples of those most likely to be experiencing short-term deployments, we did not identify any specific concerns regarding the lack of family separation allowance compensation associated with short-term deployments. Rather, many aircrew members indicated that the high pace of operations and the associated unpredictability of their schedules were a greater concern because of their negative impact on quality of life. In addition to basic military compensation, DOD has several special pays and allowances to further compensate servicemembers deployed for short periods. Servicemembers who are deployed for less than 30 days may be eligible to receive regular per diem. The per diem amount varies depending upon location. For example, these rates range from $86 to $284 per day within the United States and from $20 to $533 per day when outside the United States. However, DOD has not implemented the high deployment allowance designed, in part, to compensate those frequently deployed for shorter periods. Congress supported DOD's legislative proposal to authorize a monthly high deployment allowance. This allowance permits the services to compensate members for lengthy as well as frequent shorter deployments. The most recent amendment to this provision provides DOD with the authority to adjust a cumulative day threshold to help compensate servicemembers experiencing frequent short deployments. DOD has flexibility to exclude all occupations except those that it wishes to target for additional pay. However, DOD has not established criteria to implement this allowance, nor has DOD set a timetable for establishing such criteria.
A military medical surveillance system that collects, analyzes, and disseminates health information facilitates DOD’s ability to intervene in a timely manner to address health care problems experienced by military personnel. DOD believes such a system is one of the principal means to ensure a fit and healthy force and to prevent disease and injuries from degrading warfighting capabilities. Based on our review of the Presidential Advisory Committee and the Institute of Medicine reports and discussions with DOD officials, for the purposes of this report we identified four major elements of a military medical surveillance system, as shown in table 1. The Presidential Advisory Committee and the Institute of Medicine investigations into the causes of illnesses experienced by Gulf War veterans confirmed the need for effective medical surveillance capabilities. Research efforts to determine the causes of what has become known as veterans’ Gulf War illnesses have been hampered by incomplete medical surveillance data on (1) the names and locations of personnel deployed to the Persian Gulf, (2) exposure of personnel to environmental health hazards, (3) changes in the health status of personnel deployed in the theater, and (4) records of immunizations and other health services provided to the individuals while deployed. In essence, the data available were poorly suited to support epidemiological and health outcome studies related to veterans’ Gulf War illnesses. For over 2 years, DOD officials have been working to develop a DOD-wide joint medical surveillance directive and instruction that establish policy and assign responsibility for improving DOD’s medical surveillance for deployments. The intent of the policy is to expand the concept of medical surveillance during deployments to a more comprehensive approach for monitoring and assessing the health consequences related to servicemembers’ participation in deployments.
We reviewed this draft policy and found that it addresses the types of medical surveillance problems experienced during the Gulf War—the lack of personnel deployment information and medical assessments, the failure to monitor environmental and disease health threats, and the failure to meet record-keeping requirements. Specifically, the draft policy instruction assigns responsibilities as follows:
- Assigns to the Defense Manpower Data Center (DMDC) the responsibility for collecting and maintaining information, available for dissemination on a daily basis, on each servicemember deployed to a theater, the length of time the servicemember was deployed, and the exact location within the theater of that member’s unit.
- Specifies that the Commander in Chief (CINC) and the Joint Task Force (JTF) Surgeon deploy technically specialized units with the capability and expertise required to identify infectious and environmental diseases, make health hazard assessments, and do advanced diagnostic testing.
- Requires the military services and the CINCs to conduct predeployment medical assessments, to include assessing mental health and drawing blood samples.
- Requires the CINC Surgeon and the JTF Surgeon to conduct postdeployment medical assessments at the time of redeployment or within 30 days of final departure, to include assessing mental health and drawing blood samples. For both the predeployment and the postdeployment medical assessments, the policy calls for the assessment forms to be forwarded to a single office within DOD for centralized collection purposes and to allow future analyses.
- Directs the CINC Surgeon and the JTF Surgeon to ensure that medical records are accurately kept and health-related events are documented during deployment. Specifically suggested are records of predeployment and postdeployment assessments and all health interventions (which would include all immunizations).
The draft directive and implementing instruction are currently under review by various offices within DOD. DOD officials expect the directive and instruction to be issued by September 1997. The responsible offices are required to develop the necessary implementing documents within 180 days of the directive’s effective date. While DOD was still developing its joint medical surveillance policy for deployments, the Assistant Secretary of Defense for Health Affairs issued, in January 1996, a medical surveillance plan for U.S. forces deploying to Bosnia-Herzegovina, Croatia, and Hungary under Operation Joint Endeavor. This medical surveillance plan encompassed the concepts under consideration in the draft joint policy, was developed by a triservice working group, and was coordinated by the Joint Staff with the services. It was designed to reflect the lessons learned from the Gulf War and to address the potential health risks in the Bosnian theater. According to DOD officials, this DOD-wide, centrally managed medical surveillance plan was the first DOD had developed for a deployment of U.S. forces. The strategy for implementing the plan was determined by the service Surgeons General, the Joint Staff, and the European Command Surgeon. Using the four major elements of a military medical surveillance system described earlier, we examined DOD’s and the services’ implementation of the Operation Joint Endeavor medical surveillance plan. The ability to identify the population at risk is an essential part of an effective military medical surveillance system. It is important to know which servicemembers deployed to the theater and where they were located within the theater during the deployment. This information is needed to facilitate monitoring and analysis of how changes in the servicemembers’ health status are related to various environmental, biological, chemical, or other health threats.
Our review indicated that DOD continues to experience problems with its capability to track the population at risk during deployments. In researching the Persian Gulf War illnesses, the Institute of Medicine and the Presidential Advisory Committee reported that inaccurate information on the location of servicemembers in the theater presented problems in identifying exposures to various health threats. Both recommended that DOD improve its ability to track the location of units in the theater. DOD established systems to identify the location of units during the Gulf War; however, the research groups reported that their use for epidemiological studies was limited because the systems did not provide information at the individual servicemember level. During the Gulf War, servicemembers frequently did not remain with their units. DOD established a system, used in Operation Joint Endeavor, to identify which servicemembers deployed to the theater. The services are required to supply deployment data to the DMDC in Monterey, California, which is responsible for maintaining a database on those servicemembers who are deployed. In determining the extent to which the services had done the required postdeployment medical assessments, we used the Army’s deployment data and found no errors in the data on which servicemembers had deployed. However, DOD officials expressed concerns about the accuracy of the deployment database for Air Force and Navy personnel. Air Force officials told us that the Air Force had supplied information to DMDC on servicemembers it planned to deploy. These servicemembers were added to the DMDC database, but many never actually deployed. We were also told that the Navy’s personnel deployment data were inaccurate because elements of two construction battalions (at least 200 servicemembers) that deployed to Operation Joint Endeavor do not appear in the DMDC database.
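Auditing the database accuracy problems described above amounts to comparing two rosters: the members the central database says deployed versus the members confirmed as actually having deployed. A minimal sketch in Python, using hypothetical member IDs that do not reflect any actual DMDC data format:

```python
# Hypothetical identifiers; the central database held planned deployers
# supplied by the services, while actual deployment was confirmed separately.
dmdc_roster = {"A100", "A101", "A102", "N200"}          # central database
confirmed_deployers = {"A100", "A101", "N200", "N201"}  # verified in theater

# Members in the database who never actually deployed (the Air Force case).
never_deployed = dmdc_roster - confirmed_deployers

# Members who deployed but are missing from the database (the Navy case).
missing_from_database = confirmed_deployers - dmdc_roster

print(sorted(never_deployed))        # ['A102']
print(sorted(missing_from_database)) # ['N201']
```

Both discrepancy lists would be empty if the database were accurate; either kind of mismatch undermines later epidemiological use of the roster.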
DOD officials told us that they have also frequently heard concerns about the accuracy of the deployment database and met in mid-March 1997 with representatives from the services, DMDC, and other offices to discuss ways to correct the problems. While the DMDC database provides information on which units and which personnel within those units deploy to a theater, DOD has not yet developed a system for accurately tracking the movement of individual servicemembers in units within the theater. This capability is important for accurately identifying exposures of servicemembers to health hazards in the theater. A military medical surveillance program should contain mechanisms for identifying the potential health and environmental hazards that deploying troops will encounter in the theater. Such information can then be used to develop effective preventive countermeasures and identify those exposed to these threats. During the Gulf War, DOD did little prospective monitoring of environmental health threats in the theater and had no systematic means of tracking and centrally reporting the occurrence of diseases and nonbattle injuries during the war. In its 1996 report, the Institute of Medicine recommended that, in preparing for deployments, DOD should monitor the environment for possible health threats and prepare for rapid response and investigation and collect accurate data on exposures to those threats in the theater of operations. Prior to deployments, DOD identifies diseases/illnesses common to the environment in the theater and informs medical personnel and deploying troops on ways to avoid or protect themselves from these diseases/illnesses. According to DOD officials, a predeployment assessment of potential health hazards in the Operation Joint Endeavor theater indicated that diseases such as tick-borne encephalitis, hemorrhagic fever, typhus, and Lyme disease could be problems.
A tick-borne encephalitis vaccine was offered to those military personnel who might be in danger of contracting the disease because of their proximity to ticks. In addition, troops were advised on ways to best protect themselves from the other diseases, and medical personnel were instructed to be particularly alert for symptoms that might indicate that a servicemember had one of the diseases/conditions. Of the potential diseases/illnesses identified, only one case of hemorrhagic fever was diagnosed, and the patient was successfully treated. The establishment in 1994 of the U.S. Army Center for Health Promotion and Preventive Medicine (USACHPPM) has been a major enhancement to DOD’s ability to perform environmental monitoring and tracking since the Gulf War. This capability was augmented in October 1995 with the establishment of the 520th Theater Army Medical Laboratory. This laboratory is a deployable public health laboratory that can provide environmental sampling and analysis in theater. The sampling results can then be used to determine what specific preventive measures and safeguards should be taken to protect troops from harmful exposures and to develop procedures to treat anyone exposed to health hazards. Early in the planning for Operation Joint Endeavor, the Armed Forces Medical Intelligence Center identified potential environmental health threats in Bosnia-Herzegovina as coming primarily from exposures to air, water, and soils contaminated by hazardous industrial waste. In recognition of these potential threats, the Army laboratory was sent to Bosnia-Herzegovina to assist deployed preventive medicine units and to monitor environmental health hazards. While the laboratory was preparing for the mission, USACHPPM deployed an advance monitoring team to the theater in January 1996 to begin sampling the soil and water in the Tuzla area, where most of the U.S. forces were to be located. 
The laboratory arrived on-site in February 1996 and began conducting more extensive air, water, soil, and other environmental monitoring. In June 1996, USACHPPM augmented the laboratory’s efforts with additional air monitoring stations at nine regional locations in the theater where troops were concentrated. Through January 14, 1997, 2,564 air, water, and soil samples were taken, from which more than 112,000 reportable analyses were done. The results of the sampling indicated that no significant health risks were posed from the water, air, or soil in the theater but that prudent field sanitation measures should be taken. The information USACHPPM obtains through its air, soil, and water sampling is entered into a database, which is then linked with DMDC’s information on the units deployed to the theater. Using mapping data obtained from the National Imagery and Mapping Agency, USACHPPM analysts can then identify which units, if any, are in the most danger of exposure to environmental contaminants. Using this method, which was developed in response to the Gulf War oil fires, and which USACHPPM refers to as its Geographical Information System, DOD can calculate the degree of risk to specific units at specific theater locations and recommend preventive actions, as necessary. On a retrospective basis, USACHPPM can also identify which units in the theater might have been exposed to other types of health threats, such as chemical, biological, or contagious disease threats. However, the troop location information is available only down to the unit level; information on specific locations of individuals within given units is still not available. During the Gulf War, DOD did not systematically track, monitor, and report the types and numbers of diseases and nonbattle injuries experienced by servicemembers.
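The unit-level exposure linkage described above can be illustrated as a simple proximity query: each unit's recorded position is checked against a hazard site flagged by environmental sampling. This is only a sketch; the unit names, flat-grid coordinates, and danger radius are all hypothetical, and a real geographic information system would use proper geodetic data rather than a flat grid:

```python
import math

# Hypothetical unit positions (km on a local flat grid) and a contaminated
# sampling site identified by environmental monitoring.
unit_positions = {"1st Engineer Bn": (2.0, 3.0), "2nd Support Bn": (40.0, 41.0)}
hazard_site = (1.0, 2.0)
DANGER_RADIUS_KM = 5.0  # assumed exposure threshold, not an actual DOD value

def units_at_risk(units, site, radius):
    """Return the units located within the given radius of a hazard site."""
    sx, sy = site
    return sorted(
        name
        for name, (x, y) in units.items()
        if math.hypot(x - sx, y - sy) <= radius
    )

print(units_at_risk(unit_positions, hazard_site, DANGER_RADIUS_KM))
```

Because position data exist only at the unit level, a query like this can flag a unit as potentially exposed but cannot say which individuals within it were present, which is the limitation the report notes.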
Recognizing that such information would be useful, DOD’s Joint Staff mandated in January 1993 that weekly reports on the rates of diseases and nonbattle injuries be provided to appropriate commanders during all deployments. This is being done during Operation Joint Endeavor. A major purpose of the program is to detect diseases and nonbattle injuries before they become major outbreaks that limit the services’ capabilities to carry out their missions. The weekly reports are categorized into 15 different areas such as respiratory problems, orthopedic injuries, and unexplained fevers. Miscellaneous/administrative visits can also be reported to track immunizations, prescription refills, physical examinations, laboratory tests, and follow-up visits. The data are summarized into theater-wide illness and injury trends so that preventive measures can be identified, and the summaries are forwarded to appropriate theater/field commanders to alert them to any abnormal trends or to actions that should be taken. DOD officials believe the predeployment assessment of environmental health hazards, the environmental sampling, and the medical surveillance monitoring done during Operation Joint Endeavor have enabled better tracking and medical troop surveillance than that available during the Gulf War. In addition, they believe the capabilities now available through USACHPPM and the Army laboratory, capabilities that were not available during the Gulf War, have greatly improved DOD’s ability to monitor and track environmental threats and exposures. Military medical surveillance should include the identification of changes in the health status of servicemembers during and after a deployment. Baseline information on the status of servicemembers’ health before they deploy is highly desirable in determining whether their health status changed during a deployment.
Predeployment and postdeployment medical assessments, including blood samples, provide for a comparison from which postdeployment epidemiological analyses can be done. Collecting and maintaining a centralized database of such medical assessment data also facilitate such analyses. During the Gulf War, the absence of data on servicemembers’ health, including both baseline health information and postdeployment health status information, greatly complicated the epidemiological research done by the Institute of Medicine and the Presidential Advisory Committee following the war. DOD’s medical surveillance plan did not require the collection of baseline health status information on servicemembers who deployed during Operation Joint Endeavor. Rather, the services were required to follow their existing service requirements for ensuring that all personnel were medically fit for deployment. Initially, in developing the medical surveillance plan, DOD officials considered collecting a predeployment blood sample for all deploying servicemembers. However, this approach was not followed, according to DOD officials, because (1) DOD already had blood samples that had been drawn during the services’ periodic testing for the Human Immunodeficiency Virus (HIV), (2) many servicemembers had already deployed when the collection was being discussed, and (3) the collection of blood samples would have been logistically difficult. DOD officials considered the blood samples drawn for the HIV testing to be acceptable baseline samples. Our review, however, found that predeployment blood samples were not available for many servicemembers who deployed under Operation Joint Endeavor and that many of the blood samples in the repository for servicemembers who deployed were quite old.
More specifically, data from USACHPPM, which oversees the blood repository, show that predeployment blood samples are not available for 2,476 (9.3 percent) of the 26,621 servicemembers who had deployed to Bosnia-Herzegovina as of March 12, 1996. Also, the data show that 9,266 (38.4 percent) of the 24,145 available predeployment blood samples were more than 24 months old, and 1,544 (6.4 percent) were more than 5 years old. DOD’s draft medical surveillance policy requires a new blood sample to be drawn prior to a servicemember’s deployment when the last blood sample is over a year old. Therefore, the age of these blood samples raises questions as to their reliability as predeployment baseline samples. Postdeployment medical assessments were required for servicemembers who deployed to Bosnia-Herzegovina, Croatia, and Hungary. However, based on our review of both the Deployment Surveillance Team’s database and servicemembers’ medical records, we concluded that the required assessments were not done for many Army personnel. Moreover, in those instances where postdeployment medical assessments were done, they were done much later than required. For those deployed under Operation Joint Endeavor, two postdeployment medical assessments were to be done—one assessment was to be done in theater shortly before the servicemembers redeployed to their home station and the other at the home station within 30 days of leaving the theater. The assessments consist of a series of questions, answered by the servicemember, covering the member’s general health status. After completion by the servicemember, a health care provider was required to review the responses to the questions and refer the servicemember for further evaluation, if appropriate.
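The repository figures cited above are straightforward ratios of the sample counts; a quick check in Python, using only the numbers from the text:

```python
total_deployed = 26_621       # servicemembers deployed as of the cutoff date
no_predeploy_sample = 2_476   # deployed members with no sample on file
samples_available = total_deployed - no_predeploy_sample  # 24,145
over_24_months = 9_266        # available samples more than 24 months old
over_5_years = 1_544          # available samples more than 5 years old

def pct(part, whole):
    """Percentage rounded to one decimal place, as reported in the text."""
    return round(100 * part / whole, 1)

print(pct(no_predeploy_sample, total_deployed))  # 9.3
print(pct(over_24_months, samples_available))    # 38.4
print(pct(over_5_years, samples_available))      # 6.4
```

Note that the 38.4 and 6.4 percent figures are computed against the 24,145 members who had a sample on file, not against the full deployed population.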
At the time of the in-theater postdeployment medical assessment, medical personnel were required to collect a blood sample and send it to the central blood repository in the United States. If this blood sample was not collected during the in-theater postdeployment medical assessment process, it was to be collected at the time of the home unit postdeployment medical assessment. Postdeployment requirements also included administering a battery of mental health questionnaires designed to identify servicemembers needing further psychological evaluation. Tuberculin skin tests were also required at the servicemembers’ home stations shortly after the 90-day point following departure from the theater. Tuberculosis was considered a potential health threat in the theater. Our review of the Deployment Surveillance Team’s database for the 6,624 Army personnel in our universe requiring medical assessments indicated that 43 percent of the personnel had not received the required in-theater postdeployment medical assessment, 82 percent had not received the home unit postdeployment medical assessment, and 41 percent did not have a postdeployment blood sample drawn. Only 429 servicemembers (6.5 percent) met all three requirements—the in-theater and home unit postdeployment medical assessments and a postdeployment blood sample drawn and in storage. We also found that 1,889 (28.5 percent) had not met any of the three requirements. The Deployment Surveillance Team’s database does not collect information on the extent to which the tuberculin tests are done at the home unit. During our review of the medical documentation for 618 servicemembers in 12 selected Army units requiring postdeployment medical assessments, we found no evidence that the required medical assessments were conducted for many servicemembers.
More specifically, as shown in table 2, about 24 percent did not receive the in-theater postdeployment medical assessment, 21 percent did not receive the home unit postdeployment medical assessment, 34 percent did not have a postdeployment blood sample drawn, and 32 percent did not receive the required tuberculin test. Of the 618 servicemembers whose medical records we reviewed, only 206, or one-third, had met all four requirements—the in-theater medical assessment, the home unit medical assessment, the tuberculin test, and a postdeployment blood sample drawn. Conversely, 20 (about 3 percent) of the 618 servicemembers had not met any of the four requirements. Different reasons were cited for the lack of (1) in-theater medical assessments and (2) home unit medical assessments and tuberculin tests. According to Army medical officials in Germany, the in-theater problem was due to the lack of a centralized out-processing mechanism for redeploying personnel, whereas the home unit problem was due to unit commanders not giving enough emphasis to the medical assessment requirements. More specifically, the U.S. Army Europe (USAREUR) Surgeon attributed the lack of in-theater medical assessments for Army personnel redeploying to their home units before August 1996 to the lack of a fully functioning central out-processing point to ensure that redeploying personnel received the required assessments. Beginning in August 1996, all Army personnel redeploying to their home unit from Bosnia-Herzegovina, Croatia, and Hungary were required to go through an intermediate staging base in Hungary, where medical assessments were done. The USAREUR Surgeon believes that compliance with the requirement for in-theater medical assessments would be higher for redeployments occurring after the staging base became operational. We did not validate whether these improvements, in fact, occurred.
Officials with several medical units responsible for the Army units we reviewed told us that they have no direct authority over unit personnel to require them to obtain the postdeployment medical assessments or tuberculin tests. They must rely on unit commanders to require their personnel to go to the medical clinic for the assessments. Further, home unit medical assessments and the tuberculin test, when done, were frequently done much later than required. The home unit postdeployment medical assessments are required to be conducted within 30 days of servicemembers’ departure from the theater. The 30-day time frame was established to ensure that the required medical assessments are done soon after servicemembers return to their home unit and, from an epidemiological standpoint, to make it easier to associate any medical problems with the members’ service while deployed. As shown in table 3, most of the home unit medical assessments that were completed for the selected 12 Army units were done much later than the required 30 days—averaging 98 days following departure from the theater. Similarly, the tuberculin tests, required shortly after the 90-day point following the members’ departure from the theater, were also done late when they were done at all—an average of 142 days. Such delays in doing the home unit medical assessments, particularly if the assessment also involves the drawing of a postdeployment blood sample, pose concerns regarding epidemiological analyses. With such delays, it is much more difficult to isolate which health problems were attributable to members’ service during deployments and which were contracted after their return to home stations. Also, the delay in doing the assessments could delay the referral of the servicemember for further evaluation and treatment based on the medical assessment. Our review of medical records may have resulted in more medical assessments being done than would otherwise have occurred.
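The timeliness findings above come down to computing, for each servicemember, the days elapsed between theater departure and the home unit assessment, then flagging those beyond the 30-day requirement. A sketch with hypothetical dates (the records below are invented for illustration):

```python
from datetime import date

REQUIRED_WITHIN_DAYS = 30  # home unit assessment deadline under the plan

# Hypothetical (theater departure, home unit assessment) date pairs.
records = [
    (date(1996, 6, 10), date(1996, 9, 16)),
    (date(1996, 7, 1), date(1996, 7, 25)),
    (date(1996, 8, 15), date(1997, 1, 30)),
]

# Days elapsed between departure and assessment for each member.
delays = [(assessed - departed).days for departed, assessed in records]

# Assessments completed later than the 30-day requirement.
overdue = [d for d in delays if d > REQUIRED_WITHIN_DAYS]

print(delays)   # [98, 24, 168]
print(overdue)  # [98, 168]
```

The same computation against a 90-day threshold would flag late tuberculin tests; the report's figures of 98 and 142 days are averages of such per-member delays.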
In fact, we were told that our planned review of medical records in Germany, which was announced in December 1996, encouraged certain units to complete their home unit postdeployment medical assessments and tuberculin tests in anticipation of our arrival. Four of the 12 units (units A, C, K, and L) completed over 80 percent of the required home unit postdeployment medical assessments and tuberculin tests in January and February 1997, even though the servicemembers had returned to their home units 5 to 8 months earlier. These late completions explain much of the timeliness problem discussed earlier for these units. As shown in table 4, the percentage of Army personnel who did not have the home unit postdeployment medical assessment and the tuberculin test was much higher as of December 31, 1996, before our medical records review—44.5 percent rather than 20.6 percent for home unit postdeployment medical assessments and 58.7 percent rather than 31.9 percent for tuberculin tests. A complete and accurate database is needed to effectively monitor the extent to which required medical assessments are done. The medical surveillance plan includes provisions for the centralized collection and maintenance of a database for the in-theater and home unit postdeployment medical assessments done for servicemembers deployed under Operation Joint Endeavor. The medical units processing the in-theater and home unit medical assessments are required to send copies of the assessment forms to DOD’s Deployment Surveillance Team. The team uses the data to prepare statistical reports on how well the medical assessment program is being implemented. We tested the completeness of the surveillance team’s centralized database for the in-theater and home unit postdeployment medical assessments conducted for the 618 servicemembers whose medical records we reviewed.
We found that the database was incomplete for both assessments—considerably understating the number of home unit medical assessments done. More specifically, the database omitted 57 (12 percent) of the 473 in-theater medical assessments done and 174 (52 percent) of the 332 home unit medical assessments done for the 618 servicemembers whose medical records we reviewed. Complete and accurate medical records documenting all medical care for the individual servicemember are essential for the delivery of high quality medical care. They are also important for epidemiological analyses following military deployments. The Presidential Advisory Committee and the Institute of Medicine reported problems concerning the completeness and accuracy of medical record-keeping during the Gulf War. During the Gulf War, interactions between the deployed forces and medical care providers in the theater were frequently not recorded in servicemembers’ permanent medical records. This problem was particularly common for immunizations given in the theater. The Institute of Medicine characterized DOD’s and the Department of Veterans Affairs’ medical records systems as fragmented, disorganized, and incomplete. Under the Operation Joint Endeavor medical surveillance plan, postdeployment in-theater and home unit medical assessment forms are required to be included in servicemembers’ permanent medical records. Similarly, Army regulations require documentation in servicemembers’ permanent medical records of all immunizations received in theater and visits made by servicemembers to health units such as battalion aid stations. Because the tick-borne encephalitis vaccine is classified by the Food and Drug Administration as an investigational drug, specific requirements apply for documenting its use in servicemembers’ medical records. We tested the completeness of the permanent medical records for selected Army active duty servicemembers who had deployed under Operation Joint Endeavor.
Our review disclosed that many of the medical records were incomplete regarding documentation reflecting that (1) in-theater medical assessments were conducted, (2) servicemembers had received the tick-borne encephalitis vaccine, and (3) visits had been made by servicemembers to battalion aid stations. All of these documentation problems pertain to medical care in the theater. Regarding postdeployment medical assessments, we found that 91 (19 percent) of the 473 servicemembers with a postdeployment in-theater medical assessment and 9 (1.8 percent) of the 491 servicemembers with a postdeployment home unit medical assessment did not have the assessments documented in their medical records. USAREUR Surgeon officials attributed these documentation problems to the practice of allowing servicemembers to hand-carry the in-theater assessment forms to their home unit for insertion into their permanent medical records. The officials said the assessment forms were frequently lost. We noted that such documentation problems occurred less frequently for the home unit medical assessments because they were done at the home unit and as such did not need to be forwarded from the theater to the servicemembers’ home units. During the deployment to Bosnia, servicemembers deploying to regions with a threat of tick-borne encephalitis were given the choice of being vaccinated with the investigational vaccine. To determine whether the medical records included documentation of servicemembers receiving the vaccine, we obtained a list from the U.S. Army Medical Research Institute of Infectious Diseases (USAMRIID) of servicemembers who received the vaccine and reviewed 588 medical records of servicemembers in selected Army units shown as having received the vaccine. As shown in table 5, 141 (24 percent) of these servicemembers’ permanent medical records did not document the vaccinations.
To test the completeness of the permanent medical records for visits made to battalion aid stations by servicemembers while deployed to Bosnia-Herzegovina during Operation Joint Endeavor, we selected 50 entries from the sign-in logs for three battalion aid stations and reviewed those members’ medical records for documentation of the visit. As shown in table 6, about 29 percent of the battalion aid station visits were not documented in the members’ permanent medical records. Army medical officials pointed out that servicemembers had deployed to the theater with only an abstract of their permanent medical records and that any medical documentation generated in the theater should have been routed back to the servicemembers’ home units for inclusion in their medical records, but in many instances, this did not occur. They also mentioned that permanent medical records are still essentially kept in a paper-based system and are therefore subject to having information misfiled or lost. To address medical documentation problems, the Presidential Advisory Committee recommended that DOD direct its attention toward computerizing its theater medical records. An Assistant Surgeon General of the Army also told us that he believes the solution to such documentation problems is the development of a deployable computerized patient record. DOD has a project underway with the goal of having a paperless, filmless computerized medical record for every servicemember, while on active duty, by fiscal year 2000. Further objectives of the project are to standardize medical record-keeping DOD-wide; ensure that medical record information is complete, accurate, and available when needed; and prevent active duty members with disqualifying conditions from being deployed. 
In addition, plans call for the computerized medical record to document and update the baseline health status of each active duty member, support the recognition of deployment-related illnesses, and provide a mechanism for reporting the medical readiness of the active duty force. Recognizing that DOD’s paper-based medical records are not sufficient to support the growing interest in epidemiology driven by the Gulf War experience, the project officials recommended the development of some type of electronic mechanism to capture health service data for each active duty member at all echelons of care during military operations. Several options for obtaining and recording the necessary information are being considered, but the basic concept involves providing each servicemember with a computerized card or tag that can receive and store computerized health information. When the member reports to a medical unit for care, the card can be updated with the member’s complaint, diagnosis, and treatment (including X-rays). This information would be collected by computer and reported to a central location by the medical unit to allow for overall summarization of medical problems and treatments in a given theater. Long-term recommendations of project officials call for deploying a triservice computerized patient record throughout DOD by fiscal year 2000. Also recommended is the establishment of linkages to external systems through the inclusion of a global positioning history for each individual. Such a record could support the geographical location history developed and being refined by USACHPPM and assist in prospective or retrospective data analysis of factors such as chemical/biological risk exposures to specific troops in the theater. In December 1996, the CINC, U.S. Central Command, issued guidance that included medical surveillance requirements for all forces deployed in Southwest Asia. This guidance is similar to the medical surveillance plan for Operation Joint Endeavor. 
While implementation of the medical surveillance plan for Southwest Asia began only recently in January 1997, a Joint Staff official told us the plan is being implemented. The official said that an epidemiology team and the Navy’s forward medical laboratory were deployed to the theater to provide on-site medical surveillance. In addition, the official said that predeployment and postdeployment medical assessments are being conducted for the servicemembers in the Southwest Asia theater. We did not test, however, the services’ implementation of the Southwest Asia medical surveillance requirements. DOD officials told us that they delayed issuing a specific medical surveillance plan for Southwest Asia because DOD was developing a joint medical surveillance policy that would cover such deployments. However, when the time required to develop a joint policy took longer than expected, the Joint Staff encouraged the CINC (U.S. Central Command) to issue specific medical surveillance requirements for the deployment. Prior to the issuance of the December 1996 guidance, DOD had conducted some medical surveillance activities, including environmental sampling, in the Southwest Asia theater but had not required medical assessments and postdeployment blood samples for servicemembers deployed there. We believe that the delay in requiring medical assessments and postdeployment blood samples raises concerns, given that U.S. forces have been deployed to this region continuously since the end of the Gulf War and many veterans who served in this region began to complain of medical problems soon after the end of the conflict. Overall, DOD has taken initiatives to overcome the medical surveillance problems experienced during the Gulf War. It is evident that positive steps have been taken to establish a joint policy that will emphasize the importance of medical surveillance and provide for a more uniform approach for doing such surveillance in future deployments. 
DOD’s recent experience in Operation Joint Endeavor, during which it tried to institute corrective policies and processes to overcome problems experienced during the Gulf War, provides lessons learned that DOD can apply in its ongoing efforts to develop a DOD-wide joint medical surveillance policy. However, the joint policy has been under development for over 2 years. Some of the problems we found in implementing the medical surveillance during Operation Joint Endeavor—the failure to assess all servicemembers’ health in theater and after return to their home units and to consistently document medical care provided in theater—raise serious questions about DOD’s ability to effectively implement medical surveillance policies during another high-conflict deployment like the Gulf War. We recognize that complete record-keeping may be more difficult during times of high intensity combat activities; however, complete record-keeping is still necessary for an effective medical surveillance system. In light of the problems discussed in this report, we recommend that the Secretary of Defense direct the Assistant Secretary of Defense for Health Affairs, along with the military services, the Joint Chiefs of Staff, and the Unified Commands, as appropriate, to

- complete expeditiously and implement a DOD-wide policy on medical surveillance for all major deployments of U.S. forces, using lessons learned during Operation Joint Endeavor and the Gulf War;
- develop procedures to ensure that medical surveillance policies are implemented, including emphasizing (a) the need for unit commanders to ensure that all servicemembers receive the required medical assessments in a timely manner and (b) the need for medical personnel to maintain complete and accurate medical records; and
- develop procedures for providing accurate and complete medical assessment information to the centralized database. 
We also recommend that the Secretary of Defense direct the Deputy Under Secretary of Defense for Requirements and Resources to investigate the completeness of information in the DMDC personnel deployment database and take corrective actions to ensure that the deployment information is accurate for servicemembers who deploy to a theater. In commenting on a draft of this report, DOD agreed with the accuracy of the report. It agreed that substantial improvements in medical surveillance and record-keeping were needed based on the Gulf War experience and that some improvements in these areas were applied in the deployment to Bosnia. Likewise, DOD stated that it will apply the lessons from the Bosnia deployment to refine its policy for future medical surveillance during deployments. DOD concurred with each of our four recommendations and stated that with the support of the services, the Chairman of the Joint Chiefs of Staff, and the intelligence community, it will aggressively work to continue to make improvements. For example, DOD stated that, in August 1997, it will disseminate the DOD instruction and directive establishing a DOD-wide policy on medical surveillance. DOD also indicated that it has reviewed its master personnel database deficiencies and developed recommendations to improve its ability to maintain accurate information on servicemembers who deploy. DOD indicated that on February 10, 1997, a message was sent to all unified commanders reemphasizing the importance of a comprehensive medical surveillance program to ensure force readiness and sustainment. DOD noted that it has standardized predeployment and postdeployment questionnaires and has started an automation initiative to enhance accuracy of the centralized database. We believe these initiatives, if properly implemented, could greatly enhance the medical surveillance program. 
However, DOD’s response did not indicate what its specific procedures will be for institutionalizing these efforts to ensure that all medical surveillance requirements will be met. For example, further procedural improvements would be needed to routinely monitor units’ compliance with the medical surveillance requirements and periodically evaluate the accuracy and completeness of the centralized database. DOD’s comments are presented in appendix II. We are sending copies of this report to the Chairmen and Ranking Minority Members, Senate and House Committees on Appropriations; the Secretaries of Defense, the Army, the Navy, and the Air Force; and the Chairman, Joint Chiefs of Staff. Copies will also be made available to others upon request. Please contact me at (202) 512-5140 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix III. For this report, we interviewed officials and obtained pertinent documentary evidence from officials at the Office of the Assistant Secretary of Defense for Health Affairs; the Joint Staff; and the Offices of the Surgeons General at Army, Navy, and Air Force Headquarters in Washington, D.C. We also interviewed and obtained documents from officials at the Department of Defense’s (DOD) Deployment Surveillance Team and the Persian Gulf Illness Investigation Team at Falls Church, Virginia, and from the U.S. Army Center for Health Promotion and Preventive Medicine at Aberdeen Proving Ground, Maryland; the Institute of Medicine’s Medical Follow-up Agency; the Presidential Advisory Committee on Gulf War Veterans’ Illnesses; the Defense Manpower Data Center in Monterey, California; the U.S. European Command Surgeon’s Office; the U.S. Army Europe Surgeon’s Office; and the U.S. Air Force Europe Surgeon’s Office. 
To assess the extent to which the required medical assessments, described above, were conducted, we (1) obtained information from the DOD Deployment Surveillance Team’s database in Falls Church, Virginia, and (2) reviewed the medical records for active duty servicemembers in 12 selected Army units in Germany who deployed to Operation Joint Endeavor. To determine the overall status of DOD’s efforts to implement its Operation Joint Endeavor medical surveillance policy, in January 1997, we requested the Deployment Surveillance Team to provide us with information from its database showing those servicemembers in units that deployed to and spent at least 30 days in the countries of Bosnia-Herzegovina, Croatia, and Hungary from the start of Operation Joint Endeavor and had returned to their home units by August 31, 1996. The cutoff date was selected to provide sufficient time for units to forward in-theater and home unit assessment forms and blood samples to the United States and have that information entered into the team’s database. The team then extracted data from its database showing which of these servicemembers had received the required assessments and had a postdeployment blood sample in storage at the central blood repository. This information showed each service’s overall compliance with the Operation Joint Endeavor medical surveillance assessment requirements. After obtaining this information, we decided to limit our review of servicemembers’ medical records to selected Army units because the Army is the largest participant among the services in Operation Joint Endeavor. To select the Army units from which we would review servicemembers’ medical records, we requested the Deployment Surveillance Team to sort the deployment data by unit, rank-ordered by the units with the largest number of personnel requiring postdeployment medical assessments, without regard to the unit’s rate of compliance with the requirements. 
We then selected the 12 units in Germany with the largest numbers of personnel requiring medical assessments. These selected units provided a range of different types of units and were located in multiple locations in central Germany. At the responsible medical unit for the selected units, we requested the medical records for those servicemembers on the Deployment Surveillance Team list who required medical assessments to be done. We reviewed the medical records for those servicemembers who were still in the unit and whose medical records were not currently in use by the medical unit at the time of our review. In reviewing these 618 medical records, we determined whether the record included (1) an in-theater medical assessment form, (2) a home unit medical assessment form, and (3) documentation that the required tuberculin test had been done. To determine whether servicemembers who had received the tick-borne encephalitis vaccine had this documented in their medical records, we obtained a list from the U.S. Army Medical Research Institute of Infectious Diseases of all servicemembers who had received one or more doses of the vaccine in units who deployed during Operation Joint Endeavor. From this list, we selected five units located in Germany and reviewed 588 servicemembers’ medical records to determine whether the medical records documented the vaccinations. To determine whether servicemembers’ visits to Army battalion aid stations were documented in the members’ permanent medical records, we selected three battalion aid stations that deployed to Bosnia-Herzegovina during Operation Joint Endeavor and selected 50 entries from each battalion aid station’s sign-in patient logs. We then reviewed the medical records of those servicemembers to determine whether the visits had been documented. 
To ensure that we did not overlook any of the appropriate documentation in the medical records during our examinations, the unit medical staff reviewed all of those records in which we could not find required documentation and verified that our examination was accurate. We also discussed reasons for missing documentation in the medical records with officials at the responsible medical units in Germany for those units whose medical records we reviewed. We conducted our review from October 1996 to April 1997 in accordance with generally accepted government auditing standards. Steve J. Fox, Evaluator-in-Charge; Lynn C. Johnson, Evaluator; William L. Mathers, Evaluator
Pursuant to a legislative requirement, GAO determined what action, if any, the Department of Defense (DOD) has taken to improve medical surveillance before, during, and after deployments, focusing on Operation Joint Endeavor. GAO noted that: (1) DOD has initiated actions to improve its medical surveillance for deployments since the Gulf War; (2) a joint medical surveillance policy, currently under development since late 1994, calls for a comprehensive DOD-wide medical surveillance capability to monitor and assess the effects of deployments on servicemembers' health; (3) provisions of the draft policy address the medical surveillance problems experienced during the Gulf War; however, its success in resolving the problems cannot be assessed until the directive and implementing instruction are finalized and applied to a deployment; (4) DOD officials expect the policy to be finalized by September 1997; (5) after the policy is issued, the services and responsible offices are to develop detailed implementing instructions; (6) DOD has also implemented two comprehensive medical surveillance plans--one for Operation Joint Endeavor in Bosnia-Herzegovina, Croatia, and Hungary, and the other for the current deployment in southwest Asia; (7) these plans address the medical surveillance problems experienced during the Gulf War and specifically call for identifying servicemember deployment information, monitoring environmental health and disease threats, doing personnel medical assessments, maintaining a centralized collection of medical assessment data, and employing certain medical record-keeping requirements; (8) recognizing that this is DOD's first attempt, its success in implementing the medical surveillance plan for Operation Joint Endeavor has been mixed; and (9) although the plan provided for enhanced medical surveillance compared to the Gulf War, GAO's review disclosed the following problems, all of which offer DOD and the services lessons to be learned as they continue 
to develop their medical surveillance capabilities: (a) the personnel database used for tracking which Air Force and Navy personnel were deployed is considered inaccurate by DOD personnel; (b) many Army personnel who should have received postdeployment medical assessments did not receive them; (c) when postdeployment medical assessments are done, they are frequently done late; (d) the centralized database for collecting both in-theater and home unit postdeployment medical assessments is incomplete for many Army personnel; and (e) many servicemembers' medical records GAO reviewed, maintained by medical units in Germany, were incomplete regarding in-theater postdeployment medical assessments done, medical servicemembers' visits during deployment, and documentation of personnel receiving the tick-borne encephalitis vaccine.
Throughout the disability compensation claims process, VBA staff have various roles and responsibilities. Claims assistants are primarily responsible for establishing the electronic claims folders and for determining whether the dispositions of the claims and control actions have been appropriately identified. Veteran service representatives are responsible for providing veterans with explanations regarding the disability compensation benefits programs and entitlement criteria. They also are to conduct interviews, gather relevant evidence, adjudicate claims, authorize payments, and input the data necessary to generate the awards and notification letters to veterans describing the decisions and the reasons for them. Rating veterans service representatives are to make claims rating decisions and analyze claims by applying VBA’s schedule for rating disabilities (rating schedule) against claims submissions; they also are to prepare rating decisions and the supporting justifications. Further, they are to inform the veteran service representative, who then notifies the claimant of the decision and the reasons for the decision. Supervisory veteran service representatives are to ensure that the quality and timeliness of service provided by VBA meets performance indicator goals. They are also responsible for the cost-effective use of resources to accomplish assigned outcomes. Decision review officers are to examine claims decisions and perform an array of duties to resolve issues raised by veterans and their representatives. They may conduct a new review or complete a review of a claim without deference to the original decision; they also can revise that decision without new evidence or clear and obvious evidence of errors in the original evaluation. The disability compensation claims process starts when a veteran (or other designated individual) submits a claim to VA in paper or electronic form. If submitted electronically, a claim folder is created automatically. 
When a paper claim is submitted, a claims assistant creates the electronic folder. Specifically, when a regional office receives a new paper claim, the receipt date is recorded electronically and the paper files (e.g., medical records and other supporting documents) are shipped to one of four document conversion locations so that the supporting documents can be scanned and converted into a digital image. In the processing of both electronic and paper claims, a veteran service representative reviews the information supporting the claim and helps identify any additional evidence that is needed to evaluate the claim, such as the veteran’s military service records, medical examinations, and treatment records from medical facilities and private medical service providers. Also, if necessary to provide support to substantiate the claim, the department performs a medical examination on the veteran. Once all of the supporting evidence has been gathered, a rating veterans service representative evaluates the claim and determines whether the veteran is eligible for benefits. If so, the rating veterans service representative assigns a disability rating (expressed as a percentage). A veteran who submits a claim with multiple disabilities receives a single composite rating. If the veteran is due to receive compensation, an award is prepared and the veteran is notified of the decision. A veteran can reopen a claim for additional disability benefits if, for example, he or she experiences a new or worsening service-connected disability. If the veteran disagrees with the regional office’s decision on the additional claim, a written notice of disagreement may be submitted to the regional office to appeal the decision, and the veteran may request to have the appeal processed at the regional office by a decision review officer or through the Board of Veterans’ Appeals. Figure 1 presents a simplified view of VA’s disability compensation claims process. 
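The single composite rating mentioned above is not a simple sum of the individual percentages. VA combines ratings under the method in 38 C.F.R. § 4.25, applying each additional disability to the veteran's remaining (non-disabled) capacity and rounding the result to the nearest degree divisible by 10. A minimal sketch of that method (illustrative only, not VBA's actual implementation):

```python
def combined_rating(ratings):
    """Combine disability percentages per the 38 C.F.R. 4.25 method:
    each rating, largest first, is applied to the remaining non-disabled
    capacity; the result is rounded to the nearest degree divisible by 10."""
    remaining = 100.0
    for r in sorted(ratings, reverse=True):
        remaining -= remaining * r / 100.0
    combined = 100.0 - remaining
    return int((combined + 5) // 10) * 10  # values ending in 5 round up

# Ratings of 50 and 30 percent combine to 65, which rounds to 70, not 80.
print(combined_rating([50, 30]))  # 70
```

The non-additive combination is why two moderate ratings yield less than their arithmetic sum.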
VBA began the transformation of its paper-intensive claims process to a paperless environment in March 2009, and the effort became formally established as the Veterans Benefits Management System program in May 2010. VBA’s initial plans for VBMS emphasized the development of a paperless claims platform to fully support the processing of disability compensation and pension benefits, as well as appeals. The program’s primary focus was to convert existing paper-based claims folders into electronic claims folders (eFolders) to allow VBA staff to access claims information and evidence in an electronic format. Beyond the establishment of eFolders, VBMS is intended to streamline the entire disability claims process, from establishment through award, by automating rating decision recommendations, award and notification processes, and communications between VBA and the veteran throughout the claims life cycle. The system is also intended to assist in eliminating the claims backlog and serve as the enabling technology for quicker, more accurate, and integrated claims processing in the future. Moreover, it is to replace many of the key outdated legacy systems—which are still in use today—for managing the claims process, including:

- Share—used to establish claims; it records and updates basic information about veterans and dependents.
- Modern Award Processing-Development—used to manage the claims development process, including the collection of data to support the claims and tracking of them.
- Rating Board Automation 2000—provides information about laws and regulations pertaining to disabilities, which are used by rating specialists in evaluating and rating disability claims.
- Award—used to prepare and calculate the benefit award based on the rating specialist’s determination of the claimant’s percentage of disability. It is also used to authorize the claim for payment. 
VBMS is to consist of three modules:

- VBMS-Core is intended to provide the foundation for document processing and storage during the claims development process, including establishing claims; viewing and storing electronic documents in the eFolder; and tracking evidence requested from beneficiaries. The eFolder serves as a digital repository for all documents related to a claim, such as the veteran’s military service records, medical examinations, and treatment records from VA and Department of Defense medical facilities, and from private medical service providers. Unlike with paper files, this evidence can be reviewed simultaneously by multiple VBA claims processors at any location.
- VBMS-Rating is to provide raters with Web-accessible tools, including rules-based rating calculators and the capability for automated decision recommendations. For example, the hearing loss calculator is to automate decisions using objective audiology data and rules-based functionality to provide the rater with a suggested rating decision. In addition, the module is expected to include stand-alone evaluation builders—essentially interactive disability rating schedules—for all parts of the human body. With this tool, the rater uses a series of check boxes to identify the veteran’s symptoms and the evaluation builder identifies the proper diagnostic code and the level of compensation based on those symptoms.
- VBMS-Awards is to provide an automated award and notification process to improve award accuracy and reduce rework associated with manual development of awards. This module is intended to automate and standardize communications between VBA and the veteran at the final stages of the claims process.

VBA is using an agile software development methodology to develop, test, and deliver the system’s functionality to its users. 
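The evaluation-builder concept described above amounts to a rules table that maps checked symptom boxes to a diagnostic code and a suggested evaluation level. The sketch below illustrates that idea; the symptoms, diagnostic code, and percentages are invented placeholders, not VA's actual rating schedule:

```python
# Hypothetical evaluation builder. Each rule pairs a set of required symptom
# check boxes with a diagnostic code and a suggested evaluation percentage.
# Rules are ordered most severe first; all values are placeholders.
RULES = [
    ({"constant pain", "limited motion", "instability"}, "5299", 40),
    ({"constant pain", "limited motion"}, "5299", 20),
    ({"constant pain"}, "5299", 10),
]

def suggest_rating(checked):
    """Return (diagnostic_code, percent) for the first rule whose required
    symptoms are all checked, or (None, 0) if no rule matches."""
    for required, code, percent in RULES:
        if required <= checked:  # all required symptoms are checked
            return code, percent
    return None, 0

print(suggest_rating({"constant pain", "limited motion"}))  # ('5299', 20)
```

Ordering the rules from most to least severe ensures the rater sees the highest evaluation the checked symptoms support.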
An agile approach allows subject matter experts to validate requirements, processes, and system functionality in increments, and to deliver the functionality to users in shorter cycles. Accordingly, the strategic road map that the VBMS Program Management Office is using to guide the system development effort indicated that releases of system functionality were to occur every 6 months. In a March 2013 Senate Veterans Affairs Committee hearing, VA’s Under Secretary for Benefits stated that VBMS development was expected to be completed in 2015. Our September 2015 report noted that, since completing rollout of the initial version of VBMS at all regional offices in June 2013, VBA has continued developing and implementing additional system functionality and enhancements that support the electronic processing of disability compensation claims. As a result, 95 percent of records related to veterans’ disability claims are electronic and reside in the system. However, while the Under Secretary for Benefits stated in March 2013 that the development of the system was expected to be completed in 2015, implementation of functionality to fully support electronic claims processing was delayed until beyond 2015. Specifically, even with the progress VBA has made toward developing and implementing the system, the timeline for initial deployment of a national workload management capability was delayed beyond the originally planned date of September 2014 to October 2015, with additional deployment to occur throughout fiscal year 2016. Efforts undertaken thus far have addressed the strategic road map’s objective to deliver a national workload management capability and have entailed developing the technology and business processes needed to support the National Work Queue, which is intended to handle new disability claims in a centralized queue and assign claims to the next regional office with available capacity. 
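The National Work Queue's central idea (routing each new claim to whichever regional office currently has the most spare capacity) can be sketched with a priority queue. The office names and capacity figures below are hypothetical:

```python
import heapq

# Hypothetical regional offices, stored as (-spare_capacity, name) so the
# heap always pops the office with the most open capacity first.
offices = [(-120, "Office A"), (-45, "Office B"), (-80, "Office C")]
heapq.heapify(offices)

def assign(claim_id):
    """Route a claim to the office with the most spare capacity."""
    neg_spare, name = heapq.heappop(offices)
    heapq.heappush(offices, (neg_spare + 1, name))  # one slot now in use
    return claim_id, name

for claim in ("C-1", "C-2", "C-3"):
    print(assign(claim))  # each routes to Office A while it has most capacity
```

As an office's spare capacity shrinks, the heap naturally shifts new claims to the next office with the most room, which is the load-balancing behavior the queue is meant to provide.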
The Program Management Office began work for the National Work Queue in June 2014, and had intended to deploy the first phase of functionality to users in September 2014. However, in late May 2015, the Director of the office informed us that VBA had delayed the initial rollout of the National Work Queue until October 2015 so that the department could fully focus on meeting its goal to eliminate the claims backlog by the end of September 2015. Following the initial rollout, the Program Management Office intends to implement the National Work Queue at all regional offices through fiscal year 2016. Beyond this effort, VBMS program documentation identified additional work to be performed after fiscal year 2015 to fully automate disability claims processing. Specifically, the Program Management Office identified the need to automate steps associated with a veteran’s request for an increase in disability benefits, such as when an existing medical condition worsens. In addition, the Director stated that the Program Management Office intends to develop a capability to automatically associate veterans’ correspondence when a new piece of evidence to support a claim is received electronically or scanned into VBMS. The office also plans to integrate VBMS with VA’s Integrated Disability Evaluation System, which contains the results of veterans’ disability medical examinations, as well as with external systems that contain military service treatment records for veterans, including those at the National Personnel Records Center. Further, while VBMS was planned to support the processing of disability compensation and pension benefits, VBA has not yet developed and implemented end-to-end pension processing capabilities in the system. Without such capabilities, the agency must continue to rely on three legacy systems to process pension claims. 
Specifically, program officials stated that both the Modern Award Processing-Development and Award legacy systems contain functionality related to processing pensions and will need to remain operational until VBMS can process pension claims. In addition, the Share legacy system contains functionality that is still needed throughout the claims process. Program documentation indicates that the first phase of pension-related functionality is expected to be introduced in December 2015. However, VBA has not yet developed plans and schedules for retiring the legacy systems and for fully developing and implementing their functionality in VBMS. VBA’s progress toward developing and implementing appeals processing capabilities in VBMS also has been limited. Specifically, although the information in a veteran’s eFolder is available to appeals staff for review, the appeals process for disability claims is not managed using the new system. According to VA’s fiscal year 2016 budget submission, the department is pursuing a separate effort to manage end-to-end appeals modernization, and has requested $19.1 million in fiscal year 2016 funds to develop a system that will provide functionality not available in VBMS or other VA systems. The Director of the Program Management Office stated that VBA is currently analyzing commercial IT solutions that can meet the business requirements for appeals, such as providing document navigation capabilities. According to the Director, VBMS, nevertheless, is expected to be part of the appeals modernization solution because components of the system, such as the eFolder and certain workload management functionality, are planned to continue supporting appeals management. In the Director’s view, the fact that VBMS requires additional development beyond 2015 does not reflect a delay in completing the system’s development. Instead, the additional time is a consequence of decisions to enlarge the program’s scope over time. 
The Director stated that the system’s original purpose had been to serve primarily as an electronic document repository, and that the program has met this goal. In addition, the Director said that, as the program’s mission has expanded to support the department’s efforts to eliminate the disability claims backlog, the office has had to re-prioritize, add, and defer system requirements to accommodate broader departmental decisions and, in some cases, regulatory changes. For example, the office was tasked with developing functionality in VBMS to meet regulatory requirements for processing disability claims using mandatory forms. Officials in the office said they were made aware of this requirement well after system planning for the March 2015 release had been completed, which introduced significant complexity to their development work. Finally, VBA included in its strategic road map a number of objectives related to VBMS that are planned to be addressed in fiscal year 2016. Officials in the Program Management Office stated that they intend to develop tactical plans that identify expected capabilities to be provided in future releases. Nevertheless, due to the department’s incremental approach to developing and implementing VBMS, VBA has not yet produced a plan that identifies when VBMS will be completed and can be expected to fully support disability and pension claims processing and appeals. Thus, it will be difficult for the department to hold its managers accountable for meeting a time frame and for demonstrating progress. Accordingly, we recommended that the department develop an updated plan for VBMS that includes a schedule for when VBA intends to complete development and implementation of the system, including capabilities that fully support disability claims, pension claims, and appeals processing. VA agreed with our recommendation. 
Consistent with our guidance on estimating program costs, an important aspect of planning for IT projects, such as VBMS, involves developing a reliable cost estimate to help managers evaluate a program’s affordability and performance against its plans, and provide estimates of the funding required to efficiently execute a program. In 2011, VBA submitted to the Office of Management and Budget a life-cycle cost estimate for VBMS of $934.8 million. This estimate was intended to capture costs for the system’s development, deployment, sustainment, and general operating expenses through the end of fiscal year 2018. However, as of July 2015, the program’s actual costs had exceeded the 2011 life-cycle cost estimate. Specifically, VBMS received approximately $1 billion in funding through the end of fiscal year 2015 and the department has requested an additional $290 million for the program in fiscal year 2016. A significant concern is that the Program Management Office has not reliably updated the VBMS life-cycle cost estimate to reflect the program’s expanded scope and timelines for completion of the system. This is largely attributable to the fact that the office has developed cost estimates for 2-year project cycles that are used for VBMS milestone reviews under the Office of Information and Technology’s Project Management Accountability System. When asked how the Program Management Office arrived at the cost estimates reported in the milestone reviews, program officials stated that they developed rough order of magnitude estimates for each business need based on expert knowledge of the system, past development and engineering experience, and lessons learned. However, while this approach may have provided adequate information for VBA to prioritize VBMS system requirements to be addressed in the next release, it has not produced estimates that could serve as a basis for identifying the system’s funding needs. 
Because it is typically derived from limited data and in a short time, a rough order of magnitude analysis is not equivalent to a budget-quality cost estimate and may limit an agency’s ability to identify the funding necessary to efficiently execute a program. In addition, the Program Management Office’s annual operating plan, which is generally limited to high-level information about the program’s organization, priorities, staffing, milestones, and performance measures for fiscal year 2015, also shows estimated costs totaling $512 million for VBMS development from fiscal years 2017 through 2020. However, according to the Director of the Program Management Office, this estimate was also developed using rough order of magnitude analysis. Further, the estimate does not provide reliable information on life-cycle costs because it does not include estimated IT sustainment and general operating expenses. Thus, even though the Program Management Office developed rough order of magnitude cost estimates for VBMS, these estimates have not been sufficiently reliable to effectively identify the program’s funding needs. Instead, during the last 3 fiscal years, the Director has had to request an additional $118 million in IT development funds to meet program demands and to ensure support for ongoing development contracts. Specifically, in May 2013, VA requested $13.3 million to support additional work on VBMS. Then, during fiscal year 2014, VA reprogrammed $73 million of unobligated IT sustainment funds to develop functionality to transfer service treatment records from the Department of Defense to VA, and to support development of VBMS-Core functionality. 
In December 2014, the Program Management Office identified the need for additional fiscal year 2015 funds for ongoing system development contracts for VBMS-Core and VBMS-Awards, and, in late April 2015, department leadership submitted a letter to Congress requesting permission to reprogram $31.7 million to support work on these contracts, the National Work Queue, and other VBMS efforts. According to the Program Management Office Director, the need to request additional funding does not represent additional risk to the program, but is the result of VBMS’s success. The Director further noted that, as the Program Management Office has identified opportunities to increase functionality to improve the electronic claims process, its funding needs have also increased. Nevertheless, the evolution of the VBMS program illustrates the importance of continuous planning, including cost estimating, so that trade-offs between cost, schedule, and scope can be effectively managed. Further, without a reliable estimate of the total costs associated with completing work on VBMS, stakeholders will have a limited view of VBMS’s future resource needs and the program is at risk of not being able to secure appropriate funding to fully develop and implement the system. Therefore, we recommended that VA develop an updated plan for VBMS that includes the estimated cost to complete development and implementation of the system. VA agreed with our recommendation. Our and other federal IT guidance recognize the importance of defining program goals and related performance targets, and using such targets to assess progress in achieving the goals. System performance and response times have a large impact on whether staff successfully complete work tasks. If systems are not responding at agreed-upon levels for availability and performance, it can be difficult to ensure that staff will complete tasks in a timely manner. 
This is especially important in the VBA claims processing environment, where staff are evaluated on their ability to process claims in a timely manner. VBA reported that, since its initial rollout in January 2013, VBMS has exceeded its 95 percent goal for availability. Specifically, the system was available at a rate of 98.9 percent in fiscal year 2013 and 99.3 percent in fiscal year 2014. Through May of fiscal year 2015, it was available for 99.98 percent of the time. Nevertheless, while VBA has reported exceeding its availability goals for VBMS, the system has also experienced periods of unavailability, many times at a critical level affecting all users. Specifically, since January 2013, VBA reported 57 VBMS outages that totaled about 117 hours of system unavailability. The system experienced about 18 hours of outages in January 2014, which were almost entirely at the critical level and affected all users. It reported experiencing only 2 system outages since July 2014—a 30-minute critical outage in December 2014 and a 23-minute critical outage in May 2015. In addition to system availability, VBA monitors system response times for each of the VBMS modules using an application that measures the amount of time taken for each transaction. From September 2013 through April 2015, VBA reported a decrease in average response times for VBMS-Core and VBMS-Rating. It attributed the decrease in response times to continuous engineering improvements to system performance. Program officials also explained that the difference in response times between modules was due to the type of information that is being pulled into each module from various other VBA systems. For example, both VBMS-Core and VBMS-Rating require information from the VBA corporate database, but VBMS-Core is populated with data from multiple VBA systems in addition to the corporate database. 
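Availability percentages such as those reported above follow from simple uptime arithmetic: available time divided by total time in the measurement window. The sketch below is illustrative only; the 61.3-hour outage total is a hypothetical figure chosen to show how a result near the reported 99.3 percent for fiscal year 2014 would be derived, not VBA's actual outage data.

```python
def availability_pct(total_hours: float, outage_hours: float) -> float:
    """Percentage of the measurement window during which the system was up."""
    return 100.0 * (total_hours - outage_hours) / total_hours

# A non-leap fiscal year spans 365 * 24 = 8,760 hours. An outage total
# of roughly 61 hours (hypothetical) corresponds to about 99.3 percent
# availability, the level reported for fiscal year 2014.
fiscal_year_hours = 365 * 24
print(round(availability_pct(fiscal_year_hours, 61.3), 1))  # → 99.3
```

By the same arithmetic, the roughly 117 hours of outages reported since January 2013 are small relative to the tens of thousands of hours in that multi-year window, which is consistent with availability remaining above the 95 percent goal.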
Program officials told us that specific goals for mean transaction response times have not been established because they feel that adequate tools are in place to monitor system performance and provide alerts if there are response time issues. For example, VBMS performance is monitored in real time by dedicated staff at a contractor’s facility, users have access to a live chat feature where they can provide feedback on any issues they are experiencing with the system, and the VBMS help desk offers another avenue for users to provide feedback on the system’s performance. The officials also noted that, because transaction response times have decreased, which can be indicative of an improvement to system performance, they are focusing their resources on adding functionality instead of trying to get the system to achieve a specific average transaction response time. While VBA’s monitoring of VBMS’s performance is commendable and the system’s performance and response times have improved over time, the system is still in development and there is no guarantee that performance will remain at current levels as the system evolves. Performance targets and goals for VBMS response times would provide users with an expectation of the system response times they should anticipate, and management with an indication of how well the system is performing relative to performance goals. To address this situation, we recommended that the department establish goals for system response time and use the goals as a basis for periodically reporting actual system performance. VA agreed with this recommendation. A key element of successful system testing is appropriately identifying and handling defects that are discovered during testing. Outstanding defects can delay the release of functionality to end users, denying them the benefit of features. 
Key aspects of a sound defect management process include the planning, identification and classification, tracking, and resolution of defects. Leading industry and government organizations consider defect management and resolution to be among the primary goals of testing. The VBMS program has defect management policies in place and is actively performing defect management activities. Specifically, in October 2012, the department developed the VBMS Program Management and Technical Support Defect Management Plan, which describes the program’s defect management process. The plan was updated in March 2015 and describes, among other things, the process for identifying, classifying, tracking, and resolving VBMS defects. For example, it provides criteria for assigning four different levels of severity for defects— critical, high, medium, and low. According to the plan, critical severity defects are characterized by complete system or subsystem failure, complete loss of functionality, and compromised security or confidentiality. Critical defects also have extensive user impact and do not have workarounds. High severity defects can have major user impact, leading to significant loss of system functionality. Medium severity defects can have moderate user impact and lead to moderate loss of functionality. For high and medium severity defects, workarounds could exist. Low severity defects lead to minor loss of functionality with no workaround necessary. According to the Program Management Office, high, medium, and low severity defects do not need to be resolved prior to a system release. The Program Management Office uses an automated tool to monitor and track defects in the VBMS defect repository. It is used to produce a daily defect management report that is shared with VBMS leadership, and to provide the current status of all open defects identified in testing of a forthcoming VBMS release or identified during production of a previous release. 
According to the defect management plan, defects can be resolved in a number of different ways, and, once a defect has been fixed and has passed testing, it is considered done or resolved. Defects that cannot be attributed to an existing requirement are reclassified as a system enhancement and considered resolved, as they do not affect a current system release requirement. A defect is also considered resolved if it is determined to work as designed, duplicate another defect, or if it is no longer evident in the system. From March 2014 through March 2015, the total number of VBMS defects declined as release dates approached for four releases (7.0, 7.1, 8.0, and 8.1). Additionally, to the department’s credit, no critical defects remained at the time of each of these releases. However, even with the department’s efforts to resolve defects prior to a VBMS release, defects that affected system functionality remained open at the time of the releases. Specifically, of the 254 open defects at the time of VBMS release 8.1, 76 were high severity, 99 were medium severity, and 79 were low severity. Examples of medium and high severity defects that remained open at the time of VBMS release 8.1 included the following:

- E-mail addresses for dependents only occasionally allowed special characters (medium).
- The intent to file for compensation/pension had an active status for a deceased veteran (medium).
- Creating a claim in legacy or VBMS would remove the Homeless, POW, and/or Gulf War Registry Flash (high).
- Disability name appeared incorrectly in Issue and Decision text for amyotrophic lateral sclerosis (ALS) (high).
- VBMS-Core did not recognize updated rating decisions from VBMS-Rating (high).

According to the Program Management Office, these defects were communicated to users and an appropriate workaround for each was established. 
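The release-gating rule described above can be modeled very simply: the plan defines four severity levels, and only unresolved critical defects must be fixed before a release ships. The sketch below is an illustrative model of that rule, not VBA's actual tooling; the severity names and the release 8.1 counts are taken from the report text, while the class and function names are hypothetical.

```python
from enum import Enum

class Severity(Enum):
    CRITICAL = "critical"  # complete failure, no workaround, extensive user impact
    HIGH = "high"          # major user impact, significant loss of functionality
    MEDIUM = "medium"      # moderate user impact, moderate loss of functionality
    LOW = "low"            # minor loss of functionality, no workaround necessary

def clear_to_release(open_defects: list[Severity]) -> bool:
    """Per the plan, only unresolved critical defects block a release."""
    return Severity.CRITICAL not in open_defects

# Release 8.1 shipped with 76 high, 99 medium, and 79 low severity
# defects still open -- 254 in total, none of them critical.
open_at_8_1 = [Severity.HIGH] * 76 + [Severity.MEDIUM] * 99 + [Severity.LOW] * 79
print(len(open_at_8_1), clear_to_release(open_at_8_1))  # → 254 True
```

This makes the finding concrete: a release can pass the gate while still carrying a substantial number of high and medium severity defects that degrade system functionality.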
Nevertheless, even with the workarounds, high and medium severity open defects, which by definition impact system functionality, degraded users’ experiences with the system. Continuing to deploy system releases with defects that impact system functionality increases the risk that these defects will diminish users’ ability to process disability claims in an efficient manner. Accordingly, we recommended that VA reduce the incidence of high and medium severity level defects that are present at the time of future VBMS releases. The department agreed with this recommendation. Our September 2015 report noted that, in addition to having defined program goals and related performance targets, leading practices identify continuous customer feedback as a crucial element of IT project success. Particularly for projects like VBMS, where development activities are iterative, customer and end user perspectives and insights can be solicited through various methods—user acceptance testing, interviews, complaint programs, and satisfaction surveys—to validate or raise questions about the project’s implementation. Further, leading practices emphasize that periodic customer satisfaction data should be proactively used to improve performance and demonstrate the level of satisfaction the project is delivering. The Office of Management and Budget has developed standards and guidelines in survey research that are generally consistent with best practices and call for statistically valid data collection efforts to be used in fulfilling agencies’ customer service data collection. These leading practices also stress the importance of centrally integrating all customer feedback data in order to have more complete diagnostic information to guide improvement efforts. VA has used a variety of methods for obtaining customer and end user feedback on the performance of VBMS. 
For example, the department solicits end user involvement and feedback in the iterative system development process based on user acceptance criteria. According to the Senior Project Manager for VBMS Development within the Office of Information and Technology, at the end of each development cycle and before a new version of VBMS is deployed, end users are involved in user acceptance testing and a final customer acceptance meeting. The department also provides training to a subset of end users—known as “superusers”—on the updated functionality introduced in a new version of VBMS. These superusers are expected to train the remaining users in the field on the new version’s features. The department tracks the overall satisfaction level with training received after each VBMS major release. However, this tracking is limited to superusers’ satisfaction with the training, rather than their satisfaction with the system itself. Further, the department solicits customer feedback about the system through interviews. For example, the Director of the Program Management Office stated that the Under Secretary for Benefits hosts a weekly phone call with bargaining unit employees as a “pulse check” on VBA transformation activities, including VBMS. According to this official, the VBA Office of Field Operations also offers an instant messaging chat service to all regional office employees to solicit feedback about the latest deployment of VBMS functionality. Another method by which the department obtains customer input is through a formal feedback process. For example, according to the Director, VA provides national service desk support to assist users in troubleshooting system issues and identifying system defects. In addition, VBMS applications include a built-in feature that enables users to provide feedback to the Program Management Office on problems with the system. According to the Director, the feedback received by the office also helps to identify user training issues. 
Nevertheless, while VA has taken these various steps to obtain feedback on the performance and implementation of VBMS, it has not established goals to define user satisfaction that can be used as a basis for gauging the success of its efforts to promote satisfaction with the system. Further, while the efforts that have been taken to solicit users’ feedback provide VBA with useful insights about particular problems, data are not centrally compiled or sufficient for supporting overall conclusions about whether customers are satisfied. In addition, VBA has not employed a customer satisfaction survey of claims processing employees who use the system on a daily basis to process disability claims. Such a survey could provide a more comprehensive picture of overall customer satisfaction and help identify areas where the system’s development and implementation efforts might need additional attention. According to the Director of the Program Management Office, VBA has not used a survey to solicit feedback because of concern that such a mechanism may negatively impact the efficiency of claims processors in completing disability compensation claims on behalf of veterans. Further, the Director believed that the office had the benefit of receiving ongoing end user input on VBMS by virtue of the intensive testing cycles, as well as several of the other mechanisms by which end users have provided ongoing feedback. Nevertheless, without establishing user satisfaction goals and collecting the comprehensive data that a statistically valid survey can provide, the Program Management Office limits its ability to obtain a comprehensive understanding of VBMS users’ satisfaction with the system. Thus, VBA could miss opportunities to improve the efficiency of its claims process by increasing satisfaction with VBMS. 
Therefore, we recommended that VA develop and administer a statistically valid survey of VBMS users to determine the effectiveness of steps taken to make improvements in users’ satisfaction. The department agreed with this recommendation. In response to a statistical survey that we administered, most of the VBMS users reported that they were satisfied with the system that had been implemented at the time of the survey. These users (claims assistants, veteran service representatives, supervisory veteran service representatives, rating veterans service representatives, decision review officers, and others) were satisfied with the three modules of VBMS. Specifically, an estimated 59 percent of the claims processors were satisfied with VBMS-Core; an estimated 63 percent were satisfied with the Rating module, and an estimated 67 percent were satisfied with the Awards module. Nevertheless, while a majority of users were satisfied with the three modules, decision review officers expressed considerably less satisfaction than other users with VBMS-Core and VBMS-Rating. Specifically, for VBMS-Core, an estimated 27 percent of decision review officers were satisfied compared to an estimated 59 percent of all roles of claims processors (including decision review officers) who were satisfied. In addition, for VBMS-Rating, an estimated 38 percent of decision review officers were satisfied, compared to an estimated 63 percent of all roles of claims processors. Decision review officers were considerably less satisfied with VBMS in comparison to all roles of claims processors in additional areas. For example, an estimated 26 percent of decision review officers viewed VBMS-Core as an improvement over the previous legacy system or systems for establishing claims and storing and reviewing electronic documents related to a claim in an eFolder. In contrast, an estimated 58 percent of all users (including decision review officers) viewed the Core module as an improvement. 
In addition, an estimated 26 percent of decision review officers viewed VBMS-Rating as an improvement over the previous systems with respect to providing Web-accessible tools, including rules-based rating calculators, to assist in making claims rating decisions. In contrast, an estimated 55 percent of all roles of claims processors viewed the Rating module as an improvement. For VBMS-Awards, an estimated 61 percent of all roles viewed this module as an improvement over the previous systems to automate the award and notification process. Similarly, in considering the three modules, a majority of users (including decision review officers) would have chosen VBMS over the legacy system or systems. However, decision review officers indicated that they were less likely to have chosen VBMS-Core and VBMS-Rating over legacy systems. Specifically, an estimated 27 percent of decision review officers would have chosen VBMS-Core compared to an estimated 60 percent of all roles of claims processors. In addition, an estimated 27 percent of decision review officers would have chosen VBMS-Rating compared to 61 percent of all roles that would have chosen the system over the legacy system or systems. For VBMS-Awards, an estimated 67 percent of all roles would have chosen this module over the previous systems. Decision review officers perform an array of duties to resolve claims issues raised by veterans and their representatives. They may also conduct a new review or complete a review of a claim without deference to the original decision, and, in doing so, must click through all documents included in the e-Folder. Survey comments from decision review officers stated, for example, that reviews in the VBMS paperless environment take longer because of the length of time spent loading, scrolling, and viewing each document (particularly if the documents are large, such as a service medical record file). 
Additionally, multiple decision review officers commented that it is easier and faster to review documents in a paper file. Although such comments provide illustrative examples of individual decision review officers’ views and are not representative, according to the Director of the Program Management Office, decision review officers’ relative dissatisfaction is not surprising because the system does not yet include functionality that supports their work, which primarily relates to appeals processing. To improve this situation, we recommended that VA establish goals that define customer satisfaction with the system and report on actual performance toward achieving the goals based on the results of our survey of VBMS users and any future surveys VA conducts. The department concurred with this recommendation. In conclusion, while VA has made progress in developing and implementing VBMS, additional capabilities to fully process disability claims were delayed beyond when the system’s completion was originally planned. Further, in the absence of a plan that identifies when and at what cost the system can be expected to fully support disability compensation and pension claims processing and appeals, holding VA management accountable for meeting a schedule, while ensuring sufficient program funding, will be difficult. Also, without goals for system response times, users do not have an expectation of the response times they can anticipate, and management lacks an indication of how well the system is performing. Furthermore, continuing to deploy system releases with defects that impact functionality increases the risk that these defects will diminish users’ ability to process disability claims in an efficient manner. 
Lastly, although the results of our survey provide VBA with useful data about users’ satisfaction with VBMS (e.g., the majority of users are satisfied), without having goals to define user satisfaction, VBA does not have a basis for gauging the success of its efforts to improve the system. As we stressed in our report, attention to these issues can improve VA’s efforts to effectively complete the development and implementation of VBMS. Fully addressing our recommendations, as VA agreed to do, should help the department give appropriate attention to these issues. Chairman Miller, Ranking Member Brown, and Members of the Committee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have. For further information about this testimony, contact Valerie C. Melvin at (202) 512-6304 or melvinv@gao.gov. Contact points for our offices of Congressional Relations and Public Affairs are listed on the last page of the testimony. Other key contributors to this testimony include Mark Bird (assistant director), Kavita Daitnarayan, Kelly Dodson, Nancy Glover, Brandon S. Pettis, and Eric Trout. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
VBA pays disability benefits for conditions incurred or aggravated while in military service, and pension benefits for low-income veterans who are either elderly or have disabilities unrelated to military service. In fiscal year 2014, the department paid about $58 billion in disability compensation and about $5 billion in pension claims. The disability claims process has been the subject of attention by Congress and others, due in part to long waits for processing claims and a large backlog of claims. To process disability and pension claims more efficiently, VA began development and implementation of an electronic, paperless system—VBMS—in 2009. This statement summarizes GAO's September 2015 report (GAO-15-582) on (1) VA's progress toward completing the development and implementation of VBMS and (2) the extent to which users report satisfaction with the system. As GAO reported in September 2015, the Veterans Benefits Administration (VBA) within the Department of Veterans Affairs (VA) has made progress in developing and implementing the Veterans Benefits Management System (VBMS), with deployment of the initial version of the system to all of its regional offices as of June 2013. Since then, VBA has continued developing and implementing additional system functionality and enhancements that support the electronic processing of disability compensation claims. As a result, 95 percent of records related to veterans' disability claims are electronic and reside in the system. However, VBMS is not yet able to fully support disability and pension claims, as well as appeals processing. Nevertheless, while the Under Secretary for Benefits stated in March 2013 that the development of VBMS was expected to be completed in 2015, implementation of functionality to fully support electronic claims processing has been delayed beyond 2015. In addition, VBA has not yet produced a plan that identifies when the system will be completed. 
Accordingly, holding VA management accountable for meeting a time frame and for demonstrating progress will be difficult. As VA continues its efforts to complete development and implementation of VBMS, three areas could benefit from increased management attention.

Cost estimating: The program office does not have a reliable estimate of the cost for completing the system. Without such an estimate, VA management and the department's stakeholders have a limited view of the system's future resource needs, and the program risks not having sufficient funding to complete development and implementation of the system.

System availability: Although VBA has improved its performance regarding system availability to users, it has not established system response time goals. Without such goals, users do not have an expectation of the system response times they can anticipate and management does not have an indication of how well the system is performing relative to performance goals.

System defects: While the program has actively managed system defects, a recent system release included unresolved defects that impacted system performance and users' experiences. Continuing to deploy releases with large numbers of defects that reduce system functionality could adversely affect users' ability to process disability claims in an efficient manner.

VA has not conducted a customer satisfaction survey that would allow the department to compile data on how users view the system's performance, and ultimately, to develop goals for improving the system. GAO's survey of VBMS users found that a majority of them were satisfied with the system, but decision review officers were considerably less satisfied. Although the results of GAO's survey provide VBA with data about users' satisfaction with VBMS, the absence of user satisfaction goals limits the utility of survey results. 
Specifically, without having established goals to define user satisfaction, VBA does not have a basis for gauging the success of its efforts to promote satisfaction with the system, or for identifying areas where its efforts to complete development and implementation of the system might need attention. In its September 2015 report, GAO recommended that VA develop a plan with a time frame and a reliable cost estimate for completing VBMS, establish goals for system response time, minimize the incidence of high and medium severity system defects for future VBMS releases, assess user satisfaction, and establish satisfaction goals to promote improvement. VA concurred with GAO's recommendations.
According to DOD’s guidance, the immediate goal of stability operations often is to provide the local populace with security, restore essential services, and meet humanitarian needs. The long-term goal is to help develop indigenous capacity for securing essential services, a viable market economy, rule of law, democratic institutions, and a robust civil society. Stability operations include a continuum of activities that can occur throughout the spectrum of conflict, ranging from preconflict stabilization to postconflict reconstruction and transition to effective governance. DOD has identified six major activities, or major mission elements, that U.S. military forces, civilian government agencies, and in many cases multinational partners may need to engage in to stabilize an environment and build sustainable host-nation capabilities. Figure 1 depicts these major mission elements. As figure 1 illustrates, the mission elements, or dimensions, of stability operations range from establishing and maintaining a secure environment to delivering humanitarian assistance, providing economic support, and establishing effective forms of governance. As shown in the figure, DOD envisions one key element—strategic communications—as encompassing all of the other five mission elements. DOD guidance recognizes that many stability operations tasks are best performed by indigenous, foreign, or U.S. civilian professionals and that DOD’s participation may be in a supporting role. However, this guidance also states that U.S. military forces shall be prepared to perform all tasks necessary to establish or maintain order when civilians cannot do so. NSPD-44 outlines the major roles and responsibilities throughout the government for stability operations, including the responsibilities of the National Security Council, State, non-DOD agencies, and DOD. 
In November 2005, DOD issued DOD Directive 3000.05, which established the department’s overall policy and assigned responsibilities within DOD for planning, training, and preparing to conduct and support stability operations. Table 1 highlights several key responsibilities established by NSPD-44 and DOD Directive 3000.05. Within DOD, the Office of the Under Secretary of Defense for Policy is responsible for developing stability operations policy options for the Secretary of Defense and, according to DOD officials, provides oversight for the implementation of DOD’s stability operations policy. Under DOD Directive 3000.05, the Secretaries of the Military Departments and the Commander of U.S. Special Operations Command, in coordination with the Chairman of the Joint Chiefs of Staff and the Under Secretary of Defense for Policy, shall each develop stability operations capabilities. Commanders of the geographic combatant commands, through the Chairman of the Joint Chiefs of Staff, shall identify stability operations requirements within their areas of responsibility, shown in figure 2. Combatant commands are also directed to engage other organizations in stability operations planning, training, and exercises, in coordination with the Joint Staff and the Office of Policy. The department has recently announced that it plans to realign these areas of responsibility to establish a new geographic combatant command for the continent of Africa. As of February 2007, the details of this realignment had not been finalized. DOD has developed and continues to evolve an approach to enhance its stability operations capabilities, but it has encountered challenges in identifying capability gaps and developing measures of effectiveness, which are critical to successfully executing this approach. Among the many improvement efforts underway, the department has taken three key steps that frame this new approach. 
Specifically, the department has: (1) formalized a new stability operations policy that elevated stability operations to a core mission, gave them priority comparable to combat operations, and assigned numerous responsibilities to DOD organizations; (2) expanded DOD’s planning construct to more fully address stability operations; and (3) defined a new joint operating concept that will serve as a basis for how the military will support stabilization, security, transition, and reconstruction operations in the next 15 to 20 years. However, DOD has made limited progress in identifying and prioritizing needed capabilities and in developing measures of effectiveness, which are critical steps required by DOD’s new directive and important tenets of performance-based management. Capability gaps are not being assessed because the department has yet to issue adequate guidance on how to conduct these assessments or to set specific time frames to complete them. Similarly, the department has made limited progress in developing measures of effectiveness because current guidance does not clearly articulate a systematic approach for developing them. Without a comprehensive assessment of stability operations capability gaps and clear measures of effectiveness, the department may not be appropriately prioritizing and developing the needed capabilities, or measuring its progress toward achieving these goals. In the past 18 months, DOD has taken positive steps to improve stability operations capabilities by establishing a new and comprehensive policy, planning guidance, and joint operating concept. First, in November 2005, DOD published DOD Directive 3000.05, which established DOD’s stability operations policy and assigned responsibilities within the department for planning, training, and preparing to conduct and support stability operations. 
This directive reflects a fundamental shift in DOD’s policy because it designates stability operations as a core mission that shall be given priority comparable to combat operations and emphasizes that integrated military and civilian efforts are key to successful stability operations. According to DOD officials, this publication is intended to serve as a catalyst, pushing DOD to develop methods to enhance its own capabilities and to integrate the capabilities and capacities of the defense, diplomatic, and development communities for achieving unity of effort in stability operations. The policy recognizes that stability operations will not always be led by the military, and that DOD needs to be prepared to provide support to both government and nongovernment organizations when necessary. The directive assigns responsibility for approximately 115 tasks to 18 organizations in the department, such as the Under Secretaries for Policy and Intelligence, the Chairman of the Joint Chiefs of Staff, the Combatant Commanders, and the Secretaries of the Military Departments. The directive states that stability operations skills, such as language capabilities and regional area expertise, shall be developed and incorporated into professional military education at all levels, and that information shall be shared with U.S. departments and agencies, foreign governments and forces, international organizations, nongovernmental organizations, and members of the private sector supporting stability operations, consistent with legal requirements. The policy also states that military plans shall address stability operations throughout all phases of an operation or plan as appropriate, and that the stability operations dimensions of military plans shall be exercised and tested, when appropriate, with other U.S. departments and agencies. 
In addition, the directive states that the Under Secretary for Policy shall submit a semiannual report, developed in coordination with responsible DOD components, to the Secretary of Defense evaluating the department’s progress in implementing the directive. A second step taken by DOD to improve stability operations was to broaden its military planning guidance for joint operations to include noncombat activities to stabilize countries or regions and prevent hostilities, and postcombat activities that emphasize stabilization, reconstruction, and the transition of governance to civil authorities. Figure 3 illustrates the change in DOD planning guidance. As shown in figure 3, previous Joint Staff planning guidance considered four operational phases: deter and engage the enemy, seize the initiative, conduct decisive operations, and transition to peaceful activities. The revised planning guidance now directs consideration of six phases of an operation, which include shaping efforts to stabilize regions so that conflicts do not develop, and expanding the dimensions of stability operations that are needed in more hostile environments after conflicts occur. This new planning guidance requires planners to consider the types of activities that can help a nation establish a safe and secure environment, eliminating the need for armed conflict, as well as activities to assist a nation in establishing security forces and governing mechanisms to transition to self-rule. These are also the phases of an operation that will require significant unity of effort and close coordination between DOD and other federal agencies. In December 2006, DOD took a third step in outlining its approach to stability operations when the Joint Forces Command published the Military Support to Stabilization, Security, Transition, and Reconstruction Operations Joint Operating Concept. 
This operating concept describes how the future Joint Force Commander will provide military support to stabilization, security, transition, and reconstruction operations within a military campaign in pursuit of national strategic objectives in the 2014–2026 time frame. The operating concept focuses on the full range of military support that the future Joint Force might provide in foreign countries across the continuum from peace to crisis and conflict in order to assist a state or region that is under severe stress or has collapsed due to either a natural or man-made disaster. This publication provides a conceptual framework for how future commanders can provide military support in foreign countries to a full range of stabilization, security, transition, and reconstruction operations, such as:

assist an existing or new host nation government in providing security, essential public services, economic development, and governance following the significant degradation or collapse of the government’s capabilities due to internal failure or as a consequence of the destruction and dislocation of a war;

provide support to stabilize and administer occupied territory and care for refugees in major combat operations fought for limited objectives that fall short of forcibly changing the adversary regime;

support a fragile national government that is faltering due to serious internal challenges, which include civil unrest, insurgency, terrorism, and factional conflict;

assist a stable government that has been struck by a devastating natural disaster;

provide limited security cooperation assistance to a state that is facing modest internal challenges; and

provide military assistance and training to partner nations that increase their capability and capacity to conduct stabilization, security, transition, and reconstruction operations at home or abroad. 
This publication is intended to complement both policy and planning guidance by expanding the understanding of stability operations and by providing leaders with a conceptual explanation of the strategic considerations, solutions, risks and mitigations, and implications to consider when planning a stability operation. In addition to establishing a new policy, revising planning guidance, and developing a new joint operating concept, DOD has taken other complementary actions to address stability operations capabilities within the department. For example, in order to follow up on initiatives identified in the 2006 Quadrennial Defense Review, the department has published a series of roadmaps on specific topics such as Building Partnership Capacity. The Building Partnership Capacity Roadmap provides an action plan to meet objectives focused on strengthening interagency planning and enhancing both DOD and non-DOD capabilities in this area. Another step taken by DOD was to work with the Department of State to develop a draft planning guide for other federal agencies that is intended to assist these organizations in the planning for reconstruction and stabilization operations. DOD Directive 3000.05 tasked several organizations within the department to take specific actions to identify and prioritize stability operations capabilities, but the department has made limited progress in meeting this goal. Specifically, the directive states that the Under Secretary of Defense for Policy shall identify DOD-wide stability operations capabilities and recommend priorities to the Secretary of Defense. The Chairman of the Joint Chiefs of Staff is tasked to identify stability operations capabilities and assess their development. The Geographic Combatant Commanders, responsible for contingency planning and commanding U.S. forces in their regions, shall identify stability operations requirements. Finally, the Secretaries of the Military Departments and Commander of U.S. 
Special Operations Command are required to develop the required stability operations capabilities and capacity in coordination with the Chairman of the Joint Chiefs of Staff and the Under Secretary of Defense for Policy. Officials from the Under Secretary of Defense for Policy’s office stated that they intended to meet the requirement to identify capabilities and recommend priorities to the Secretary of Defense through an iterative process known as capability gap assessments. Policy officials envisioned that the geographic combatant commands would conduct theater-specific, scenario-driven assessments of forces and capabilities required for contingencies through DOD’s planning process. They also expected that the geographic commands would compare the planned requirements for stability operations with the currently available forces and military capabilities, and propose remedies for eliminating the gaps. DOD officials described the Joint Staff’s role as reviewing each of the combatant command assessments and providing guidance, including common standards and criteria, to the combatant commands to assist them in identifying their requirements. The combatant command requirements were then expected to drive each service’s development of stability operations capabilities and capacity. As discussed below, as of March 2007, DOD had made limited progress in identifying and prioritizing needed capabilities following this iterative capability gap assessment process. At the three combatant commands that we visited, we found that the identification of stability operations requirements was occurring in a fragmented manner. At Central Command, officials from the command’s assessment branch explained that there has been increased emphasis on stability operations across the command, especially for nonlethal activities, such as civil military operations. 
Officials explained that organizations at the command level routinely conduct capability assessments and submit a list of shortfalls for incorporation into the command’s consolidated integrated priority list, which the combatant commander submits annually to the Joint Chiefs of Staff. They envision that in the future these lists will include stability operations requirement shortfalls. Similarly, in the European Command, various organizations are independently conducting assessments within their respective areas. For example, within the combatant command headquarters, training officials explained that they were working on a consolidated and prioritized list of stability operations training requirements, while at the Naval component command, officials are evaluating each country within the command’s region to identify the specific stability operations requirements for that country. At the Pacific Command, officials stated that they had not tasked any of their component commands to identify stability operations requirements. However, component command officials indicated that capability requirements would be identified through routine processes, such as DOD’s required Joint Quarterly Readiness Review. Notwithstanding the lack of identification of specific requirements from combatant commanders, each service is taking some steps to improve stability operations capabilities, but each is using a different approach. For example, Marine Corps officials highlighted the establishment of a program to improve cultural awareness training, increased civil affairs planning in its operational headquarters, and the establishment of a Security Cooperation Training Center as key efforts to improve stability operations capabilities. 
Navy officials highlighted the service’s efforts to align its strategic plan and operations concept to support stability operations, the establishment of the Navy Expeditionary Combat Command, and the dedication of Foreign Area Officers to specific countries as their key efforts. Army officials highlighted the establishment of an office dedicated to stability operations policy and strategy, the development of Army doctrine related to stability operations, and an ongoing process to address gaps in Army stability operations capabilities and capacities. Army officials expect to approve an action plan by the end of fiscal year 2007 that is intended to provide solutions for improving the Army’s capabilities to conduct stability operations. Air Force officials emphasized the service’s use of an analytical capabilities-based planning model that has identified and begun to address specific shortfalls related to stability operations. Because of the fragmented efforts by the combatant commands to identify requirements and the different approaches taken by the services to develop capabilities, the department may not be identifying and prioritizing the most critical capabilities needed by the combatant commanders, and the Under Secretary of Defense for Policy has not been able to recommend capability priorities to the Secretary of Defense. The department recognizes the importance of successfully completing these capability assessments; in the first semiannual report on stability operations to the Secretary of Defense, the Under Secretary stated that the department has not yet defined the magnitude of DOD’s stability operations capability deficiencies and that clarifying the scope of these capability gaps continues to be a priority within the department. We identified two factors that are limiting DOD’s ability to carry out the capability gap assessment process envisioned by the Office of Policy. 
First, at the time of our review, DOD had not issued guidance or set specific time frames for the combatant commands to identify stability operations capability requirements. Joint Staff officials explained that the combatant commanders were expected to identify capability requirements based on operational plans, but DOD had not issued its 2007 planning guidance to the combatant commanders that reflects the new six-phase approach to planning previously discussed in this report. Joint Staff officials expressed concerns that if the combatant commands based their requirements on existing plans that have not been updated to reflect the new planning guidance, the requirements would not reflect the more comprehensive stability operations capabilities needed. A second factor contributing to the limited progress in completing capability gap assessments is confusion over how to define stability operations. For example, Air Force officials stated in their May 22, 2006, Stability Operations Self Assessment that the absence of a common lexicon for stability operations functions, tasks, and actions results in unnecessary confusion and uncertainty when addressing stability operations. In March 2007, they reiterated that they still consider this lack of a common lexicon an issue in identifying stability operations capabilities. Central Command and Pacific Command officials equated stability operations with activities conducted under the auspices of Theater Security Cooperation, while European Command officials stated that stability operations are what they do in every country where they have a presence. This lack of a clear and consistent definition of stability operations has caused confusion across the department about how to identify activities that are considered stability operations, and commanders have difficulty identifying the end state for which they need to plan. 
Officials with DOD’s Office of Policy have recognized that confusion exists surrounding the definition of stability operations, and stated they are taking actions to clarify it. For example, Office of Policy officials cited a revised definition of stability operations that has been incorporated into DOD’s September 2006 planning guidance discussed previously in this report, and the office is considering a more aggressive outreach program that will help DOD officials at all levels better understand the definition and application of stability operations concepts in identifying and addressing capability gaps. However, without clear guidance on how and when combatant commanders are to develop stability operations capability requirements, the combatant commanders and the military services may not be able to effectively identify and prioritize needed capabilities. Past GAO work on DOD transformation reported the advantages of using management tools, such as performance measures, to gauge performance in helping organizations successfully manage major transformation efforts. Good performance measures are an important results-oriented management tool that allows DOD to determine the extent to which individual goals contribute to progress in achieving the overall goal of increasing stability operations capability. GAO’s previous work highlighted that the elements of a performance measure should include a baseline and target; be objective, measurable, and quantifiable; and include a time frame. Clear, well-developed, and coordinated performance measures help ensure that stakeholders are held responsible and accountable for completing their tasks in a timely manner and to an agreed-upon standard. Results-oriented measures further ensure that it is not the task itself being evaluated, but progress in achieving the intended outcome. DOD has recognized the need for performance measures to evaluate its progress in achieving stability operations goals and objectives. 
Specifically, DOD Directive 3000.05 requires each organization tasked under the directive to develop measures of effectiveness to evaluate progress in meeting its goals. According to Office of Policy officials, the intent in developing measures of effectiveness was to let stakeholders take ownership in identifying the metrics and procedures for evaluating their assigned tasks. These officials also explained that as each organization develops a measure of effectiveness, the Office of Policy will review the proposed measure, provide feedback, and assist the stakeholders in refining the metrics to ensure that the measure is adequate. Policy officials expect that some measures will be quantitative, while others will be qualitative. This approach is based on the premise that the directive did not intend to place a fixed methodology on the stakeholders, would allow development of a process flexible enough to evolve with future stability operations activities and requirements, and would motivate change at the lowest level. Despite this emphasis on developing performance measures, however, as of March 2007 we found that limited progress had been made in developing measures of effectiveness because of significant confusion over how this task should be accomplished and because of minimal guidance provided by the Office of Policy. Specifically, in initial discussions with us, the Army had indicated that it was working on an Action Plan for Stability Operations but had placed the process on hold pending guidance from DOD. More recently, despite the lack of guidance, the Director of the Army’s Stability Operations Division told us that the Army is taking steps to finalize the Action Plan for Stability Operations and, once it is approved, will track all of the responsibilities outlined in DOD Directive 3000.05 through its Strategic Management System. Army officials have also established May 2007 as an objective for developing and refining the Army’s performance-based metrics. 
Air Force officials explained that they already conduct a biennial review of Air Force Concepts of Operations that produces a stability operations assessment and that the results of its 2005 review were summarized and provided to DOD. Air Force officials indicated that, in their opinion, this satisfied the requirement to develop performance measures for stability operations. As of March 2007, officials from the Navy’s Office of Strategy and Concepts explained that the Navy has begun efforts to implement a stability operations action plan that includes developing metrics and measures of effectiveness, but has put the process on hold pending metrics guidance from DOD. Similarly, the Marine Corps’s Action Plan for Stability, Security, Transition, and Reconstruction, dated February 2007, shows that the Marine Corps is also still waiting for additional guidance from DOD on developing measures of effectiveness. Within the combatant commands, Pacific Command officials explained that they were still waiting for guidance on implementing the directive from the Office of Policy and had not tasked the component commands with any implementing tasks, including developing metrics. At Central Command, a policy official told us that there had been no development of measures of effectiveness relative to the directive. In DOD headquarters, officials in the Office of Personnel and Readiness stated that they expected the development of measures of effectiveness to be problematic, for both themselves and the Office of Policy, and that they were unsure how the measures would be developed for their office. Officials from DOD’s office for stability operations stated they are aware of the confusion surrounding the development of measures of effectiveness and that in the next few months they plan to sponsor a workshop to help train individuals on developing measures of effectiveness. While these workshops can be a positive step, they will benefit only those who participate. 
Without clear departmentwide guidance on how to develop measures of effectiveness and milestones for completing them, confusion may continue to exist within the department and progress on this important management tool may be significantly hindered. Moreover, without central oversight of the process to develop measures of effectiveness, including those that address identifying and developing stability operations capabilities, the department will be limited in its overall ability to gauge progress in achieving stability operations goals and objectives. DOD is taking steps to develop more comprehensive plans related to stability operations, but it has not established adequate mechanisms to facilitate and encourage interagency participation in the development of military plans by the combatant commanders. Recent military operations in Afghanistan and Iraq, along with the overall war on terrorism, have led to changes in national security and defense strategies and an increased governmentwide emphasis on stability operations. NSPD-44 states that lead and supporting responsibilities for agencies and departments will be designated using the mechanism outlined in NSPD-1. In some cases, per NSPD-44, the National Security Council may direct the Department of State to lead the development of stabilization, security, transition, and reconstruction plans for specific countries. However, the combatant commanders also routinely develop a wide range of military plans for potential contingencies for which DOD may need to seek input from other agencies or organizations. Within the combatant commands where contingency plans are developed, the department is either beginning to establish working groups or is reaching out to U.S. embassies on an ad hoc basis to obtain interagency perspectives. 
But this approach can be cumbersome, does not facilitate interagency participation in the actual planning process, and does not include all organizations that may be able to contribute to the operation being planned. Combatant commanders have achieved limited interagency participation in the development of military plans because (1) DOD has not provided specific guidance to commanders on how to integrate planning with non-DOD organizations, (2) DOD practices inhibit the appropriate sharing of planning information with non-DOD organizations, and (3) DOD and non-DOD organizations lack an understanding of each other’s planning processes and capabilities, and have different planning cultures and capacities. As a result, the overall foundation for unity of effort in stability operations—common understanding of the purpose and concept of the operation, coordinated policies and plans, and trust and confidence between key participants—is not being achieved. As previously discussed, NSPD-44 states that the Secretary of Defense and the Secretary of State will integrate stabilization and reconstruction contingency plans with military contingency plans when relevant and appropriate, and will develop a general framework for fully coordinating stabilization and reconstruction activities and military operations at all levels where appropriate. DOD Directive 3000.05 has placed significant emphasis on the interagency nature of stability operations and the need for a coordinated approach to integrate the efforts of government and nongovernment organizations. Specifically, the directive requires the geographic combatant commanders to engage relevant U.S. 
departments and agencies, foreign governments and security forces, international organizations, nongovernment organizations, and members of the private sector in stability operations planning, training, and exercising, as appropriate, in coordination with the Chairman, Joint Chiefs of Staff, and the Under Secretary of Defense for Policy. Beyond this directive, combatant commanders also have the overall responsibility to plan for a wide range of military operations, such as potential military conflicts, other operations to stabilize fragile governments or regions, or responses to unexpected events such as the tsunami relief effort in 2005. As a result, combatant commanders now have an expanding responsibility to coordinate these planning efforts with representatives from various U.S. agencies, organizations, other governments, and the private sector. Combatant commanders develop military plans focused at three distinct, yet overlapping, levels that help commanders at each level visualize a logical arrangement of operations, allocate resources, and assign tasks. Figure 4 illustrates these levels and the type of planning that occurs in each. As illustrated in figure 4, at the strategic level, planners prepare what is known as the supported plan, which describes how a combatant commander intends to meet the national or high-level goals for his geographic area of responsibility. These plans assign responsibilities for specific strategic goals to other organizations and subordinate commands, but do not provide the details for how these goals will be accomplished. Generally, component commands (Army, Navy, Marine Corps, and Air Force forces assigned to the combatant commander) prepare operational- and tactical-level plans, which are intended to provide an increasing level of detail and fidelity to the plans and are referred to as supporting plans. 
It is at this level of planning that planners develop specific details about actions that will be taken and how resources will be applied to achieve the objectives outlined in the strategic-level plan. At the operational and tactical levels, military planners need knowledge of the resources they can rely on from other agencies for conducting operations, as well as of who will be on the ground that they can coordinate with for information and the integration of activities. To achieve a fully integrated strategic, operational, or tactical plan, DOD planners require increased knowledge of the roles, responsibilities, and capabilities that all agencies and organizations can contribute to stabilization efforts. DOD policy officials responsible for developing planning guidance have stated that interagency planning in military operations can no longer be an afterthought, but is critical to realizing U.S. interests in future conflicts. We found almost universal agreement among all organizations included in our review that there needs to be more interagency coordination in planning, and that these coordination requirements differ at the strategic, operational, and tactical levels of planning. For example, officials agreed that at the strategic level, the many organizations that can play a key role in stability operations should be present to represent their respective organizations, and that those representatives can help facilitate a mutual understanding of the overall contributions, capabilities, and capacity of each organization. These representatives can also develop a better understanding of DOD and the process used to develop military plans. At the operational and tactical levels, DOD officials agreed that, ideally, they need consistent access to interagency personnel from other government agencies who have been authorized by their organizations to establish coordinating relationships with the military.
Specifically, European Command officials commented that they would benefit from subject matter experts from non-DOD organizations at the operational level who can (1) participate in the planning process and (2) increase the probability that planned contributions from non-DOD organizations in stability operations can actually be provided. Similarly, Pacific Command officials stated that to facilitate interagency coordination at the operational and tactical levels, several issues, such as liaison authority, willingness on the part of other agencies to work with DOD, and coordinating mechanisms, must be addressed. The department has also recognized that nongovernmental organizations should participate in DOD's planning process, where appropriate. DOD has taken steps to establish interagency coordination mechanisms and to improve interagency participation in its planning efforts, but it has not achieved consistent interagency representation or participation at the strategic, operational, and tactical levels of planning. At the strategic level, DOD's primary mechanism for interagency coordination within each combatant command is the Joint Interagency Coordination Group (JIACG). As shown in table 2, the size and composition of these groups varied within each combatant command we visited, but in general, they have been composed of a limited number of representatives from State, USAID, the Department of the Treasury, the Drug Enforcement Administration, and the Federal Bureau of Investigation. The organization and functions of the JIACGs are evolving. At the time of our review, each JIACG we examined had an overall function to improve general coordination between DOD and the agencies represented in the group and was not intended to be actively involved in DOD's planning efforts.
At each command we visited, we found JIACG participants served primarily as advisors and liaisons between DOD and their parent organizations, had limited planning experience and training, and were not consistently engaged in DOD's planning process. However, officials commented that the role of the JIACG was changing. Specifically, Central Command officials expected that the JIACG within their command would begin to assume a more active role in the planning process, but they did not have specific details on how or when this would occur. At the Pacific Command, the JIACG was being refocused by the commander from coordinating counterterrorism activities to more of a "full spectrum" approach that would include stability operations activities. At the European Command, officials also expected the focus of the JIACG to expand from counterterrorism to a fuller spectrum of operations, which, in their opinion, could include participating in the planning process. Below the strategic level, at the operational and tactical levels, some service component commands are reaching out to country teams in embassies within their areas of responsibility on an ad hoc basis to obtain interagency perspectives during their planning efforts. But this approach can be cumbersome because of the large number of countries that may be affected by a regional plan. Generally, component command officials we contacted agreed that the primary mechanism available to them for interagency coordination was establishing personal relationships and direct dealings with country teams and other embassy personnel. For example, according to Naval Forces Europe officials, the command is developing new contingency plans, and one of its first steps in this effort is to identify the key participants and resources available within its area of operations and to develop individual relationships that will help it accomplish more.
In Central Command, both the Army and Navy component commands commented that they work directly with the embassies in the area of operations in order to interface with other agencies. Combatant commanders have achieved limited interagency participation in the development of military plans because (1) DOD has not provided specific guidance to commanders on how to integrate planning with non-DOD organizations, (2) DOD practices inhibit the appropriate sharing of planning information with non-DOD organizations, and (3) DOD and non-DOD organizations lack an understanding of each other's planning processes and capabilities, and non-DOD organizations have limited capacity to fully engage in DOD's planning efforts. At each combatant command we visited, planners acknowledged the requirement to include interagency considerations in planning, as required by recent DOD policy. But command officials stated they did not have any guidance on how to meet the requirement, or on the specific mechanisms that would facilitate interagency planning at the strategic, operational, and tactical levels. For example, numerous DOD publications and documents discuss the JIACG organizations at each combatant command, but there is no published DOD guidance that establishes policy governing the JIACGs or that outlines the responsibilities for establishing and managing them. Officials from DOD and State also commented that the JIACG organizations were not intended to be a coordinating body for military planning, and questioned whether this was an appropriate mechanism for integrating the planning efforts between DOD and other agencies. The second factor inhibiting interagency participation is that DOD does not have a process in place to facilitate the sharing of planning information with non-DOD agencies, when appropriate, early in the planning process without specific approval from the Secretary of Defense.
Specifically, DOD policy officials, including the Deputy Assistant Secretary of Defense for Stability Operations, stated that it is the department's policy not to share DOD contingency plans with agencies or offices outside of DOD unless directed to do so by the Secretary of Defense, who determines if they have a need to know. In addition, DOD's planning policies and procedures state that a combatant commander, with Secretary of Defense approval, may present interagency aspects of his plan to the Joint Staff during the plan approval process for transmittal to the National Security Council for interagency staffing and plan development. This hierarchical approach limits interagency participation as plans are developed by the combatant commands at the strategic, operational, and tactical levels. State officials also told us that DOD's current process for sharing planning information limits non-DOD participation in the development of military plans, and that inviting interagency participation only after the plans have been formulated is a significant obstacle to achieving a unified government approach in those plans. In their opinion, it is critical to include interagency participation in the early stages of plan development at the combatant commands. Additionally, according to combatant command officials, non-DOD personnel do not always have the necessary security clearances required by DOD for access to the department's planning documents or participation in planning sessions. In its recent interim report to the Secretary of Defense on DOD Directive 3000.05, DOD acknowledged the current challenges in information sharing and predicted that it will continue to face serious problems concerning the release and sharing of information among DOD, other U.S. government agencies, international partners, and other nongovernmental organizations.
In the report, DOD attributed information-sharing issues to restrictions based on current information-sharing policies and emphasized that, to improve information-sharing capabilities, senior leadership direction is required. The third factor limiting the effectiveness of interagency coordination efforts is that DOD and non-DOD organizations lack an understanding of each other's planning processes and capabilities, and have different planning cultures and capacities. DOD and non-DOD officials repeatedly emphasized in their discussions with us the cultural and capacity challenges that the two communities face. Within DOD, officials discussed a lack of formally trained DOD planners within the combatant commands. For example, only two of the six planners at U.S. Army Europe were formally trained, and another official noted that it takes a planner about a year on the job to become proficient in what is generally a 2-year assignment. Even if combatant command planners are experienced, they may lack knowledge of interagency processes and capabilities. For example, a Pacific Command planner stated that they had to guess about interagency capabilities during planning. Senior Pacific Command officials cited a need to educate DOD planners on U.S. government agencies' strengths and weaknesses and on where expectations may exceed an agency's capabilities. Similarly, European Command JIACG officials commented that DOD needs to institutionalize interagency education at its schools for professional planners, and a European Command planner stated that it is essential to understand what the various non-DOD agencies do and what they need to know about DOD capabilities. Our analysis of DOD's lessons-learned databases from current and past military operations provided details that specifically addressed the training differences between DOD and non-DOD agencies and the limited knowledge of each other's capabilities.
For example, the databases contain lessons learned such as the following: (1) DOD needs to develop knowledge of other agencies and the capabilities they bring to operations, (2) significant improvements could be made in military education by the development of interagency programs of instruction, and (3) DOD should work to aggressively include State in the process of project development. Furthermore, DOD officials described what they believe is a significant difference in the planning cultures of DOD and non-DOD organizations. They stated that DOD has a robust planning culture that includes extensive training programs, significant resources, dedicated personnel, and career positions. Conversely, officials from the Joint Staff, the Office of Policy, Joint Forces Command, and the combatant commands explained that many agencies outside of DOD do not appear to have a similar planning culture and do not appear to embrace the detailed planning approach taken by DOD. In addition, these officials repeatedly stated that their efforts to include non-DOD organizations in planning and exercise efforts have been stymied by the limited number of personnel those agencies have available to participate. DOD has attempted to mitigate some of these challenges by contributing its planning resources to projects such as the development of a draft joint planning concept with State, offering DOD personnel to provide training to non-DOD organizations, and encouraging non-DOD agencies to participate in exercise planning. We did not examine the planning capability and capacity of non-DOD organizations in this review, but we do have ongoing work that is examining this issue in more detail. The difference in planning between DOD and other U.S. departments and agencies was also highlighted in the first semiannual report to the Secretary of Defense on stability operations. In that report, the Under Secretary of Defense for Policy states, "The difference between DOD and other U.S.
Departments and Agencies is that DOD plans and prepares for current and future operations and other U.S. Departments and Agencies plan and prepare for current operations. This is reflected in the different planning processes across the U.S. Government and the relative spending on training, education, and exercises." Officials from State offered similar perspectives on the planning capabilities and capacities of non-DOD organizations. They stated that State planning is different from military planning, with State more focused on current operations and less focused on the wide range of potential contingency operations that DOD is required to plan for. As a result, State does not allocate planning resources in the same way as DOD, and therefore does not have a large pool of planners that can be deployed to the combatant commands to engage in DOD's planning process. These officials agreed, however, that participating in DOD's planning efforts as plans are being formulated is necessary to achieve a unified government approach in the military plans, and suggested alternative methods to accomplish this goal. For example, State officials discussed a current initiative to test methods to "virtually" include State planners in a DOD contingency planning effort in the European Command using electronic communication tools, and stated that State personnel could potentially participate in a large number of planning efforts if this approach were expanded. State officials also suggested that DOD policies may need to be revised to authorize combatant commanders to reach back directly to State and other government agencies as plans are being developed, rather than through the hierarchical process involving the Joint Staff and the National Security Council as previously discussed.
Without clear guidance to the combatant commanders on how to establish adequate mechanisms to facilitate and encourage interagency participation in planning at the strategic, operational, and tactical levels of planning, a process to share planning information as plans are being developed, and methods to orient and include professional planners from key organizations in DOD's planning process, the contributions and capabilities of these organizations may not be fully integrated into DOD's plans, and a unified government approach may not be achieved. DOD planners are not consistently using lessons learned from past operations as they develop future contingency plans. NSPD-44 and DOD policies highlight the importance of incorporating lessons learned into operational planning. Lessons learned from current and past operations are being captured and incorporated into various databases, but our analysis indicates that DOD planners are not using this information on a consistent basis as plans are revised or developed. Three factors contribute to this inconsistent use of lessons learned in planning: (1) DOD's guidance for incorporating lessons learned into plans is outdated and does not specifically require planners to include lessons learned in the planning process, (2) accessing and searching lessons-learned databases is cumbersome, and (3) the planning review process does not evaluate the extent to which lessons learned are incorporated into specific plans. As a result, DOD is not fully utilizing the results of the lessons-learned systems and may repeat past mistakes. NSPD-44 and DOD guidance stress the importance of incorporating lessons learned into operations and planning.
Furthermore, the recently released Joint Operating Concept for stability operations envisions that the Joint Force will implement a continuous learning process that incorporates lessons learned into ongoing and future operations through constant observation, assessment, application, and adaptation of tactics, techniques, and procedures. The Joint Operation Planning and Execution System manual, which provides planners with the step-by-step process for planning joint operations, states that a regular review of lessons-learned information can alert planners to known pitfalls and to successful and innovative ideas. Prior GAO work on DOD's lessons learned noted that effective guidance and sharing of lessons learned are key tools to institutionalize and facilitate efficient operations, and that failure to utilize lessons heightens the risk of repeating past mistakes and being unable to build on the efficiencies others have developed during past operations. DOD has established comprehensive joint lessons-learned programs at all levels within the department, and lessons learned from exercises and operations are being captured. The department's Joint Lessons Learned Program is a federation of separate lessons-learned organizations embedded within the Joint Staff, the combatant commands, the services, and the Combat Support Agencies that focus upon capturing information, data, and lessons based upon each command's priorities. Each lessons-learned organization within this program has developed its own processes, systems, and information products for capturing, storing, and retrieving lessons and observations based upon each organization's requirements and resources. The various organizations in the Joint Lessons Learned Program focus on capturing lessons learned at the strategic, operational, and tactical levels. These lessons tend to be oriented toward a specific customer and are disseminated through a variety of different products.
For example, the services tend to collect tactical- and operational-level lessons that they use to address command- and service-specific issues for resolution. Similarly, the combatant commands have each developed their own theater-specific command-level lessons programs related to joint, interagency, and multinational matters and other matters involving interoperability. In addition, each organization tailors its lessons-learned programs to meet the individual command's requirements and available resources. For example, the U.S. Pacific Command's program is managed by one civilian contractor, focuses its efforts on issues at the senior command leadership level, and hosts a web-based repository that contains approximately 145 lessons documents. In contrast, the Center for Army Lessons Learned has 179 people on staff; focuses on all levels within the Army, from the individual soldier to the most senior leaders; uses a combination of active collection techniques, such as sending out teams to interview soldiers and observe operations; and has an electronic repository consisting of approximately 157,000 documents. Our lessons-learned analysis provides insights into the types of lessons available to DOD planners and the volume of information that could be useful to improve future stability operations planning. We grouped 1,074 lessons into 14 themes that reflect the full spectrum of strategic-level issues surrounding stability operations, such as cultural sensitivity, language skills, intelligence, communications systems, and reconstruction activities. For example, the information in one theme we developed related to DOD coordination and planning with other U.S. agencies and non-U.S.
government organizations highlights issues such as the need for (1) the military to work more closely with other agencies during stability operations, (2) DOD to develop knowledge of other agencies and the capabilities they can contribute, and (3) commanders to ensure that military sectors during operations correspond with civil geopolitical boundaries. The information in another of our themes, discussing civil-military operations, highlights issues such as steps needed to improve information operations and how to address cultural differences during information operations to reach specific audiences. A comprehensive listing of our themes and an explanation of each can be found in appendix II. Despite the robust lessons-learned gathering process in place, we found that DOD planners at the combatant and component commands in our review did not consistently incorporate lessons as plans were developed or revised. For example, two of the combatant commands that we visited stated that they did not routinely use lessons as plans were developed. Similarly, we found a range of practices in how lessons learned were used in the planning process at the component commands we visited. For example, one Central Command component stated that lessons learned were part of the component command's planning process, but a Pacific Command component commented that it generally did not utilize lessons learned as it developed plans. When we discussed the limited use of lessons learned with officials from the Office of the Under Secretary of Defense for Policy, they stated that planners are generally aware of the need to check lessons learned as they develop plans. However, the officials acknowledged that there are barriers to the use of lessons learned, that the existing lessons-learned systems need updating, and that questions exist about whether the information provided by the current systems is adequate.
One official noted that the Office of Policy is developing a new Center for Complex Operations, which is envisioned to facilitate the use of lessons by acting as a clearinghouse for stability operations information. The Center is still in the planning phase, and we were told that funding has been requested in the fiscal year 2007 supplemental budget request and in the fiscal year 2008 budget to implement the plan. We identified three factors that contribute to this inconsistent use of lessons learned in the planning process. First, the guidance regarding lessons learned in the Joint Staff's manual for planning is outdated: the relevant section of the manual has not been updated since July 2000 and does not specifically require planners to include lessons learned in the planning process. Specifically, this guidance states that the Joint Universal Lessons Learned System should be contacted early in the planning process and periodically thereafter to obtain specific practical lessons in all areas of planning and execution based on actual operation and exercise occurrences. However, this system does not exist and has not been supported since 1997, nor has the guidance been updated to reference an existing system that planners can access for joint lessons learned. The second factor contributing to limited use of lessons learned in the planning process is that accessing and searching lessons-learned databases is cumbersome. For example, to conduct our analysis of DOD lessons learned, we used five databases—four managed by each of the services, and one managed by the Joint Center for Operational Analysis. To obtain lessons-learned information from these sources, we had to separately access each database; become familiar with each system's functionality and search engines; repeat the same searches in each site for stability operations–related terms; and review the results to find relevant lessons. However, knowing how to navigate and search each of the lessons-learned systems is not enough.
We also had to familiarize ourselves with and sort through the multitude of products generated to find lessons that were applicable to our analysis. Planners we contacted also told us they considered the databases difficult and time-consuming to use. One combatant command official described the magnitude of the challenge by noting that there is so much information within the program that the biggest difficulty is turning the information into usable knowledge. Additionally, the Joint Staff has acknowledged that the current system is inefficient and of limited effectiveness in sharing lessons-learned data. In an effort to address these issues, DOD has recently initiated an effort to develop a Joint Lessons Learned Information System, which is intended to standardize the collection, management, dissemination, and tracking of observations and lessons. The department is in the early stages of developing this system and plans for the system to establish interoperable lessons-learned databases that can be searched with an easy-to-use search engine. The Joint Lessons Learned Information System development strategy includes non-DOD agencies and, eventually, non-U.S. partners. However, while Joint Staff officials recognize the need for stakeholder input to avoid continued inefficiency and limited effectiveness in sharing lessons learned, they do not plan to include non-DOD organizations until the later stages of the program's development. The third factor affecting the use of lessons learned is that the planning review process does not evaluate the extent to which lessons learned are incorporated into specific plans. During discussions with planners at the various commands, we found no evidence of a formal mechanism to verify that lessons were considered in plan development. Furthermore, we found conflicting views as to the need for a formal requirement.
For example, one combatant command planner believed that, despite the lack of a formal mechanism, the command's vetting process for plans ensured that lessons would be incorporated, while at another combatant command a planner stated that the mechanism for ensuring that lessons are used in planning is broken because there is no formal requirement to utilize lessons in plan development. DOD has invested substantial resources to develop systems that capture lessons from exercises, experiments, and operations, with the intent of using these lessons to improve efficiency. However, in the case of planning, the department has not developed mechanisms to ensure that planners are taking advantage of this resource. As a result, DOD heightens its risk of either repeating past mistakes or being unable to build on the efficiencies developed during past operations as it plans for future operations. DOD has a critical role in supporting a new national policy to improve stability operations capabilities and to achieve a more unified governmentwide approach to this demanding and important mission. Recent operations in Bosnia and Kosovo, along with current operations in Afghanistan and Iraq, provide daily reminders of how complex and difficult these missions are. The department has developed an approach to improve its ability to execute stability operations, but it has achieved limited progress in two key areas—identifying needed capabilities and developing measures of effectiveness—that are critical to successfully executing this approach. Without clear guidance on how and when combatant commanders are to develop stability operations capability requirements, the capabilities needed to conduct stability operations may not be fully developed, or current service efforts to enhance capabilities may not be addressing the most critical needs of the commanders.
Similarly, without clear departmentwide guidance on how to develop measures of effectiveness and milestones for completing them, confusion may continue to exist within the department, and progress on this important management tool may be significantly hindered. DOD has recognized the need to achieve greater interagency participation in the development of military plans, but it has not established an effective mechanism to accomplish this goal. A governmentwide approach to stability operations is dependent upon an integrated planning effort by all organizations involved in them. Integrated planning can help fully leverage the capabilities, contributions, and capacity of each organization, and increase the potential for successful operations. The challenge now facing the department is how to modify its planning approach to better integrate non-DOD organizations into all levels—strategic, operational, and tactical—of planning and to support State as the lead agency in stability operations planning. Without improved guidance to military commanders on the mechanisms that are needed to facilitate interagency planning, an approach to appropriately share planning information with non-DOD organizations as plans are developed, and steps for overcoming differences in planning culture, training, and capacities among the affected agencies, integrated interagency planning for stability operations may continue to be stymied. The consideration of lessons learned from past operations as new plans are developed is not only a requirement stipulated by new stability operations guidance, but also a requisite step toward reducing the potential that past mistakes will be repeated in future operations.
Without clear and complete guidance for planners, steps to increase the potential that information system improvements will facilitate sharing of lessons learned both within DOD and among all organizations that will participate in planning for stability operations, and a focus on lessons learned as plans are reviewed, the potential gains that can be achieved through systematic consideration of lessons learned as future plans are developed may not be realized. To meet the goals of identifying and developing stability operations capabilities and of developing tools to evaluate progress in achieving these goals, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Policy to take the following two actions: Provide comprehensive guidance, including a clear methodology and time frames for completion, to the combatant commanders and the services on how to identify and address stability operations capability gaps. Provide comprehensive guidance to DOD organizations on how to develop measures of effectiveness as directed by DOD Directive 3000.05, including those measures related to identifying and developing stability operations capabilities. To achieve greater interagency participation in the development of military plans that include stability operations, and to increase the potential for unity of effort as those operations are executed, we recommend that the Secretary of Defense, in coordination with the Secretary of State, take the following three actions: Provide specific implementation guidance to combatant and component commanders on the mechanisms that are needed to facilitate and encourage interagency participation in the development of military plans that include stability operations–related activities. Develop a process to share planning information with interagency representatives early in the planning process. Develop an approach to overcome differences in planning culture, training, and capacities among the affected agencies.
To more fully incorporate lessons learned in the planning process, we recommend that the Secretary of Defense direct the Chairman of the Joint Chiefs of Staff, working with the Under Secretary of Defense for Policy, to take the following actions: Update the current planning guidance to direct military planners to include lessons learned as they develop plans, and require that the plan review process include a step to verify that lessons learned have been considered and adopted as appropriate. Include non-DOD stakeholders in the development of the Joint Lessons Learned Information System at an earlier point than currently planned. Because it is unclear what specific steps, if any, DOD plans to take to implement our recommendations, the Congress should consider requiring the Secretary of Defense to develop an action plan and report annually to the Senate Committee on Armed Services and the House Committee on Armed Services on the specific steps being taken and the current status of its efforts to (1) identify and prioritize needed stability operations capabilities, (2) develop measures of effectiveness to evaluate progress in achieving these capabilities, (3) achieve greater interagency participation in the development of military plans, and (4) fully incorporate lessons learned in the planning process. The Secretary's report should also identify challenges to achieving an integrated, interagency approach to stability operations, and potential solutions for mitigating those challenges. In written comments on a draft of this report, DOD partially agreed with our eight recommendations but did not discuss what specific steps, if any, it plans to take to implement our recommendations. (DOD's comments appear in their entirety in app. III.) State was also afforded an opportunity to comment on this report, but declined to do so.
In its written comments, DOD highlighted traditional DOD methodologies and approaches to developing capabilities, establishing measures of effectiveness, coordinating with other agencies, and incorporating lessons learned that it believes are adequate to address our recommendations. Although DOD is making progress in achieving a greater focus on stability operations through its new directive, our report notes it has made limited progress in certain areas, such as establishing measures of effectiveness, due to the limited guidance provided to DOD components. As a result, we continue to believe our recommendations are warranted and that DOD should take specific steps to address them. Because it is unclear what specific steps, if any, DOD plans to take to implement our recommendations, we have added a matter for congressional consideration suggesting that the Congress require the Secretary of Defense to develop an action plan and report annually on the specific steps being taken to address our recommendations and the current status of its efforts. The report should also identify challenges to achieving an integrated, interagency approach to stability operations, and potential solutions for mitigating those challenges. DOD provided three overall comments on the report. First, DOD commented that GAO began the field work for this report in October 2005, one month prior to the issuance of DOD Directive 3000.05, and observed that much of our field work was therefore conducted prior to activities DOD undertook to implement the directive. The department is mistaken in this observation. In October 2005, we held our entrance conference with DOD officials, but conducted the majority of our field work from January 2006 through March 2007.
We believe the timing of our field work enabled us to focus on the approach DOD was taking to implement the directive, observe how key organizations began implementing this approach over a 1-year period, and highlight impediments that may impair DOD’s ability to achieve the results intended by the directive—improved stability operations capabilities. Therefore, we believe our work and related recommendations are particularly relevant and important because they address systemic issues associated with DOD’s approach and could assist DOD organizations tasked with implementing the new directive. Second, DOD commented that our report is directed exclusively at DOD; that stability, security, transition, and reconstruction activities are inherently interagency in nature; and that DOD can only implement recommendations under its purview. While we agree that stability operations are inherently interagency in nature, we disagree that our work is focused exclusively on DOD. Specifically, our audit work included discussions with State and USAID officials in Washington, D.C., and at each of the combatant commands included in our review to gain their views and perspectives. We have also included recommendations to improve interagency participation in the development of military plans that are directed to the Secretary of Defense because the military planning process is conducted under the purview of the Secretary of Defense. However, acknowledging that interagency participation in DOD planning cannot be forced, we are recommending the Secretary of Defense coordinate with the Secretary of State to implement these recommendations. Furthermore, as we discussed with DOD officials during the course of our review and stated in this report, we have other work underway to evaluate State’s efforts to lead and coordinate stabilization operations in conjunction with other U.S. agencies, and plan to report on those issues separately. 
Third, DOD commented that the identification and development of stability, security, transition, and reconstruction operations capabilities are not so different from other DOD capabilities that they require a new or separate methodology to identify and develop military capabilities and plans. We disagree. As we discuss in this report, DOD has made limited progress in identifying and prioritizing needed capabilities, the identification of stability operations requirements was occurring in a fragmented manner, and each service is using a different approach to improve stability operations capabilities. To date, the Under Secretary of Defense for Policy has not identified and prioritized needed stability operations capabilities and military plans do not fully reflect an integrated, interagency approach to stability operations. Therefore, we continue to believe that our recommendations in these areas are still warranted, as discussed below. Regarding our recommendation that DOD provide comprehensive guidance, including a clear methodology and time frames for completion, to combatant commanders and the services on how to identify and address stability operations capability gaps, DOD stated that existing, mandated capability assessment methodologies already effectively address stability, security, transition, and reconstruction operations capability needs at the combatant commands and the services. It also stated that under this process, the combatant commands assess and communicate to DOD the capabilities required to conduct these missions just as they do for other assigned missions. However, as discussed in this report, we found that the combatant commands included in our review had made limited progress in identifying stability operations requirements because DOD had not issued guidance or set specific time frames to complete this task, and there was confusion over how to define stability operations. 
During the course of our work, DOD refined the definition of stability operations, which was a positive step, but has not clarified the guidance or set specific time frames for identifying stability operations requirements. Because combatant command officials indicated to us that the absence of guidance and time frames was a significant contributor to the lack of progress in developing requirements, we believe our recommendation would assist the department in accomplishing this task. In response to our recommendation that DOD provide comprehensive guidance to DOD organizations on how to develop measures of effectiveness, the department stated that it already develops measures of effectiveness in general, and that a special process is not needed for stability operations. We believe this response is not consistent with DOD Directive 3000.05, which requires each organization tasked under the directive to develop measures of effectiveness that evaluate progress in meeting their respective goals listed in the directive. In addition, as discussed in this report, and as acknowledged by officials from the Office of the Under Secretary of Defense (Policy) in a progress report to the Secretary of Defense, the department has made limited progress in developing measures of effectiveness related to stability operations. We found this limited progress was caused by significant confusion over how this task should be accomplished and by the minimal guidance provided by that office. The department recognizes this confusion exists and, as discussed in this report, plans to establish workshops to assist organizations in these efforts. We believe this is a positive step that should be complemented with improved guidance that would be available to all organizations tasked with this responsibility, and we therefore continue to believe our recommendation is appropriate and necessary.
In response to our recommendations that DOD coordinate with State and provide specific implementation guidance to the combatant and component commanders on the mechanisms needed to facilitate and encourage interagency participation in the development of military plans, and that the two departments develop a process to share planning information, DOD provided the same response to both recommendations. The department believes that National Security Presidential Directive 44 should, by itself, provide sufficient direction on the structures needed and a process to share planning information. The department also stated it would continue to include other agencies in planning and exercising for stability operations. We believe the department’s response is inadequate because NSPD-44 is a high-level directive that sets forth goals for improved interagency participation in stability operations, but does not contain details on mechanisms to achieve those goals. During the course of our review we received consistent comments from DOD and State officials that interagency participation in DOD planning is clearly needed, but that it is very unclear how to accomplish this goal. Therefore, as detailed in this report, we found that interagency participation in the development of military plans at the strategic, operational, and tactical levels was very limited in every command included in our review, in part because DOD’s guidance did not provide details on how to engage relevant agencies in planning or on the specific mechanisms that would facilitate interagency planning, and because DOD practices inhibit the appropriate sharing of planning information. Combatant command officials cited significant limitations in current coordinating groups, and various ad hoc methods were in place to gain interagency perspectives on DOD planning efforts.
State officials were concerned that DOD practices limit the appropriate sharing of DOD planning information as plans are developed, and that State input therefore had minimal impact as plans were being constructed. These fundamental and systemic issues will not be resolved with the guidance provided by NSPD-44. We continue to believe that systemic solutions are needed and can be achieved with improved guidance and more effective processes to appropriately share planning information with interagency representatives. In response to our recommendation that DOD, in coordination with State, develop an approach to overcome differences in planning culture, training, and capacities among the affected agencies, DOD stated that it will continue to work to understand and accommodate differences in these areas, offer non-DOD organizations opportunities to participate in DOD training courses, and detail DOD personnel to other agencies. We believe these are positive steps and agree DOD should continue to pursue them. However, our work indicates that these measures are not adequate to fully address the magnitude of differences in planning culture and capacity between DOD and other agencies. As discussed in this report, State officials believe that new and innovative practices need to be identified and pursued, such as “virtual” collaborative planning between DOD and State. Therefore, we continue to believe that our recommendation for DOD and State to work together to develop more comprehensive and innovative solutions to overcome these differences is an important and necessary step to take.
In response to our recommendations that DOD update its current planning guidance to direct military planners to include lessons learned as they develop plans, and to require that the plan review process include a step to verify that lessons learned have been considered and adopted as appropriate, DOD stated that the current planning methodology takes into account lessons learned when constructing or modifying a plan. As discussed in our report, this is not always the case. In the course of our field work, we found sporadic use of lessons learned in the planning process and a lack of formal guidance directing consideration of lessons learned both in constructing and in reviewing plans. According to DOD, taking lessons learned into account during planning is at the heart of all effective military (or nonmilitary) planning. However, the Joint Staff’s manual on the Joint Operation Planning and Execution System encourages, but does not direct, planners to review lessons learned as they develop plans. We agree that lessons learned are being used by planners, but inconsistently. As a result, we believe that our recommendations should be implemented in order to increase the potential that lessons are actually incorporated into plans as appropriate. In response to our recommendation that DOD include non-DOD stakeholders in the development of the Joint Lessons Learned Information System at an earlier point than currently planned, DOD agreed to invite stakeholders to participate in the system at an earlier stage, but expressed concerns that these stakeholders face shortfalls in capacity and resources and therefore cannot ensure their interactive participation. We believe this is a positive step and responsive to our recommendation. We are sending copies of this report to the Chairmen and Ranking Minority Members, Subcommittee on National Security and Foreign Affairs, Committee on Oversight and Government Reform.
We are also sending a copy to the Secretary of Defense, the Secretary of State, the Office of the Assistant Secretary of Defense for Special Operations and Low Intensity Conflict, the Office of the Joint Chiefs of Staff, and officials in the U.S. European Command, U.S. Central Command, and U.S. Pacific Command. We will also make copies available to other interested parties upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-4402 or by e-mail at stlaurentj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. To evaluate the Department of Defense’s approach to improving stability operations, DOD’s identification of stability operations capabilities, and its development of performance measures, we obtained and analyzed DOD Directive 3000.05, National Security Presidential Directive 44, the Quadrennial Defense Review, the Building Partnership Capacity Roadmap, the Military Support to Stabilization, Security, Transition, and Reconstruction Operations Joint Operating Concept, and the Defense Science Board studies on Institutionalizing Stability Operations within DOD. We interviewed current and former officials at the Office of the Under Secretary of Defense for Policy, the Joint Staff and the services, three regional combatant commands (European Command, Pacific Command, and Central Command), and U.S. Joint Forces Command.
In these interviews we reviewed relevant information and discussed implementing guidance for completing responsibilities outlined in the Directive, the interviewees’ understanding of their roles and responsibilities in completing assigned tasks, progress in implementing the Directive, challenges that have been encountered, and input provided for the first report to the Secretary of Defense on implementing the Directive. Finally, we reviewed the first report to the Secretary of Defense and discussed the report’s findings with officials within the Office of the Under Secretary for Policy. To identify the extent to which DOD is planning for stability operations and whether the department’s planning mechanisms encourage and facilitate consideration of non-DOD capabilities, we reviewed and analyzed NSPD-44, DOD Directive 3000.05, joint planning guidance and manuals, the Quadrennial Defense Review, the Building Partnership Capacity Roadmap, and combatant command processes. We interviewed officials at the Department of State’s Office of the Coordinator for Reconstruction and Stabilization, the Bureau of Political Military Affairs, and the United States Agency for International Development to obtain other agencies’ perspectives regarding DOD’s planning process and the inclusion of non-DOD perspectives in contingency plans. To understand DOD’s planning process, mechanisms for interagency involvement in planning, and impediments to interagency coordination, we met with representatives from the Office of the Under Secretary of Defense for Policy as well as planners from three regional combatant commands, which included the Pacific, European, and Central commands, members of each combatant command’s Joint Interagency Coordination Group, and fourteen combatant command component commands responsible for contingency operation planning. 
We also reviewed examples of interagency coordination contingency planning documents to gain an understanding of the level of detail to which the commands planned coordination efforts. We did not, however, assess the extent to which these roles and responsibilities, including those of DOD, are appropriate. Our review did not include the planning for ongoing operations in Iraq and Afghanistan. DOD’s contingency plans are classified documents, and a complete review of them was beyond the scope of this audit; as a result, we did not develop a comprehensive list of documents from which to draw a representative sample of contingency planning documents related to interagency coordination. However, we worked with combatant command officials to identify examples of planning documents related to interagency coordination and the level of detail to which the commands planned coordination with other agencies. We did not include in our review any current or planned coordination between DOD and non-U.S.-government organizations, foreign governments, or international organizations. To determine the extent to which DOD planners are applying lessons learned from past operations and exercises, we reviewed relevant DOD guidance and discussed with DOD officials their consideration of lessons learned during planning. In order to understand the requirements for utilizing lessons learned in the planning process and the purpose and scope of the Joint Lessons Learned Program, we analyzed DOD’s planning guidance and manuals, lessons learned instructions for the Joint Lessons Learned Program, and the services’ lessons learned guidance. To assess the type and extent of strategic stability operations lessons learned available, we identified organizations, both within and outside DOD, that produced studies or reports that included lessons learned relevant to stability operations.
To identify strategic-level lessons within DOD’s Joint Lessons Learned Program, we obtained access to the four armed services’ lessons-learned databases (Army, Navy, Marine Corps, and Air Force) and to the Joint Center for Operational Analysis, and obtained stability operations studies from the Defense Science Board. In order to identify relevant non-DOD organizations conducting lessons-learned research, we contacted individuals identified as subject matter experts in stability operations and asked them to identify non-DOD agencies, recognized as leaders in the field, that published reports and studies regarding stability operations. In this manner, several non-DOD organizations were identified, including the Center for Strategic and International Studies and the United States Institute of Peace. After obtaining search results from the DOD lessons-learned databases and non-DOD organizations, we reviewed the materials and selected analytical products for further examination based upon whether the report or study included original data collection and analysis related to the conduct of stability and reconstruction activities in Operations Enduring Freedom, Iraqi Freedom, or the operations of the Joint Task Force–Horn of Africa. We also excluded reports and analysis focused primarily on combat operations, including tactics, techniques, and procedures; after-action reports; and handbooks. We reviewed over 200 reports or studies, and found 38 documents that met these criteria. We entered all of the individual lessons and observations from the 38 reports into a database, resulting in over 3,500 individual lessons and observations. Two GAO analysts independently reviewed the individual lessons and observations using the following criteria for inclusion. We included lessons related to: U.S. forces performing or supporting local governance functions in areas such as health care, utilities, infrastructure, and law enforcement; and U.S.
forces interacting with local civil authorities to enhance the viability of these authorities and strengthen their capacity to provide basic services to the local population. Following the independent review, the two analysts compared their individual results and, when they could not reach agreement, a third independent reviewer decided upon the inclusion or exclusion of the lesson. This analysis resulted in 1,074 individual lessons that met GAO’s criteria, which we reviewed for commonalities and from which we developed our 14 major themes. After developing the themes, we categorized each lesson or observation, by consensus, into one or more categories based upon the content of the lesson. We used these themes and our knowledge of the lessons-learned systems and guidance as a basis for discussions with combatant command and component command planners regarding the use of lessons learned in the planning process. We recognize that this analysis is not based upon an exhaustive review of all reports and studies on the subject of stability operations. We conducted our review from October 2005 through March 2007 in accordance with generally accepted government auditing standards. Listed below are the 14 major themes that we developed after reviewing and categorizing the 1,074 lessons learned. We used our analysis to provide insight into the types of stability operations lessons available to planners and to facilitate our discussions with the Department of Defense. Our coding methodology often resulted in a lesson falling into more than one category based upon the content of the lesson. Furthermore, several categories, such as Civil Military Operations and Provisional Reconstruction Teams, were considered to be functional categories, or topical areas, and the lessons were often included in another theme. The first column lists the theme GAO developed. The second column provides a general description of the types of lessons included within the theme.
The third column lists the total number of lessons coded into each theme. Our analytical methodology was developed to provide insight into the types of lessons available and does not imply a ranking of themes in terms of importance or critical needs. A detailed discussion of our methodology is included in appendix I.

Cultural sensitivity and awareness as it pertains to U.S.-to-host nation and host nation-to-U.S. engagement before and during deployments. Training of U.S. forces and the use of interpreters.

Functional category related to lessons concerning psychological operations, civil affairs, public affairs, and information operations, which were viewed as included within civil military operations. (Lessons in this category are often included with one of the other themes that address a more specific issue.)

Processes and products, including intelligence preparation of the battlespace, operational security, counterintelligence, and human intelligence.

Planning and coordination related to nonmilitary activities with other U.S. agencies, non-U.S.-government organizations, and host nation governments.

While deployed, temporary changes in the primary role of U.S. forces to meet immediate or unanticipated operational needs, for example, transition and reconstruction activities.

Includes providing for the care, feeding, and security of military and U.S. government or coalition civilian forces.

Addresses the question of who is in charge and how the authority of command is being used. Examples include Corps of Engineers and contracted construction.

Transfers of authority/responsibility of activities to the host nation; election support.

Capability, capacity, and compatibility of U.S. military communication and information systems in the theater of operation.

U.S., coalition, and host nation military coordination, planning, and capacity. Instances showing how units are working together. This category addresses military-to-military issues.

Military personnel authorization issues.
Are units staffed with enough personnel in the right grade with the right skills and military specialties all the time, temporarily, or not at all?

What is being done to prepare before a unit needs to deploy? Includes issues of doctrine, training, and logistics, and lessons learned that will result in changes to training and logistics to prepare for future operations.

Provisional Reconstruction Teams: Functional category related to lessons concerning Provisional Reconstruction Teams. (Lessons in this category are often included with one of the other themes that address a more specific issue.)

In addition to the contact named above, Robert L. Repasky, Assistant Director; T. Burke; Stephen Faherty; Susan Ditto; Ron La Due Lake; Kate Lenane; Jonathan Carver; Maria-Alaina Rambus; and Christopher Banks made key contributions to this report.

Operation Iraqi Freedom: DOD Should Apply Lessons Learned Concerning the Need for Security over Conventional Munitions Storage Sites to Future Operations Planning. GAO-07-639T. Washington, D.C.: March 22, 2007.

Operation Iraqi Freedom: DOD Should Apply Lessons Learned Concerning the Need for Security over Conventional Munitions Storage Sites to Future Operations Planning. GAO-07-444. Washington, D.C.: March 22, 2007.

Rebuilding Iraq: Reconstruction Progress Hindered by Contracting, Security, and Capacity Challenges. GAO-07-426T. Washington, D.C.: February 15, 2007.

Securing, Stabilizing, and Rebuilding Iraq. GAO-07-308SP. Washington, D.C.: January 9, 2007.

Rebuilding Iraq: Enhancing Security, Measuring Program Results, and Maintaining Infrastructure Are Necessary to Make Significant and Sustainable Progress. GAO-06-179T. Washington, D.C.: October 18, 2006.

Rebuilding Iraq: Governance, Security, Reconstruction, and Financing Challenges. GAO-06-697T. Washington, D.C.: April 25, 2006.

Rebuilding Iraq: Stabilization, Reconstruction, and Financing Challenges. GAO-06-428T. Washington, D.C.: February 8, 2006.
Afghanistan Reconstruction: Despite Some Progress, Deteriorating Security and Other Obstacles Continue to Threaten Achievement of U.S. Goals. GAO-05-742. Washington, D.C.: July 28, 2005.

Military Transformation: Clear Leadership, Accountability, and Management Tools Are Needed to Enhance DOD’s Efforts to Transform Military Capabilities. GAO-05-70. Washington, D.C.: December 16, 2004.

Rebuilding Iraq: Resource, Security, Governance, Essential Services, and Oversight Issues. GAO-04-902R. Washington, D.C.: June 28, 2004.

Afghanistan Reconstruction: Deteriorating Security and Limited Resources Have Impeded Progress; Improvements in U.S. Strategy Needed. GAO-04-403. Washington, D.C.: June 2, 2004.

Rebuilding Iraq. GAO-03-792R. Washington, D.C.: May 15, 2003.
Since the end of the Cold War, the United States has frequently been involved in stability and/or reconstruction operations that typically last 5 to 8 years and surpass combat operations in the cost of human lives and dollars. A 2005 presidential directive requires DOD and State to integrate stability activities with military contingency plans. GAO was asked to address (1) DOD's approach to enhance stability operations capabilities, and challenges that have emerged in implementing its approach; (2) DOD planning for stability operations and the extent of interagency involvement; and (3) the extent to which DOD is applying lessons learned in future plans. To address these issues, GAO assessed DOD policy and planning documents, reviewed planning efforts at three combatant commands, and evaluated DOD's use of lessons learned. GAO is also conducting a related study of the Department of State's efforts to lead and coordinate stability operations. DOD has taken several steps to improve planning for stability operations, but faces challenges in developing capabilities and measures of effectiveness, integrating the contributions of non-DOD agencies into military contingency plans, and incorporating lessons learned into future plans. These challenges may hinder DOD's ability to develop sound plans. Since November 2005, the department issued a new policy, expanded its military planning guidance, and defined a joint operating concept to help guide DOD planning for the next 15-20 years. These steps reflect a fundamental shift in DOD's policy because they elevate stability operations as a core mission comparable to combat operations and emphasize that military and civilian efforts must be integrated. However, DOD has yet to identify and prioritize the full range of capabilities needed for stability operations because DOD has not provided clear guidance on how and when to accomplish this task. 
As a result, the services are pursuing initiatives to address capability shortfalls that may not reflect the comprehensive set of capabilities that will be needed by combatant commanders to effectively accomplish stability operations in the future. Similarly, DOD has made limited progress in developing measures of effectiveness because of weaknesses in DOD's guidance. DOD is taking steps to develop more comprehensive military plans related to stability operations, but it has not established adequate mechanisms to facilitate and encourage interagency participation in its planning efforts. At the combatant commands, DOD has established working groups with representatives from several key organizations, but these groups and other outreach efforts by the commanders have had limited effect. Three factors cause this limited and inconsistent interagency participation in DOD's planning process: (1) DOD has not provided specific guidance to commanders on how to integrate planning with non-DOD organizations, (2) DOD practices inhibit sharing of planning information, and (3) DOD and non-DOD organizations lack a full understanding of each other's planning processes, and non-DOD organizations have had a limited capacity to participate in DOD's full range of planning activities. Although DOD collects lessons learned from past operations, planners are not consistently using this information as they develop future contingency plans. At all levels within the department, GAO found that information from current and past operations is being captured and incorporated into various databases. However, planners are not consistently using this information because (1) DOD's guidance for incorporating lessons into its plans is outdated and does not specifically require planners to take this step, (2) accessing lessons-learned databases is cumbersome, and (3) the review process does not evaluate the extent to which lessons learned are incorporated into specific plans.
Medicare is typically the primary source of health insurance coverage for seniors. Individuals who are eligible for Medicare automatically receive Hospital Insurance, known as part A, which helps pay for inpatient hospital care, skilled nursing facility services following a hospital stay, hospice care, and certain home health care services. Beneficiaries generally pay no premium for this coverage but are liable for required deductibles, coinsurance, and copayments. Medicare-eligible beneficiaries may elect to purchase Supplemental Medical Insurance, known as part B, which helps pay for selected physician, outpatient hospital, laboratory, and other services. Beneficiaries must pay a premium for part B coverage, currently $50 per month. Beneficiaries are also responsible for part B deductibles, coinsurance, and copayments. See table 1 for a summary of Medicare’s beneficiary cost-sharing requirements for 2001. To help pay for some of Medicare’s cost-sharing requirements as well as some benefits not covered by Medicare parts A or B, most Medicare beneficiaries have some type of supplemental coverage. Privately purchased Medigap is an important source of this supplemental coverage. Other supplemental coverage options may include coverage through an employer, enrolling in a Medicare+Choice plan that typically offers lower cost-sharing requirements and additional benefits such as prescription drug coverage in exchange for a restricted choice of providers, or assistance from Medicaid, the federal-state health financing program for low-income individuals, including aged or disabled individuals. The Omnibus Budget Reconciliation Act (OBRA) of 1990 required that Medigap plans be standardized in as many as 10 different benefit packages offering varying levels of supplemental coverage. All policies sold since July 1992 (except in three exempted states) have conformed to 1 of these 10 standardized benefit packages, known as plans A through J. (See table 2.) 
In addition, beneficiaries may purchase Medicare Select, a type of Medigap policy that generally costs less in exchange for a limited choice of providers. A high-deductible option is also available for plans F and J. Policies sold prior to July 1992 are not required to comply with these 10 standard packages. Insurers in Massachusetts, Minnesota, and Wisconsin are exempt from offering these standardized plans because these states standardized their Medigap policies prior to the establishment of the federal standardized plans. Medigap coverage is widely available to most beneficiaries. Federal law provides Medicare beneficiaries with guaranteed access to Medigap policies offered in their state of residence during an initial 6-month open-enrollment period, which begins on the first day of the month in which an individual is 65 or older and is enrolled in Medicare Part B. During this initial open-enrollment period, an insurer cannot deny eligible individuals Medigap coverage for any plan type it sells, place conditions on the policies, or charge a higher price because of past or present health problems. Additional federal Medigap protections include “guaranteed-issue” rights, which provide beneficiaries over age 65 access to plans A, B, C, or F in certain circumstances, such as when their employer terminates retiree health benefits or their Medicare+Choice plan leaves the program or stops serving their area. Depending on laws in the state where applicants reside and their health status, insurers may choose to offer more than these four plans. Federal law also allows individuals who join a Medicare+Choice plan when they first become eligible for Medicare and who leave the plan within 1 year of joining to purchase any of the 10 standardized Medigap plans sold in their respective states.
In 1999, about 10.7 million Medicare beneficiaries—more than one-fourth of all beneficiaries—had a Medigap policy to help cover Medicare’s cost-sharing requirements as well as some benefits not covered by Medicare parts A or B. Of those having Medigap coverage in 1999, about 61 percent purchased 1 of the 10 standardized plans (A through J), while about 35 percent had supplemental plans that predate standardization. The remaining 4 percent had Medigap plans meeting state standards in the three states—Massachusetts, Minnesota, and Wisconsin—in which insurers are exempt from offering the federally standardized plans. Among the 10 standardized plans, over 60 percent of purchasers were enrolled in two mid-level plans (C or F), which cover part A and part B cost-sharing requirements but do not cover prescription drugs. There are several reasons why these plans may be particularly popular among beneficiaries. For example, both plans cover the part B deductible, which insurers report is a popular benefit for purchasers. They also represent two of the four plans that insurers are required to guarantee issue during special enrollment periods. With the exception of plan B, in which 13 percent were enrolled, less than 7 percent of beneficiaries selected any one of the remaining seven plans. (See fig. 1.) Enrollment in the three plans with prescription drug coverage—H, I, and J—is relatively low (a total of 8 percent of standardized plan enrollment) for several reasons. Insurance representatives noted that the drug coverage included in these plans is limited while the premium costs are higher than plans without this coverage. For example, under the Medigap plan with the most comprehensive drug coverage (plan J), a beneficiary would have to incur $6,250 in prescription drug costs to receive the full $3,000 benefit because of the benefit’s deductible and coinsurance requirements.
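The plan J arithmetic above can be sketched in a few lines. The function below is a hypothetical illustration built from the benefit terms described later in this report (a $250 deductible, 50-percent coinsurance, and a $3,000 annual maximum for plan J), not an insurer's actual claims logic.

```python
def medigap_drug_benefit(drug_costs, deductible=250.0, coinsurance=0.50, cap=3000.0):
    """Benefit paid by a Medigap drug rider: the plan pays the coinsurance
    share of costs above the deductible, up to the annual maximum."""
    return min(coinsurance * max(drug_costs - deductible, 0.0), cap)

# A plan J policyholder must incur $6,250 in drug costs to collect the full benefit:
# 0.50 * (6,250 - 250) = 3,000.
assert medigap_drug_benefit(6250) == 3000.0
# At that point the beneficiary has paid $6,250 - $3,000 = $3,250 out of pocket.
```

At lower spending levels the plan covers at most half of costs above the deductible, which is consistent with insurers' characterization of the coverage as limited.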
Moreover, insurers often medically underwrite these plans—that is, screen for health status—for beneficiaries enrolling outside of their open-enrollment period. Thus, individuals in poor health who want to purchase a plan with drug coverage may be denied or charged a higher premium. Further, insurers may be reluctant to market Medigap plans with prescription drug coverage because they would be required to offer them to any applicant regardless of health status during beneficiaries’ initial 6-month open-enrollment period, according to NAIC officials. Finally, an insurance representative attributed low enrollment in these plans to beneficiaries who do not anticipate a need for a prescription drug benefit at the time they enroll. Relatively few beneficiaries have purchased the Medicare Select and high-deductible plan options, which were created to increase the options available to beneficiaries. About 9 percent of beneficiaries enrolled in standardized Medigap plans had a Medicare Select plan in 1999. With Medicare Select, beneficiaries buy 1 of the 10 standardized plans but are limited to choosing among hospitals and physicians in the plan’s network except in emergencies. In exchange for a limited choice of providers, premiums are typically lower, averaging $979 in 1999, or more than $200 less than the average Medigap premium for a standardized plan. Similarly, insurers report that few individuals choose the typically lower-priced high-deductible option available for plans F and J. These options require beneficiaries to pay a $1,580 deductible before either plan covers any services. An NAIC official noted that these options may have relatively low enrollment because beneficiaries may prefer first-dollar coverage and no restrictions on providers.
In addition, an insurance representative noted that administrative difficulties and higher costs associated with operating these plans have discouraged some insurers from actively marketing these products, which likely contributes to the low enrollment. Beneficiaries do not have access to Medicare Select plans in all states—NAIC reports that 15 states do not have insurers within the state selling Medicare Select plans. While consumers typically have access to all 10 standardized Medigap plans during their 6-month open-enrollment periods, the extent to which they can choose from among multiple insurers offering these plans varies depending on where they live. Insurance companies marketing Medigap policies must offer plan A and frequently offer plans B, C, and F, but are less likely to offer the other six plans, particularly those plans with prescription drug coverage. (See table 3.) Our review of state consumer guides and other information from states and insurers shows that during the 6-month open-enrollment period applicants typically have access to multiple Medigap insurers, with the most options available for plans A, B, C, and F. For example, in 19 states, every Medigap insurer offered plan F to these beneficiaries. In contrast, fewer insurers offer Medigap plans with prescription drug benefits. Although in most states several insurers offer these plans, state consumer guides indicate that only one insurer offers plan J in New York and plans H, I, and J in Rhode Island. In addition, no insurers market plan H in Delaware or plans F, G, or I in Vermont. Appendix II includes a summary of the number of insurers estimated to offer Medigap plans in each state. While beneficiaries in most states have access to multiple insurers for most Medigap plans, a few insurers represent most Medigap enrollment. In all but one state, United HealthCare Insurance Company or a Blue Cross/Blue Shield plan represents the largest Medigap insurer.
Nationally, about 64 percent of Medigap policies in 1999 were sold by either United HealthCare or a Blue Cross/Blue Shield plan. United HealthCare offers all 10 Medigap policies to AARP members during their initial 6-month open-enrollment period in nearly all states and charges applicants in a geographic area the same premium regardless of their health status (a rating practice known as community rating). Outside of beneficiaries’ 6-month open-enrollment period, United HealthCare also offers applicants without end-stage renal disease (i.e., permanent kidney failure) plans A through G without medically underwriting—that is, screening beneficiaries for health status—in states where it sells these policies. In an effort to minimize adverse selection and to remain competitive, United HealthCare medically underwrites applicants for the three plans with prescription drug coverage who are outside their initial open-enrollment period. Medicare beneficiaries who are not in their open-enrollment period or do not otherwise qualify for one of the special enrollment periods, such as when an employer eliminates supplemental coverage or a Medicare+Choice plan stops serving an area, are not guaranteed access under federal law to any Medigap plans. Depending on their health, these individuals may find coverage alternatives to be reduced or more expensive. Outside of the initial or special open-enrollment periods, access to any Medigap plan could depend on the individual’s health, the insurer’s willingness to offer coverage, and states’ laws. Further, beneficiaries whose employer terminates their health coverage or whose Medicare+Choice plan withdraws from the program are only guaranteed access to Medigap plans A, B, C, and F, which do not offer prescription drug coverage.
Medicare beneficiaries can change Medigap policies, but may be subject to insurers’ screening them for health conditions prior to allowing a change to a Medigap policy with more generous benefits outside open-enrollment/guaranteed-issue periods. If a person has a Medigap policy for at least 6 months and decides to switch plans, the new policy generally must cover all preexisting conditions. However, if the new policy has benefits not included in the first policy, the company may make a beneficiary wait 6 months before covering that benefit or, depending on the health condition of the applicant, may charge a higher premium or deny the requested change. According to an insurer representative, virtually all Medigap insurers will screen the health condition of applicants who want to switch to plans H, I, or J to avoid the potential for receiving a disproportionate share of applicants in poor health. Beneficiaries purchasing Medigap plans may still incur significant out-of-pocket costs for health care expenses in addition to their premiums. In 1999, the average annual Medigap premium was more than $1,300, although a number of factors, such as where a beneficiary lives and insurer rating practices, may contribute to significant variation in the premiums charged by insurers. Despite their supplemental coverage, Medicare beneficiaries with Medigap coverage paid more out-of-pocket for health care services (excluding long-term care facility care) than any other group of beneficiaries even though their self-reported health status was generally similar to other beneficiaries. Medigap plans can be relatively expensive, with an average annual premium of more than $1,300 in 1999. Premiums varied widely based on the level of coverage purchased. Among the 10 standardized plans, plan A, which provides the fewest benefits, was the least expensive with average premiums paid of nearly $900 per year. The most popular plans—C and F—had average premiums paid of about $1,200.
(See table 4.) The plans with prescription drug coverage—H, I, and J—had average premiums paid more than $450 higher than those plans without such coverage ($1,602 compared to $1,144). In addition, Medigap policies are becoming more expensive. One recent study reports that premiums for the three Medigap plans offering prescription drug coverage have increased the most rapidly—by 17 to 34 percent from 1999 to 2000. Medigap plans without prescription drug coverage rose by 4 to 10 percent from 1999 to 2000. Additional factors, such as where Medicare beneficiaries live or specific demographic or behavioral characteristics, also may influence variation in Medigap premiums. For example, premiums often vary widely across states, which may in large part reflect geographic differences in use and costs of health care services as well as state policies that affect how insurers can set premium rates. Additionally, premiums for the same policy can vary significantly within the same state. The method used by insurers to determine premium rates can dramatically impact the price a beneficiary pays for coverage over the life of the policy. Finally, depending on the state or insurer, other factors such as smoking status and gender may also affect the premiums consumers are charged. Premiums vary widely among states. For example, based on data reported by the insurers to NAIC, average premiums per covered life for standardized Medigap plans in California were $1,600 in 1999—more than one-third higher than the national average of $1,185 and more than twice as high as Utah’s average of $706. (See app. III for average premiums per covered life for standardized plans by state.) This variation is also evident for specific plan types. For example, average annual premiums per covered life for plan J were $646 in New Hampshire and $2,802 in New York, and for plan C were $751 in New Jersey and $1,656 in California. 
In six states (i.e., Alabama, California, Florida, Illinois, Louisiana, and Texas), the average premium per covered life exceeded the national average for all 10 standard plan types while in six states (i.e., Hawaii, Montana, New Hampshire, New Jersey, Utah, and Vermont), the average premium per covered life always fell below the national average. Beneficiaries in the same state may also face widely varying premiums for a given plan type offered by different insurers. For example, our review of state consumer guides showed that in Texas, a 65-year-old consumer could pay an annual premium from $300 to as much as $1,683 for plan A, depending on the insurer. Similarly, in Ohio, plan F annual premiums for a 65-year-old ranged from $996 to $1,944 and in Illinois, plan J premiums ranged from $2,247 to $3,502. Table 5 provides premium ranges for a 65-year-old in five states with large Medicare populations. Some of the variation seen in table 5 is attributable to differences in the premium rating methodology used by different insurers. Insurers who “community rate” policies charge all applicants the same premium, regardless of their age. Those who use an “issue-age-rated” methodology base premiums on an applicant’s age when first purchasing the policy, and although the premium can increase for inflation, any such increase should not be attributable to the aging of the individual. In addition to increases for inflation, an “attained-age-rated” policy’s premium also increases as a beneficiary ages. Although attained-age policies are generally cheaper when initially purchased, they may become more expensive than issue-age-rated policies at older ages. For example, a Pennsylvania insurer that sells both attained-age and issue-age policies for plan C charges a 65-year-old male $869 for an attained-age policy or $1,347 for an issue-age policy.
But over time, under the attained-age-rated policy, this individual would have premium increases both due to inflation and the higher cost the insurer anticipates as the policyholder ages. However, under the issue-age-rated policy, rate increases would only reflect inflation because the higher anticipated costs resulting from aging have already been included in the premium rate. By age 80, excluding increases attributable to inflation, the attained-age policy would cost $1,580 but the issue-age policy would remain at $1,347. Individuals who did not anticipate premium increases over time for attained-age policies may find it increasingly difficult to afford continued coverage and consequently may let their Medigap coverage lapse. Or, as premiums increase, individuals still in good health may switch to plans sold by insurers that charge the same premium regardless of age, creating the potential for those insurers to retain a disproportionate share of older beneficiaries. For individuals not in their open-enrollment period or otherwise eligible for a guaranteed-issue product, insurers may also adjust premium prices based on the health status of individuals. Because the consumer guides show that insurers offer the same Medigap plan type for a wide range in premiums, some plans with higher premiums are unlikely to have high enrollment. Nonetheless, insurers may have an incentive to continue offering higher cost plans despite low enrollment because states prohibit insurers that stop marketing a plan type from reentering the market and selling that particular plan for a 5-year period. Insurers that may not want to completely exit a market may continue to offer a plan type with a premium higher than the market rate, thereby discouraging enrollment but ensuring their continued presence in the market.
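The attained-age versus issue-age comparison above can be sketched as follows. Only the endpoints come from the report's Pennsylvania plan C example ($869 at age 65 and $1,580 at age 80 for the attained-age policy, a flat $1,347 for the issue-age policy, inflation excluded); the straight-line schedule between those ages is an assumption made purely for illustration.

```python
def attained_age_premium(age, p65=869.0, p80=1580.0):
    """Hypothetical attained-age schedule: linear between the report's
    two data points, with inflation excluded."""
    return p65 + (p80 - p65) * (age - 65) / 15.0

ISSUE_AGE_PREMIUM = 1347.0  # flat: expected aging costs are priced in up front

# The attained-age policy starts $478 cheaper but overtakes the issue-age
# premium in the policyholder's mid-70s under this assumed schedule.
crossover = next(a for a in range(65, 81)
                 if attained_age_premium(a) > ISSUE_AGE_PREMIUM)
```

This is the trade-off the report describes: buyers who compare only the age-65 prices may face increases later that make continued coverage hard to afford.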
However, insurers’ ability to charge rates far above market rates is limited by federal law, which requires that Medigap plans pay out at least 65 percent of premiums earned on individually purchased policies for beneficiaries’ medical expenses. Despite purchasing Medigap policies to help cover Medicare cost-sharing requirements and other costs for health care services that the beneficiary would otherwise pay directly out of pocket, Medigap purchasers still pay higher out-of-pocket costs than do other Medicare beneficiaries. Our analysis of the 1998 MCBS showed that out-of-pocket costs for health care services, excluding long-term facility care costs, averaged $1,392 for those purchasing individual Medigap policies with prescription drug coverage and $1,369 for those purchasing individual Medigap policies without prescription drug coverage—significantly higher than the $1,056 average for all Medicare beneficiaries. (See fig. 2.) Furthermore, Medigap purchasers had higher total expenditures for health care services ($7,631, not including the cost of the insurance) than Medicare beneficiaries without supplemental coverage from any source ($4,716) in 1998. These higher expenditures for individuals with Medigap may be due in large part to higher utilization rates for individuals with supplemental coverage. In addition, Medigap’s supplemental coverage of prescription drugs is less comprehensive than that typically provided through employer-sponsored supplemental coverage and therefore may leave beneficiaries with higher out-of-pocket costs. Differences in health status do not appear to account for higher out-of-pocket costs and expenditures for those with Medigap. We found that Medicare beneficiaries with Medigap coverage reported a health status similar to those without supplemental coverage. Supplemental coverage can offset the effects of cost-sharing requirements intended to encourage prudent use of services and thus control costs.
Providing “first-dollar coverage” by eliminating beneficiaries’ major cost-sharing requirements for health care services, including deductibles and coinsurance for physicians and hospitals, in the absence of other utilization control methods can result in increased utilization of discretionary services and higher total expenditures. One study found that Medicare beneficiaries with Medigap insurance had 28 percent more outpatient visits and inpatient hospital days relative to beneficiaries who did not have supplemental insurance, but were otherwise similar in terms of age, gender, income, education, and health status. Service use among beneficiaries with employer-sponsored supplemental insurance (which often reduces, but does not eliminate, cost sharing and is typically managed through other utilization control methods) was approximately 17 percent higher than the service use of beneficiaries with Medicare fee-for-service coverage only. Medigap covers some health care expenses for policyholders but also leaves substantial out-of-pocket costs in some areas, particularly for prescription drugs. Our analysis of the 1998 MCBS shows that Medigap paid about 13 percent of the $7,631 in average total health care expenditures (including Medicare payments) for beneficiaries with Medigap. Even with Medigap, beneficiaries still paid about 18 percent of their total costs directly out of pocket, with prescription drugs being the largest out-of-pocket cost. (See table 6.) Among Medigap policyholders with prescription drug coverage, Medigap covered 27 percent ($239) of prescription drug costs, leaving the beneficiary to incur 61 percent ($548) of the costs out of pocket. For Medigap policyholders without drug coverage, beneficiaries incurred 82 percent ($618) of prescription drug costs. Out-of-pocket costs for prescription drugs were higher for Medigap policyholders than any other group of Medicare beneficiaries, including those with employer-sponsored supplemental coverage ($301).
Higher out-of-pocket costs for prescription drugs may be attributable to differences in supplemental coverage. Medigap policyholders with prescription drug coverage have high cost-sharing requirements (a $250 deductible and 50-percent coinsurance with a maximum annual benefit of $1,250 or $3,000 depending on the plan selected) in contrast to most employer-sponsored supplemental plans that provide relatively comprehensive prescription drug coverage. Employer-sponsored supplemental plans typically require small copayments of $8 to $20 or coinsurance of 20 to 25 percent, and provide incentives for enrollees to use selected, less costly drugs, such as generic brands or those for which the plan has negotiated a discount. Further, few employer-sponsored health plans have separate deductibles or maximum annual benefits for prescription drugs. As Congress continues to examine potential changes to the Medicare program, it is important to consider the effect that Medigap supplemental coverage has on beneficiaries’ use of services and expenditures. Medicare beneficiaries who purchase Medigap plans have coverage for essentially all major Medicare cost-sharing requirements, including coinsurance and deductibles. But offering this “first-dollar” coverage may undermine incentives for prudent use of Medicare services, especially with regard to discretionary services, which could ultimately increase costs for beneficiaries and the entire Medicare program. While the lack of coverage for outpatient prescription drugs through Medicare has led to various proposals to expand Medicare benefits, relatively few beneficiaries purchase standardized Medigap plans offering these benefits.
Low enrollment in these plans may be due to fewer plans being marketed with these benefits, their relatively high cost, and the limited nature of their prescription drug benefit, which still requires beneficiaries to pay more than half of their prescription drug costs while receiving a maximum of $3,000 in benefits. As a result, Medigap beneficiaries with prescription drug coverage continue to incur substantial out-of-pocket costs for prescription drugs and other health care services. We did not seek agency comments on this report because it does not focus on agency activities. However, we shared a draft of this report with experts in Medigap insurance at CMS and NAIC for their technical review. We incorporated their technical comments as appropriate. We will send copies of this report to the Administrator of CMS and other interested congressional committees and members and agency officials. We will also make copies available to others on request. Please call me at (202) 512-7118 or John Dicken, Assistant Director, at (202) 512-7043 if you have any questions. Rashmi Agarwal, Susan Anthony, and Carmen-Rivera Lowitt also made major contributions to this report. To assess issues related to Medigap plan enrollment and premiums incurred by beneficiaries who purchase Medigap plans, we analyzed data collected on the National Association of Insurance Commissioners’ (NAIC) 1999 Medicare Supplement Insurance Experience Exhibit. We also analyzed the 1998 Medicare Current Beneficiary Survey (MCBS) to examine out-of-pocket costs paid by Medicare beneficiaries with Medigap policies. To assess the availability of Medigap plans across states and to individuals who are not in their open-enrollment periods, we examined consumer guides for Medicare beneficiaries published by many states and by the Health Care Financing Administration (HCFA). (Appendix II further discusses our review of these consumer guides and the number of insurers offering standardized Medigap plans.)
Additionally, we interviewed researchers and representatives from insurers, HCFA, NAIC, and several state insurance regulators. We conducted our work from March 2001 through July 2001 in accordance with generally accepted government auditing standards. We relied on data collected on the NAIC’s 1999 Medicare Supplement Insurance Experience Exhibit for information on Medigap enrollment by plan type and premiums per covered life by plan type and across states. Under federal and state statutes, insurers selling Medigap plans annually file reports, known as the Medicare Supplement Insurance Experience Exhibit, with the NAIC. NAIC then distributes the exhibit information to the states. These exhibits are used as preliminary indicators, in conjunction with other information, as to whether insurers meet federal requirements that at least a minimum percentage of premiums earned are spent on beneficiaries’ medical expenses, referred to as loss ratios. Additionally, insurers report information on various aspects of Medigap plans including plan type, premiums earned, the number of covered lives, as well as other plan characteristic information and a contact for the insurer. We relied on NAIC data containing filings as of December 31, 1999, for the 50 states and the District of Columbia. These data represent policies in force as of 1999, including pre-standardized policies, standardized policies, and policies for individuals living in three states in which insurers are exempt from the federal standardized policies (i.e., Massachusetts, Minnesota, and Wisconsin). An initial analysis of the 1999 data set revealed that several insurers failed to include or did not designate a valid plan type on their filings. As part of our data cleaning, we reclassified some of these filings to include or correct the plan type based on information reported in other sections of the insurance exhibit. 
We also called 37 insurers that covered more than 5,000 lives and had not included a valid plan type on their filing. During these calls, we asked for plan type information, verified whether the insurer sold a Medicare Select plan that included incentives for beneficiaries to use a network of health care providers, and corrected the data in the database. After the data-cleaning process, approximately 8 percent of the 10.7 million covered lives still had an unknown plan type, and less than 1 percent had missing information about whether the plan was sold as a Medicare Select policy. NAIC does not formally audit the data that insurers report, but it does conduct quality checks before making the data publicly available. We did not test the accuracy of the data beyond the data-cleaning steps mentioned above. During our phone calls to insurers, we found that some insurers failed to report separate filings for the various Medigap plan types they sell and instead reported aggregate information across multiple plan types. Since plan type information was unavailable for these plans, information for these insurers was excluded from our estimates of enrollment and premiums for standardized plans. We relied on HCFA’s 1998 MCBS for information on expenditures for health care services by payer for Medicare beneficiaries. Specifically, we examined (1) the out-of-pocket costs incurred by beneficiaries with a Medigap plan in comparison to other beneficiaries and (2) the out-of-pocket costs for beneficiaries with a Medigap plan as a share of total expenditures for health care services, including payments by Medicare and other payers. The MCBS is a multipurpose survey of a representative sample of the Medicare population. The 1998 MCBS collected information on a sample of 13,024 beneficiaries, representing about a 72-percent response rate. Because the MCBS is based on a sample, any estimates derived from the survey are subject to sampling errors.
A sampling error indicates how closely the results from a particular sample would be reproduced if a complete count of the population were taken with the same measurement methods. To minimize the chances of citing differences that could be attributable to sampling errors, we highlight only those differences that are statistically significant at the 95-percent confidence level. We analyzed the MCBS’ cost-and-use file representing persons enrolled in Medicare as of January 1, 1997, and 1998. The cost-and-use file contains a combination of survey-reported data from the MCBS and Medicare claims and other data from HCFA administrative files. The survey also collects information on services not covered by Medicare, including prescription drugs and long-term facility care. HCFA notes that there may be some underreporting of services and costs by beneficiaries. To compensate in part for survey respondents who may not know how much an event of care costs or how the event was paid for, HCFA used Medicare administrative data to adjust or supplement survey responses for some information, including cost information. We did not verify the accuracy of the information in the computerized file. Because some Medicare beneficiaries may have supplemental coverage from several sources, we prioritized the source of insurance individuals reported to avoid double counting. That is, if individuals reported having coverage during 1998 from two or more kinds of supplemental coverage, we assigned them to one type to estimate enrollment and costs without including the same individuals in multiple categories. We initially separated beneficiaries enrolled in a health maintenance organization (HMO) contracting with the Medicare program (a Medicare HMO) from beneficiaries in the traditional fee-for-service Medicare program. 
Then, we used the following hierarchy of supplemental insurance categories: (1) employer-sponsored, (2) individually purchased (that is, a Medigap policy) with prescription drug coverage, (3) individually purchased without prescription drug coverage, (4) private HMO, (5) Medicaid, and (6) other public health plans (including coverage through the Department of Veterans Affairs and state-sponsored drug plans). Finally, those without any supplemental coverage were categorized as having Medicare fee-for-service only. For example, a beneficiary with Medicare HMO coverage sponsored by an employer would be included within the Medicare HMO category. Table 7 shows the number and percent of beneficiaries in each insurance category. Table 8 shows the extent to which health insurers offer the 10 standardized Medigap policies to 65-year-olds during the initial open-enrollment period. The table lists information for 47 states and the District of Columbia where insurers sell these plans. Three states—Massachusetts, Minnesota, and Wisconsin—are not included in the table because insurers in these states are exempt from federal Medigap standardized requirements. To determine the extent to which Medigap standardized plans are available in each state, we primarily relied on state consumer guides and information available from the Health Care Financing Administration’s (HCFA) web site. For states that did not have available information in consumer guides or Internet sites, we obtained information from their state insurance departments and insurers. We also contacted state insurance departments and insurers to verify state consumer guide information for states reporting three or fewer insurers offering any plan type to ensure that we did not understate the availability of Medigap plans in these states.
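The mutually exclusive assignment described above can be sketched as a first-match rule. The category labels below are shorthand for the report's hierarchy, and the set-based interface is an illustrative assumption, not the actual MCBS processing code.

```python
# Order matters: a beneficiary reporting several coverage sources is
# assigned to the first matching category to avoid double counting.
# Medicare HMO enrollees are separated out before the hierarchy is applied.
HIERARCHY = [
    "Medicare HMO",
    "employer-sponsored",
    "Medigap with drug coverage",      # individually purchased
    "Medigap without drug coverage",
    "private HMO",
    "Medicaid",
    "other public",                    # e.g., VA, state-sponsored drug plans
]

def assign_category(reported_sources):
    """Return the single analysis category for a beneficiary's set of
    reported coverage sources (first match wins)."""
    for category in HIERARCHY:
        if category in reported_sources:
            return category
    return "Medicare fee-for-service only"

# The report's example: Medicare HMO coverage sponsored by an employer is
# counted in the Medicare HMO category, not the employer-sponsored one.
assert assign_category({"Medicare HMO", "employer-sponsored"}) == "Medicare HMO"
```

Because the rule is first-match, each beneficiary lands in exactly one category, which is what makes the enrollment and cost estimates in table 7 non-overlapping.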
Information from consumer guides and HCFA data may not contain comprehensive data on insurers operating in a state at a given point in time because (1) in some states, insurers voluntarily submit data to insurance departments and do not always report on the Medigap policies they offer and (2) data may not reflect recent changes such as companies that stop selling a product or new insurers that the states certify to sell Medigap plans. Also, in some states, such as Michigan, some insurers may be licensed to sell Medigap in the state but are not actively marketing the plan to new enrollees. We did not independently confirm information reported by state insurance departments and insurers. Table 9 presents information from the National Association of Insurance Commissioners’ (NAIC) 1999 Medicare Supplement Insurance Experience Exhibit on premiums per covered life for standardized Medigap plans among the states and the District of Columbia offering the federally standardized Medigap plans. Nationally, the average premium per covered life in 1999 for the standardized plans was $1,185 and ranged from $706 in Utah to $1,600 in California.

Medicare: Cost-Sharing Policies Problematic for Beneficiaries and Program (GAO-01-713T, May 9, 2001)
Retiree Health Benefits: Employer-Sponsored Benefits May Be Vulnerable to Further Erosion (GAO-01-374, May 1, 2001)
Medicare+Choice: Plan Withdrawals Indicate Difficulty of Providing Choice While Achieving Savings (GAO/HEHS-00-183, Sept. 7, 2000)
Medigap: Premiums for Standardized Plans That Cover Prescription Drugs (GAO/HEHS-00-70R, Mar. 1, 2000)
Prescription Drugs: Increasing Medicare Beneficiary Access and Related Implications (GAO/T-HEHS/AIMD-00-100, Feb. 16, 2000)
Medigap Insurance: Compliance With Federal Standards Has Increased (GAO/HEHS-98-66, Mar. 6, 1998)
Medigap Insurance: Alternatives for Medicare Beneficiaries to Avoid Medical Underwriting (GAO/HEHS-96-180, Sept. 10, 1996)
To protect themselves against large out-of-pocket expenses and help fill gaps in Medicare coverage, most beneficiaries buy supplemental insurance, known as Medigap; rely on employer-sponsored health benefits to supplement Medicare coverage; or enroll in private Medicare+Choice plans rather than traditional fee-for-service Medicare. Because Medicare+Choice plans are not available everywhere and many employers do not offer retiree health benefits, Medigap is sometimes the only supplemental insurance option available to seniors. Medicare beneficiaries who buy Medigap plans have coverage for essentially all major Medicare cost-sharing requirements, including coinsurance and deductibles. But this "first-dollar" coverage may undermine incentives for prudent use of Medicare services, which could ultimately boost costs for the Medicare program. Although various proposals have been made to add a prescription drug benefit to Medicare, relatively few beneficiaries buy standardized Medigap plans with this benefit. Low enrollment in these plans may reflect several factors: fewer plans being marketed with these benefits, their relatively high cost, and the limited nature of their prescription drug benefit, which still requires beneficiaries to pay more than half of their prescription drug costs. Most plans have a $3,000 cap on benefits. As a result, Medigap beneficiaries with prescription drug coverage continue to incur substantial out-of-pocket expenses for prescription drugs and other health care services.
In 1996, the federal government spent $1.4 trillion in U.S. states and territories to procure products and services, to fund grants and other assistance, to pay salaries and wages to federal employees, to provide public assistance, and to fund federal retirement programs and Social Security, among other things. Some states rank relatively high on the per capita distribution of different types of federal dollars. Government reports indicate that in 1996, Maryland, Virginia, and Alaska were the only three states to rank among the top five in each of the following categories: (1) total federal expenditures, (2) total federal procurement expenditures, and (3) total salary and wage expenditures for federal workers. The only other state that ranked among the top 10 states in all these categories was New Mexico. Interest in the economic magnitude of defense and other federal expenditures in states has been amplified by concerns over anticipated outcomes of the post-Cold War drawdown. In hearings before the Joint Economic Committee of the 101st Congress, 12 state governors submitted to the leadership of the Senate and House a plan for responding to expected adverse economic impacts in states that were believed to be particularly vulnerable to reductions in defense spending. In 1992, President Bush issued Executive Order 12788, requiring the Secretary of Defense to identify the problems of states, regions, and other areas that result from base closures and Department of Defense (DOD) contract-related adjustments. The Office of Economic Adjustment is DOD’s primary office responsible for providing assistance to communities, regions, and states “adversely impacted by significant Defense program changes.” The federal government tracks defense-related and other federal spending and associated employment through various sources. Centralized reporting of this information is done by the Census Bureau in its Consolidated Federal Funds Report (CFFR) series. 
The CFFR includes the Federal Expenditures by State (FES) report and a separate two-report volume that presents information at the county and subcounty level. The FES report presents the most comprehensive information on federal expenditures at the state level that can actually be attributed to specific federal agencies or programs. Agencies involved in collecting and reporting various types of employment information include the Office of Personnel Management (OPM) and the Bureau of Labor Statistics. Expenditure information reported in the CFFR also appears in agency-specific publications or data sources. DOD reports information on its total procurement expenditures and the salaries and wages paid to DOD personnel, by state, in the Atlas/Data Abstract for the United States and Selected Areas. In compiling information for the CFFR, DOD’s procurement data are first sent to the Federal Procurement Data System (FPDS) and then sent to Census. Therefore, Census, DOD, and the FPDS can and do report DOD procurement expenditures. Federal expenditure and employment data are available to users in and outside the government and are regularly used in policy formulation and evaluation. DOD contractors, including the Logistics Management Institute, have used federal government data in support of their work for DOD on the economic impacts of base realignment and closure actions. The Office of Economic Conversion Information, a collaborative effort between the Economic Development Administration of the Department of Commerce and DOD, uses existing federal data to provide information to communities, businesses, and individuals adjusting to the effects of defense downsizing and other changing economic conditions. The Congressional Budget Office and the Congressional Research Service have also used DOD procurement expenditure data in examining the expected effects of planned reductions in the national defense budget. 
DOD uses its prime contract award expenditure data to track the status and progress of goals associated with contracts made to small businesses. Researchers at think tanks, universities, and state government offices also use government data in a wide array of research projects and publications. DOE and DOD military activities have contributed substantially to the economy of New Mexico for about 50 years. Government data show that between 1988 and 1996, New Mexico was ranked second, third, or fourth among U.S. states in per capita distribution of federal dollars. In terms of per capita federal procurement expenditures only, New Mexico was ranked first among U.S. states during 1988-94 and second in 1995-96. In 1996, New Mexico was ranked first among states in return on federal tax dollars, receiving $1.93 in federal outlays for every $1.00 in federal taxes paid. The state was also ranked first in return on federal tax dollars in 1995. In 1996, 5 of the 6 major federal facilities were among the top 10 employers in the state. This federal revenue comes largely from the six major federal facilities in New Mexico: two DOE national laboratories, Los Alamos National Laboratory and Sandia National Laboratory; Cannon, Holloman, and Kirtland Air Force Bases; and White Sands Missile Range, a test range that supports missile development and test programs for all the services, the National Aeronautics and Space Administration (NASA), and other government agencies and private industry. New Mexico’s geography and climate, including relative isolation from major population centers, year-round good weather, and open airspace, have made the state attractive for some military activities. In May 1996, the Secretary of Defense and the German Defense Minister activated the German Air Force Tactical Training Center at Holloman Air Force Base in Alamogordo.
The training opportunities provided by the vast airspace in and around Holloman and its proximity to Fort Bliss, Texas—the headquarters location for German air force operations in North America—were factors in Germany’s decision to invest in a tactical training center at the base. State officials estimate that the training center will result in a population increase to the Alamogordo area of about 7 percent and investment by Germany of $155 million by 1999. Services and trade are distinct components of New Mexico’s economy. In 1993, the largest employment sectors in New Mexico were services, government, and trade; these were reported as accounting for approximately 76 percent of the total average annual state employment. Businesses involved in trade and/or services accounted for 67 percent of all businesses in New Mexico in 1993. Revenue from the gross receipts tax is the highest source of tax revenue in New Mexico, and in 1996, gross receipts taxes from services and trade accounted for more than half of all gross receipts tax revenue. DOE reports show that between 1990 and 1995, it made more of its expenditures in the services and trade sectors of the New Mexico economy than in other sectors. New Mexico Department of Labor projections indicate that by 2005, the services sector alone will account for about 41 percent of total employment, while employment in the trade sector is projected to remain stable and government employment is expected to decline. The projections indicate that jobs in services and trade will account for 70 percent of the new jobs between 1993 and 2005. New Mexico state officials have been focusing on “achieving economic diversification to protect against dramatic negative changes in the state’s economy,” believed to be linked to changes in federal spending in the state. Efforts in 1996 to recruit select industries to the state have initially resulted in at least 7 businesses locating in New Mexico, creating 230 new jobs.
In terms of other efforts, New Mexico was 8th among U.S. states in high-technology employment growth between 1990 and 1995. The single leading high-technology industry in the state is semiconductor manufacturing, which accounts for 34 percent of total high-technology jobs. Intel Corporation has three advanced computer chip manufacturing sites that employ at least 6,500 people, making it the state’s second-largest private sector employer and contributing to the growth in New Mexico’s high-technology employment. In 1995, Intel was also the leading manufacturing employer in the state. High-technology exports account for the largest percentage of New Mexico exports to other countries, with Korea the leading destination. Currently, about 10 percent of all New Mexico manufacturers are exporting. The leading exporters in New Mexico are Intel, Motorola, and Honeywell Defense Avionics. A comparison of the percent change in New Mexico’s per capita income and total defense-related spending (DOE and DOD) in the state during 1990-94 shows that real growth occurred in per capita income, while total defense expenditures declined (see fig. 1). A comparison between percent real growth in New Mexico’s gross state product and total defense-related federal expenditures reveals the same pattern, suggesting that efforts to diversify the state’s economy may be having a positive effect (see fig. 2). Based on the average rate of growth in the gross state product during 1987-94, the Bureau of Economic Analysis identified New Mexico as the third-fastest-growing state. Available federal data provide a rough, segmented snapshot of federal money spent in states and of the employment linked to those expenditures, which is useful for gauging some trends and patterns. For example, government data indicate that in 1996, the federal government spent about $12 billion in New Mexico.
Direct expenditures for procurement, salaries and wages for federal workers, and grants accounted for 60 percent, or about $7.3 billion, of the total. Direct payments to individuals, the single largest category of federal expenditures, accounted for approximately 37 percent, or about $4.4 billion, of total 1996 federal expenditures (see fig. 3). Appendix II includes additional descriptions of federal spending and employment in New Mexico. The top five agencies making procurement expenditures in New Mexico during 1993-96 were DOE, DOD, the Department of the Interior, NASA, and the Postal Service. The defense-related agencies (DOE and DOD), compared to the nondefense-related ones, accounted for 90 percent, or $14.1 billion, of the $15.5 billion total spent during 1993-96. Specifically, DOE accounted for 80 percent of the total federal defense-related procurement expenditures, or about $11.2 billion of the 1993-96 total of $14.1 billion. Between 1993 and 1996, the top five federal agencies that accounted for the largest dollar amount of expenditures to pay salaries and wages of federal workers in New Mexico were DOD; the Postal Service; and the Departments of the Interior, Health and Human Services, and Veterans Affairs. Salaries and wages paid to federal employees of the defense-related agencies accounted for about $7 billion, or 54 percent, of the total $13 billion spent in New Mexico. Specifically, between 1988 and 1996, DOD accounted for about $6.5 billion, or 93 percent, of the $7 billion total defense-related federal salaries and wages. Payments to workers retired from defense-related agencies also accounted for the majority of the total annuities to retired federal workers living in New Mexico during 1990-96. Payments to retired defense-related federal workers accounted for $3.2 billion, or 68 percent, of the total $4.7 billion in annuitant expenditures. Payments to former DOD workers accounted for 98 percent of the total payments to retired defense-related workers.
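The procurement shares cited above follow directly from the dollar totals. Because those totals are themselves rounded to the nearest $0.1 billion, recomputing the percentages lands within about a point of the reported 90 and 80 percent; a quick arithmetic check:

```python
# Arithmetic check of the 1993-96 procurement shares cited above.
# Dollar figures are in billions, rounded as reported in the text,
# so the recomputed percentages can differ from the reported ones
# by about a point.

total_procurement = 15.5    # all federal procurement in New Mexico, 1993-96
defense_procurement = 14.1  # DOE and DOD combined
doe_procurement = 11.2      # DOE alone

defense_share = defense_procurement / total_procurement  # reported as 90 percent
doe_share = doe_procurement / defense_procurement        # reported as 80 percent

print(f"Defense share of all procurement: {defense_share:.1%}")
print(f"DOE share of defense procurement: {doe_share:.1%}")
```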
Figure 4 shows the percent of defense-related expenditures for procurement, federal workers’ salary and wages, and retirement payments accounted for by DOE and DOD, respectively. Between 1988 and 1996, the Departments of Defense, the Interior, Health and Human Services, Veterans Affairs, and Agriculture were the top five agencies in terms of total federal employees in New Mexico. Between 1988 and 1996, defense-related jobs accounted for about 72 percent, or 300,000 jobs, of the total 420,000 federal jobs in New Mexico. Specifically, DOD accounted for 97 percent, or about 292,000, of these jobs over the period 1988-96. Thus, DOD jobs made up the majority both of all federal jobs and of the defense-related federal jobs in New Mexico. Federal retirees of defense-related agencies also comprised the majority of the retired federal workers living in New Mexico: 68 percent of the total between 1990 and 1996. Specifically, DOD accounted for 99 percent of all retirees from the defense-related agencies. Figure 5 shows the percent of defense-related jobs and retirees in New Mexico accounted for by DOE and DOD. The existing data provide information on federal employees only. This is an important point because although the overall ratio of DOD federal workers to DOE federal workers was 44:1 between 1988 and 1996, our research also shows that more of DOE’s employment is linked to the private contractors that manage and operate the laboratories and other DOE facilities than to DOE federal employees. Private contractors working on government contracts are not considered or counted as federal employees. However, even when we compared total DOE employment, which included direct DOE prime contractor, subcontractor, and federal employees, to total DOD federal employment, DOD’s direct federal employment was higher than DOE’s in each year between 1990 and 1996. Of the DOD employment, most of the federal jobs were held by DOD military members rather than DOD civilians.
Between 1988 and 1996, about 42 percent of the total DOD federal jobs in New Mexico were held by active duty military members, 33 percent were held by inactive duty military (national guard and reserves), and 25 percent were held by DOD civilians. Similarly, most of the federal wages were associated with active duty military. Active duty military members accounted for 55 percent, inactive members accounted for 5 percent, and DOD civilians accounted for 40 percent of the total salaries and wages between 1988 and 1996. A comparison of the occupations represented by the defense-related federal jobs in New Mexico indicates that during 1988-96 the largest number of jobs were blue-collar and technical. This finding, however, largely represents the patterns for the DOD active duty employment in New Mexico, for which technical and blue-collar jobs comprise about 70 percent of the total jobs. Among DOD civilian employees, the two categories that accounted for the largest number of jobs over the period 1988-96 were professional (23 percent of the total jobs) and blue-collar (20 percent of the total jobs). The two occupational categories that account for the largest shares of the DOE direct federal employment in New Mexico are administrative (30 percent of total jobs) and professional (37 percent of total jobs). Official federal data sources are useful for gaining a preliminary understanding of the composition of federal expenditures in states. However, fundamental characteristics of the federal data make it difficult to determine the direct economic impact of federal activities on states. For example, our analysis of defense-related expenditures and employment did not include information on DOD contractor employment because there is no official DOD or other federal source of such information.
Federal government data sources provide insufficient evidence for determining where federal dollars are actually spent, how much is actually spent, and the number or type of jobs that the federal dollars directly generate because of numerous limitations in scope and coverage and in reporting requirements or procedures. Our related findings that pertain to the data sources used and reviewed in our work are summarized in tables 1 and 2. To gain further insights into the reliability of the federal government’s data, we focused on characteristics of existing DOD data. Although DOD’s procurement expenditure data (DD350) is used in broad policy contexts and used to evaluate the status of programs that are believed to be important to economic security, the form is not designed to provide information on all DOD expenditures in a single state or at the national level. Procurement contracts under $25,000 are not included, no information on DOD subcontracts of any value is included, and financial data related to classified programs may be unreported or inaccurate. DOD acknowledges that the DD350 does not completely account for all procurement expenditures, and although this limitation is generally understood and acknowledged by informed users, the possible implications are not. We surveyed the top five DOD contractors in New Mexico to determine how much money they received in DOD prime contracts and subcontracts and compared their responses to DOD’s records (the DD350 data) of their total contracts. The comparisons revealed that in no case were the DOD records of the dollar value of contracts awarded to these companies the same as the contractors’ records. Differences between DOD and contractors’ records ranged from $20 million for prime contracts to $80 million for total contracts. In some cases, the DOD records appeared to overstate the amount the contractors received, while in other cases the DOD records appeared to understate the amount.
Our research suggests several possible reasons for the inconsistencies between contractor records and DOD records. For example, expenditures associated with procurement contracts can leak from a state’s economy if a company subcontracts part of the work elsewhere. One study reported that of $5.2 billion in DOD prime contracts received by McDonnell Douglas in St. Louis, Missouri, less than 3 percent, or $156 million, stayed in Missouri due to out-of-state subcontracting. However, from our survey of contractors in New Mexico we determined that leakages were more prevalent for certain types of procurement contracts. While our survey showed that, overall, more than 80 percent of the total DOD prime contract dollars remained in the state in every year between 1988 and 1996, it also showed that the businesses that predominantly received service contracts, rather than supply and equipment contracts (i.e., major hard goods/weapons), kept nearly all of the DOD contract money they received in the state. This is particularly relevant because other DOD data indicate that in every year between 1988 and 1996, DOD procurement contracts for services accounted for the largest dollar volume of contracts to New Mexico. Also, service contracts may be more likely to fall under DOD’s $25,000 reporting threshold and therefore be excluded from total expenditures as officially reported by DOD. Furthermore, injections of dollars from subcontracts with out-of-state firms or with other in-state firms are not tracked by DOD, yet they would have been included in the contractors’ records. Finally, the DOD Inspector General reported in 1989 that the DD350 data had reliability problems due to instances of unreported contract obligations and other errors in reported data. The Inspector General made no recommendations and has not assessed the reliability and validity of the DD350 contract tracking system since then.
The existing data that track defense-related employment are limited in their scope, coverage, and reliability. Among the most notable limitations is the lack of a central or official source of data on private-sector employment associated with DOD contracts. Information on the number of jobs associated with particular defense contracts or weapon programs is repeatedly discussed in the media and in Congress. Further, DOD has stated that defense procurement dollars promote the creation of jobs. However, DOD officials have also indicated that they do not collect information on the job impacts of particular DOD budget decisions. To obtain information on the employment associated with defense contracts or the employment linked to particular defense programs, it is necessary to contact individual defense contractors and/or DOD system program offices directly. The contractor employment data we obtained from our survey of defense contractors in New Mexico are summarized in appendix III, along with other survey findings. The responses from the top four contractors that provided us data indicated that the total number of direct jobs associated with DOD contracts was approximately 19,200 during 1988-96. The total DOD federal employment (active duty, inactive, and civilians) in the state for the same period (1989 data included) was approximately 328,000. A comparison of employment data from three top DOE prime contractors to the data from the top four DOD prime contractors indicates that, over the period 1994-96, DOE had about eight prime contractor employees to every one DOD prime contractor employee in New Mexico. We also obtained employment and expenditure data for a sample of specific defense programs that were known to have some involvement with New Mexico contractors (see table 3). The available data indicate that the state of New Mexico receives relatively large amounts of federal dollars.
Defense-related federal activities in the state have contributed to the development of the economy, and recent efforts to diversify the economic base appear linked to continued growth. The best available data indicate that in New Mexico DOE and DOD account for about 90 percent of all federal procurement spending (1993-96), 54 percent of expenditures for federal worker salary and wages (1988-96), 72 percent of all federal jobs in the state (1988-96), and 68 percent of all retired federal workers living in the state (1990-96). Specifically, DOE accounts for 80 percent of the defense-related procurement expenditures, and DOD accounts for 93 percent of the defense-related salary and wage expenditures, 97 percent of the defense-related federal jobs, and 99 percent of the federal workers retired from defense-related agencies and living in New Mexico. The largest component of DOE employment is private contractor employment, while the largest component of DOD employment is federal employment, namely active duty military members. On one hand, determining the full and complete economic magnitude of federal expenditures in states, whether defense or nondefense, and the related employment is not possible with existing data. Trying to reconcile differences among data sources and account for gaps or questionable data is very resource-intensive and does not necessarily yield benefits in precision or accuracy. On the other hand, the existing data are not without value, nor should the government necessarily strive for increased data collection that could actually entail more costs than benefits. The limitations in federal data may, in part, reflect the fact that data collection trails behind changes in federal policy or shifts in policy relevance. Those who rely on federal data need to be alert to their drawbacks and exercise discretion when using them. In oral comments on a draft of this report, DOD concurred with our findings and conclusions. 
It also provided several technical comments, which we incorporated in the text where appropriate. In conducting our work, we contacted and interviewed officials and experts from federal and state government offices and the private sector. Because the scope of the work covered all federal expenditures and related employment in New Mexico over an 8-year period, there was a large range and number of contacts and outreach efforts we made in completing our work. We made over 50 contacts throughout federal and state governments and the private sector. Our final results were produced from databases from four separate federal agencies; our survey of New Mexico defense contractors encompassing 8 years of financial and business information; information obtained from a review of more than 30 publications; and information we obtained from numerous documented interviews with key officials. A list of the offices we contacted is in appendix I. To determine the characteristics of the New Mexico economy and recent changes in the economy, we reviewed and analyzed economic data and information we obtained from interviews with New Mexico state officials, federal government officials, and available federal and state data sources, including the Bureau of Economic Analysis and the Bureau of Business and Economic Research at the University of New Mexico. To determine the direct defense-related and nondefense-related federal expenditures and employment in New Mexico over the period 1988-1996, we contacted multiple federal offices and obtained official data from DOD and DOE. We obtained data on all other nondefense-related federal expenditures from the Census Bureau. All available data on DOD and DOE expenditures were categorized as defense-related. We obtained total nondefense-related employment data from OPM’s Central Personnel Data File. All expenditure figures were adjusted for inflation and are presented in constant 1996 dollars.
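The constant-dollar adjustment mentioned here amounts to dividing each nominal amount by a price index benchmarked so that 1996 equals 1.0. A hedged sketch, using placeholder deflator values rather than the actual index the report used:

```python
# Sketch of converting nominal expenditures to constant 1996 dollars.
# Each year's nominal amount is divided by a price deflator indexed
# so that 1996 = 1.00. The deflator values below are hypothetical
# placeholders, not the actual index used in the report.

DEFLATOR = {1988: 0.78, 1992: 0.90, 1996: 1.00}  # hypothetical index values

def to_constant_1996_dollars(nominal_billions, year):
    """Rescale a nominal-dollar amount to constant 1996 dollars."""
    return nominal_billions / DEFLATOR[year]

# $10.0 billion nominal in 1988 is worth more expressed in 1996 dollars:
print(round(to_constant_1996_dollars(10.0, 1988), 1))  # 12.8
```

Expressing every year in the same 1996 dollars is what makes the year-to-year comparisons in appendix II meaningful, since a flat nominal trend would otherwise mask a real decline.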
Appendix II contains the complete overview and figures depicting our findings related to direct federal expenditures and employment in New Mexico. To determine the extent to which available government data provide reliable information on defense spending and employment, we evaluated the qualities of the existing federal data. We reviewed technical documentation for the sources used, interviewed agency officials about the data sources, conducted crosschecks of data that appeared in multiple sources but had been derived from the same source, and, in the case of DOD procurement expenditures, compared the results of DOD data to our survey results. Survey results are discussed in appendix III. Given the outcome of our review, federal data limitations and data reliability concerns are discussed in our findings and reflected in the report’s conclusions. Our work was conducted between November 1996 and October 1997 in accordance with generally accepted government auditing standards. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 14 days from its issue date. At that time, we will send copies of this report to other interested congressional committees and members. Copies will also be made available to others upon request. Please contact me at (202) 512-3092 if you or your staff have any questions concerning this report. Major contributors to this report were Carolyn Copper, John Oppenheim, and David Bernet.

Professional Aerospace Contractors Association of New Mexico, Albuquerque, New Mexico
Intel Corporation, Albuquerque, New Mexico
American Electronics Association, Santa Clara, California
Logistics Management Institute, McLean, Virginia
Academy for State and Local Governments, Washington, D.C.
National Council of State Governments, Washington, D.C.
National Legislative Council, Washington, D.C.
National Governors Association, Washington, D.C.
RAND, Washington, D.C.
This appendix presents, for 1988-96, (1) trends in total direct federal expenditures and employment in New Mexico and within specific spending categories, (2) defense-related and nondefense-related expenditures and employment, and (3) the Department of Energy’s (DOE) and the Department of Defense’s (DOD) share of the defense-related expenditures and employment. We used existing databases and a survey on how much money is directly spent and how many people are directly employed to determine expenditures and employment. We did not assess the indirect or induced effects of federal expenditures and employment. All expenditure data were adjusted for inflation and are presented in constant 1996 dollars. Data for all years were not always available. Federal expenditures in New Mexico fluctuated between about $10 billion and $12 billion from 1988 through 1996. The highest level of spending occurred in 1996 (see fig. II.1).

Figure II.1: Federal Expenditures in New Mexico (1988-96)

This increase in federal expenditures for New Mexico is consistent with nationwide trends. Total federal employment in New Mexico generally increased between 1988 and 1994, then declined through 1996. Total employment in 1996 was the lowest level of any year in the period (see fig. II.2). The decline in federal employment in New Mexico in the last several years is consistent with trends in declining nationwide federal employment.

Figure II.2: New Mexico Federal Employment (1988-96)

Figure II.3 shows the specific expenditure trends in procurement, grants, salaries and wages for federal workers, and direct payments to individuals.

Figure II.3: Total Federal Spending on Procurement, Grants, Federal Employee Salaries and Wages, and Direct Payments in New Mexico (1988-96)

Procurement expenditures in New Mexico have generally declined over time but did increase between 1989 and 1992. In the 1988-96 time frame, procurement expenditures were at their lowest in 1996.
Expenditures on grants and direct payments have increased over time and have not shown periods of decline. This is consistent with national trends. Federal salary and wage trends are marked by small increases over time, with periods of stability following an increase. Defense-related procurement expenditures far exceeded nondefense-related procurement expenditures during 1993-96, but both types of expenditures have been declining (see fig. II.4). The decline in defense-related expenditures is consistent with overall trends in declining DOD and DOE budgets. Figure II.4: Defense-Related and Nondefense-Related Federal Procurement Expenditures in New Mexico (1993-96) Nondefense-related agencies accounted for more of the expenditures for federal grants to New Mexico (see fig. II.5). The top five agencies in terms of expenditures on federal grants to New Mexico were the Departments of Health and Human Services (HHS), Transportation, Interior, Agriculture, and Education. Expenditures on nondefense-related grants were 99 percent of the total grant expenditures in each year between 1988 and 1996. Figure II.5: Defense-Related and Nondefense-Related Federal Grant Expenditures in New Mexico (1988-96) Defense-related agencies accounted for more of the total salaries and wages for federal workers than nondefense-related agencies between 1988 and 1996 (see fig. II.6). Figure II.6: Salaries and Wages to Defense-Related and Nondefense-Related Federal Workers in New Mexico (1988-96) Between 1988 and 1993, total expenditures on salaries and wages for nondefense-related workers increased steadily, then declined slowly over the last 4 years. On the other hand, salary and wage expenditures for defense-related workers generally declined between 1988 and 1993 but increased slightly between 1995 and 1996. Salaries and wages were at their highest in 1996 for defense-related workers and at their highest in 1993 for nondefense-related federal workers.
It is not possible to make clear federal agency distinctions in direct payment expenditures. These expenditures are commonly reported by federal program, not by federal agency. Given the reporting criterion used, we determined which federal program accounted for most of the direct payments in New Mexico. In some but not all cases, this information is sufficient to determine which federal agency accounted for most of the expenditures. Programs administered by HHS accounted for over 50 percent of the total direct payment expenditures in New Mexico in each year between 1988 and 1996: the average was 63 percent (see fig. II.7). The programs included in the HHS roll-up include Social Security, Medicare, and Supplemental Security Income. Figure II.7: Distribution of Federal Direct Payments in New Mexico, by Federal Program (1988-96) Payments for federal retirement and disability made up the second largest category of direct payments in New Mexico in each year between 1988 and 1996. On average, these payments accounted for 18 percent of all direct payments made in New Mexico during 1988-96. The Food Stamp Program, administered by the Department of Agriculture, on average, accounted for 5 percent, and direct payments to individuals associated with all other programs, on average, accounted for 14 percent of the total direct payments over the same time period. We could not determine the breakdown between the defense-related and nondefense-related distribution of federal retirement payments directly from the Census data. Therefore, we obtained additional data from DOD and the Office of Personnel Management (OPM). Figure II.8 shows that payments to workers retired from the defense-related agencies account for the majority—on average 68 percent—of the total annuities for retired federal workers in New Mexico, between 1988 and 1996. Total annuities for defense and nondefense-related retired federal workers have increased over time. 
Figure II.8: Total Annuities for Federal Workers Living in New Mexico and Retired From Defense-Related and Nondefense-Related Agencies (1988-96) Federal workers from the defense-related agencies accounted for the majority of the total federal employment in New Mexico during 1988-96 (see fig. II.9). Federal jobs in the defense-related agencies, on average, accounted for 72 percent of the total federal jobs in New Mexico. Total federal employment declined by approximately 4,000 jobs between 1992 and 1996; about 84 percent of these jobs were in defense-related agencies. Figure II.9: Defense-Related and Nondefense-Related Federal Employment in New Mexico (1988-96) Defense-related agencies in New Mexico account for about 68 percent of the federal retirees, on average, between 1990 and 1996. The number of federal workers retired from defense and nondefense-related agencies and living in New Mexico has increased over time. Figure II.10: Federal Retired Workers From Defense and Nondefense-Related Agencies Living in New Mexico (1988-96) The defense-related agencies in New Mexico accounted for the majority of procurement expenditures, total annuities for retired federal workers, and salaries and wages for federal employees. In figures II.11, II.12, and II.14, we show the trends in the DOD and DOE share of the expenditures in each of these categories. We also show the number of DOD and DOE federal retirees in New Mexico (see fig. II.13). Between 1993 and 1996, DOE accounted for more of the defense procurement dollars that went to New Mexico than DOD (see fig. II.11). Consistent with overall declining DOE and DOD budgets, DOE and DOD procurement expenditures in New Mexico have declined in the last several years. 
Figure II.11: DOD and DOE Procurement Expenditures in New Mexico (1993-96) Figure II.12 shows that payments to DOD retired federal workers living in New Mexico account for most of the total annuities to federal workers retired from defense-related agencies between 1990 and 1996. On average, annuities to retired DOD workers accounted for 98 percent of total annuities between 1990 and 1996. Figure II.12: Annuities to Workers Retired From DOD and DOE and Living in New Mexico (1990-96) Also, more former DOD than DOE federal employees were living in New Mexico between 1990 and 1996 (see fig. II.13). Figure II.13: DOD and DOE Retired Federal Workers in New Mexico (1990-96) The increase in retired DOD workers in New Mexico is consistent with an overall increase in the number of retired active duty military members and DOD civilians. Figure II.14 shows that DOD also accounts for nearly all of the salary and wage expenditures for federal employees of defense-related agencies. Figure II.14: DOD and DOE Federal Employee Salary and Wage Expenditures in New Mexico (1988-96) On average, DOD accounted for 93 percent of the defense-related salaries and wages for federal employees. The total amount of DOD and DOE salary and wage expenditures has fluctuated somewhat over the years, but no sharp increases or decreases have occurred. DOE's workforce consists mostly of prime contractor employees, who are not counted as federal employees; thus, their numbers are not included in federal data. DOE data we obtained indicate that the salaries and wages for DOE prime contractor employees in New Mexico are greater than those of DOD federal employees in the state. For example, between 1990 and 1994 the total salaries and wages for DOD federal employees were about $4 billion and, for DOE prime contractors, about $6 billion. Comparable figures on the total compensation to DOD prime contractor employees in New Mexico were not available.
However, the data we obtained from our survey of the top New Mexico contractors show that the total compensation to their employees was $332 million between 1990 and 1994, or about $66 million per year. Defense-related federal employment in New Mexico is higher than nondefense-related employment. In this section, we show the DOD and DOE portions of defense-related employment over time, including DOD's and DOE's numbers and types of occupations. On average, DOD accounted for 97 percent of the total defense-related federal employment in New Mexico between 1988 and 1996 (see fig. II.15). Figure II.15: DOE and DOD Employment in New Mexico (1988-96) In each year between 1988 and 1996, active duty military members were the single largest group of DOD federal employees in New Mexico. Inactive duty military and DOD civilian employees, respectively, accounted for the second and third largest components of DOD federal employment (see fig. II.16). Figure II.16: DOD Active, Inactive, and Civilian Employment in New Mexico (1988-96) Active duty and inactive duty military members and DOD civilians ranked first, third, and second, respectively, in their shares of salary and wages for DOD federal employees in New Mexico from 1988 to 1996 (see fig. II.17). Figure II.17: Salary and Wages for DOD Active and Inactive Duty Members and DOD Civilians in New Mexico (1988-96) Between 1988 and 1996, more of the DOD active duty military jobs in New Mexico were blue collar and technical compared to administrative, clerical, white collar, or professional occupations (see fig. II.18). Figure II.18: Job Occupations of DOD Active Duty Military in New Mexico (1988-96) The job occupations of DOD civilians were more evenly dispersed across categories than DOD military jobs. Professional occupations accounted for the most DOD civilian jobs in New Mexico between 1988 and 1996 (see fig. II.19).
Figure II.19: Job Occupations of DOD Civilians in New Mexico (1988-96) The majority of DOE federal jobs in New Mexico between 1988 and 1996 were professional and administrative (see fig. II.20). Figure II.20: Job Occupations of DOE Federal Employees in New Mexico (1988-96) The principal purpose of our survey was to determine and characterize the flow of defense dollars to contractors and to illuminate and quantify the limitations of existing data sources that document defense spending in states. For our survey sample, we selected contractors who were among the top five in terms of the total dollar amount of DOD prime contracts awarded in fiscal year 1996. Time and resource constraints prevented us from surveying every business that was awarded a defense contract and performed work in New Mexico. For example, in 1996 alone, 471 businesses were awarded DOD contracts exceeding $25,000 for work principally done in New Mexico. We obtained DOD’s DD350 data to determine the total value of DOD prime contracts awarded to all businesses in 1996 with the principal place of work in New Mexico. From this population we selected five contractors: Honeywell, DynCorp, EG&G, Kit Pack Company, and Lockheed Martin. In 1996, prime contracts to these businesses accounted for 26 percent of the total value of all DOD prime contracts awarded to businesses in New Mexico. In the period covered by our survey, that is, 1988-96, the percentage of total DOD prime contract awards accounted for by the top five New Mexico contractors ranged from 26 to 46 percent. Different companies have been in the list of the top five over the years. However, over the survey period, Honeywell and DynCorp were consistently among the top five. Contractors were asked to complete several questions about DOD contracts they were awarded as a prime and subcontractor between 1988-96. 
We asked them to indicate the total value of all DOD contracts received, the dollar amount of contract work that was subcontracted or was interdivisional work, the amounts subcontracted in-state and out-of-state, the amount of salary and wages for all contracts completed by the contractor and by subcontractors, and the number of full-time equivalent (FTE) positions for work completed by the contractor and for subcontractors. As a group Honeywell, Lockheed Martin, DynCorp, and EG&G are large, diversified corporations with business establishments physically located in New Mexico but actual corporate headquarters located elsewhere in the country. Kit Pack is a relatively smaller company, with its business headquarters and all operations located in New Mexico. During the period of time covered by our survey, Honeywell’s principal DOD work in New Mexico was research, development, and testing and evaluation services for military aircraft and the manufacturing of aircraft avionics components. In 1996, DOD awarded prime contracts to Honeywell to provide automatic pilot mechanisms; flight instruments; and research, development, and testing and evaluation services related to aircraft engine manufacturing, among other things. Its survey data was completed by staff at Honeywell’s business establishment in Albuquerque. DynCorp is a large professional and technical services firm. DynCorp’s principal work in New Mexico is providing business services, which include aircraft maintenance and repair at military bases, and operations services provided at government-owned facilities. In 1996, DOD awarded prime contracts to DynCorp to provide maintenance and repair services to equipment and laboratory instruments, telecommunications services, and other services associated with operating a government-owned facility at White Sands Missile Range, among other things. DynCorp’s survey data was completed by staff at the corporate headquarters in Reston, Virginia. 
DynCorp’s responses were based on financial data for DynCorp and its subsidiaries that also operate in New Mexico (e.g., Aerotherm). EG&G’s principal DOD work in New Mexico is providing communications equipment; operating radar and navigation facilities at Holloman Air Force Base; and doing advanced research, development, testing and evaluation work. In 1996, DOD awarded prime contracts to EG&G to provide advanced development and exploratory research and development (including medical) services at Kirtland Air Force Base and to operate radar and navigation facilities at Holloman Air Force Base, among other things. EG&G’s survey data was completed by staff at the Albuquerque office and includes data only for EG&G Management Systems. Kit Pack Company is located in Las Cruces, south of Holloman Air Force Base near White Sands Missile Range. Kit Pack’s principal DOD work in New Mexico is providing aircraft spare parts and modification kits. In 1996, DOD awarded prime contracts to Kit Pack to provide aircraft hydraulics, vacuum and deicing system components, airframe structural components, and torque converters and speed changers, among other things. After it completed and returned the survey to us, Kit Pack officials informed us that it was currently operating under Chapter 11 bankruptcy due to the termination for default of an Army contract. Kit Pack had filed an appeal of the termination, which was pending when we completed our work. The company indicated that it has seen a severe reduction in the number of DOD contracts awarded since it filed for bankruptcy. Kit Pack staff in Las Cruces completed our survey. We were unable to obtain survey information from Lockheed Martin. Company officials indicated that they did not have the type of information we requested broken out by states or geographical locations. 
In a follow-up meeting, company officials provided us with information on their total expenditures to New Mexico suppliers, annual payroll for their employees in New Mexico and the number of employees in the state between 1992 and 1996. The information was developed by staff in Lockheed Martin’s Washington operations office. We could not use Lockheed Martin’s information because it was not broken out by specific federal agencies, nor could we determine whether the total expenditures, payroll, or employment were associated with government-funded work or whether they were part of the company’s commercial business. Over the course of several meetings and conversations with Lockheed Martin officials, we obtained detailed supplier expenditure information from the Lockheed Martin Consolidated Procurement Program which was broken out by specific Lockheed Martin business units. Company officials said that this would provide an indication of the type of business activity (e.g., DOD, DOE, NASA, and commercial) that the expenditures were made for. In addition, we were given information on corporate sales and payroll by staff in Lockheed Martin’s tax department. We discovered several discrepancies in the company’s financial information. When we discussed these with company officials, they indicated that the data provided by the Washington operations office were “less reliable” than other data. Company officials also indicated that their record-keeping had been challenged by the recent merger/acquisition activities (i.e., Lockheed and Martin Marietta in 1995 and the Loral acquisition in 1997). Lockheed Martin officials said that different companies had different information systems and that some information may have been lost during the recent merger. Our survey was not designed to specify or measure the exact amount of all DOD contract dollars that flow into New Mexico. 
Rather, its purpose was to reflect the nature of the flow of DOD prime and subcontract dollars to a sample of top New Mexico contractors and to compare these results to existing DOD data. Among the four contractors that completed the survey, none indicated that they could not provide reliable responses to the survey items. The most common limitation was the lack of information on FTEs and wages for subcontracted work. Specifically, contractors indicated the following limitations in their responses to us. Honeywell provided information on the dollar amount of the orders it received during the calendar year and estimates of subcontracted work and of the employees and wages associated with subcontracted work. Kit Pack did not have FTE or wage information on its subcontractors and indicated that it no longer had payroll records for its own staff for 1988, 1989, or 1991. EG&G did not have records for FTEs and wages associated with subcontracted work. DynCorp did not have information on its subcontractors prior to 1993. To report fiscal year information, DynCorp had to convert some company financial data that was not identified by fiscal years. We treated all survey data received from contractors as proprietary. Therefore, in discussing survey findings, contractor names are not used and data are aggregated to protect business-sensitive information. All dollars were adjusted for inflation and are expressed in constant 1996 dollars. All of the contractors surveyed were DOD prime contractors. Two of the four contractors we surveyed indicated that they were also DOD subcontractors. The total amount of DOD prime contract and subcontract awards has declined over the 9-year period. The totals reported for 1996 were the lowest of all the years. For the 9-year period of our survey, expenditures for DOD prime contracts ($1.5 billion) were roughly the same as for subcontracts ($1.4 billion). However, in 5 of the 9 years, the contractors received more subcontract than prime contract dollars (see fig.
III.1). Figure III.1: DOD Contracts Awarded to the Top Four New Mexico Defense Contractors (1988-96) Between 1988 and 1996, the percent of prime contract dollars that remained in-state was consistently greater than 80 percent (see fig. III.2). The 9-year average was 83 percent. Figure III.2: Contract Dollars Received by the Top Four New Mexico Defense Contractors That Stayed In-State (1988-96) Although the average percent of prime contract dollars that remained in New Mexico was high, examination of specific contractor data indicates important exceptions. For two of the contractors, the survey results indicated that nearly 100 percent of the prime contract dollars they received remained in-state between 1988 and 1996. However, one contractor’s data shows that less than 50 percent of prime contract dollars received remained in-state each year between 1988 and 1996. Approximately 70 percent of the total prime contract awards received by another contractor remained in-state for all years (see fig. III.3). Figure III.3: Differences in Percent of Prime Contract Dollars That Remained In-State (1988-96) For the two contractors that were also DOD subcontractors, a slightly smaller percentage of their subcontract dollars remained in-state compared to the percentage of their prime contract dollars (see fig. III.4). On average, 75 percent of subcontract dollars remained in-state between 1988 and 1996. Figure III.4: Subcontract Dollars That Stayed In-State (1988-96) The contractors indicated that the majority of jobs supported by their DOD prime contracts remained in-state. On average, 73 percent of the jobs remained in-state during 1988-96. The lowest yearly percentage was 66 percent in 1989 and 1990, and the highest was 83 percent in 1996 (see fig. III.5). 
Figure III.5: DOD Prime Contract and Subcontract Jobs That Stayed In-State (1988-96) On average, 73 percent of the total wages for employees working on DOD prime contracts and subcontracts remained in-state between 1988 and 1996 (see fig. III.6). From 1988 to 1996, the percent of wages that remained in-state generally increased. Figure III.6: Wages for DOD Prime Contract and Subcontract Work That Stayed In-State (1988-96) We compared our survey results to DOD's records of the total amount of contract awards received by the contractors between 1994 and 1996. DOD sources collect and report information only on prime contracts, while our survey collected information on both DOD prime contracts and subcontracts. Thus, we expected that DOD's records and the contractors' records would differ, as the survey revealed. Therefore, we compared DOD's records of total prime contracts to our survey results on the amount of prime contracts received by the contractors in New Mexico that remained in the state. However, to shed further light on and quantify, where possible, the limitations in existing DOD data, we also compared the amount of total contracts, defined as in-state prime contracts and subcontracts, to the DOD totals, defined as prime contracts (see fig. III.7). The overall comparison between the contractors' records and DOD's records of total prime contract amounts shows that DOD records can both overstate and understate the total amount of prime contracts that actually end up in a state's economy. In 1994, the contractors' records show that $93.6 million in DOD prime contract work was done in New Mexico, whereas DOD's records indicate that the contractors received $144.9 million in prime contracts, representing a possible $51 million, or about a 54-percent, overstatement.
However, in 1995, the contractors' records showed that $143.3 million in DOD prime contract work was done in the state, whereas DOD's records show that the businesses received $117.2 million, representing a possible $26 million, or about an 18-percent, understatement. As expected, a comparison of the contractors' records of total contracts (in-state prime contracts and in-state subcontracts) to the existing DOD records of total prime contracts shows that the totals reported by the contractors were consistently greater than the totals reported in DOD's records.
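The overstatement and understatement percentages above follow from a simple comparison of the two record systems, measured relative to the contractors' records. A minimal sketch, using the dollar figures quoted in the text (in millions), is:

```python
# Compare DOD's records of prime contract awards with the contractors'
# records of prime contract work actually done in New Mexico.
# A positive result means DOD's records overstate in-state work;
# a negative result means they understate it.
def discrepancy_percent(dod_records, contractor_records):
    """Percent by which DOD records over- (+) or understate (-) in-state
    prime contract work, relative to the contractors' records."""
    return (dod_records - contractor_records) / contractor_records * 100

over_1994 = discrepancy_percent(144.9, 93.6)    # roughly +55 percent
under_1995 = discrepancy_percent(117.2, 143.3)  # roughly -18 percent
```

The denominator choice matters: the report expresses each discrepancy as a share of the work the contractors say was actually done in-state, not as a share of DOD's reported totals.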
Pursuant to a congressional request, GAO examined defense and other federal spending in the state of New Mexico, focusing on: (1) characteristics of New Mexico's economy and changes in it; (2) the amount of direct defense-related and nondefense-related federal spending in the state and the direct federal employment associated with both, over time; and (3) the extent to which available government data can provide reliable information on defense spending and employment. GAO noted that: (1) New Mexico is home to two Department of Energy (DOE) national laboratories and four Department of Defense (DOD) military installations, among other federal activities; (2) state officials indicate that New Mexico's economy is "heavily dependent" upon federal expenditures; (3) in 1996, New Mexico was fourth among states in the per capita distribution of federal dollars and first in return on federal tax dollars; (4) while parts of the state have relatively strong economies, in 1994 New Mexico's poverty rate was the second highest in the country and its per capita income was 48th in the country; (5) although defense-related spending has been declining, New Mexico's gross state product and total per capita income have been increasing, indicating that the economy is growing and that efforts to diversify the economy may be having a positive effect; (6) one can learn several things from the available federal government expenditure and employment data for New Mexico; (7) DOD and DOE expenditures have consistently represented the largest share of all federal expenditures for procurement and salaries and wages in New Mexico; (8) defense-related employment has also consistently represented the largest share of total federal employment in New Mexico, including retired federal workers; (9) DOD and DOE do not contribute equally on types of defense-related spending or defense-related employment, revealing relevant distinctions between the types of direct economic contributions made by these 
agencies; (10) DOE contributes most in federal procurement expenditures and private contractor employment; (11) DOD contributes most in federal salaries and wages and federal employment, namely active duty military and retired employees; (12) existing government data, however, contributes to only a partial understanding of the type of federal dollars that enter a state's economy and the employment supported by the expenditures; (13) GAO's research based on New Mexico shows that the data have limitations that severely restrict the ability to determine the total amount and distribution of federal funding and jobs in the state; (14) key limitations include: (a) reporting thresholds that exclude millions in procurement expenditures; (b) the reporting of the value of an obligation, rather than the money actually spent; (c) the absence of any comprehensive source of primary data that systematically identifies private sector employment associated with federal contracts; and (d) DOD's lack of data on subcontracts; and (15) since these data sources are not unique to New Mexico, these limitations would also apply to assessments of other states.
Most IMF borrower countries have reduced important barriers to trade over the past decade. Although progress has varied among countries and over time, tariff and nontariff barriers have generally fallen. Despite this progress, many policies remain that restrict free and open trade, and some IMF borrowers still maintain very high restraints. However, borrowers' restrictiveness levels are similar to those of nonborrowers, and about two-thirds are WTO members. Only a few of the 98 IMF borrowers trade enough to have much ability to significantly affect any individual sectors of the U.S. economy. We analyzed the import barriers of IMF borrower countries using several available measures of restrictiveness, including average tariff rates; nontariff barriers; and indexes constructed by the IMF, the Heritage Foundation, and the Fraser Institute. Although these indicators do not comprehensively measure all the policies that countries may use to restrict trade, they do reflect important barriers and provide information on the relative restrictiveness of countries compared with one another and over time. Overall, we found that these measures demonstrated growing trade liberalization. The IMF conducted a study of 27 countries' trade policies during 1990-96, using its own restrictiveness measures. The study found that during this period the share of countries labeled "restrictive" fell from 63 percent to 41 percent, while the share of "open" countries rose from 11 percent to 33 percent. Taking the same 27 countries and reviewing their progress through 1998, we found that the share of restrictive countries fell further, to 7 percent, and the share of open countries rose to 48 percent. Other indicators also confirmed this liberalization trend across the full group of 98 IMF borrowers. Despite the progress made in reducing trade barriers, many restraints remain that inhibit imports into IMF borrower countries.
According to the IMF’s measure, about one-half of the 98 current borrowers maintain moderate (38 percent of borrowers) or restrictive (14 percent of borrowers) barriers. The Heritage Foundation and Fraser Institute indicators also show a range of restrictiveness, although the Heritage Foundation’s measure reported less openness than either the IMF or Fraser Institute indicator, placing over one-half of borrowers in its most restrictive groupings. The tariff data we reviewed showed that average tariffs for borrowers ranged from as low as 0.1 percent to over 40 percent, but the majority fell between 7 percent and 24 percent. In comparison, the United States, the EU, and Japan maintain average tariffs of approximately 3 to 7 percent. Thirty of the 98 borrowers are listed in a March 1999 U.S. government report that identifies the most significant foreign trade barriers that affect U.S. exports. Most of the 30 countries listed were cited for having inadequate intellectual property protection or for maintaining restrictive import policies, such as setting investment barriers and creating barriers to foreign participation in government procurement. Our analysis shows that the 98 current IMF borrowers restrict trade to about the same extent as the 78 IMF member countries that do not owe funds to the IMF. As figure 1 shows, the IMF trade measure rates 48 percent of borrowers as open, compared with 53 percent of nonborrowers; 38 percent as moderate, compared with 33 percent of nonborrowers; and 14 percent as restrictive, compared with 14 percent of nonborrowers. Also, lesser economically developed borrowers and nonborrowers alike tended to have higher levels of restrictiveness. However, we did find that borrowers and nonborrowers tend to use different types of policies to restrict trade. Borrowers generally use higher tariff barriers, while nonborrowers tend to use higher nontariff barriers such as import quotas. Of the 98 IMF borrowers, about two-thirds are WTO members. 
WTO membership commits them to following WTO disciplines on their trade policies, providing some degree of market access, and complying with WTO dispute settlement procedures. Many IMF borrowers have also undertaken additional WTO liberalization commitments, as well as made commitments under bilateral agreements with the United States on investment and other matters. For example, 37 IMF borrowers have signed the WTO agreement on basic telecommunications services, and 51 have reached bilateral accords with the United States on such matters as investment and intellectual property. Despite greater integration into the world trading system and growing trade, many borrower countries have been involved in trade disputes with the United States. One-fifth (17) of the 98 borrowers have been subject to formal market access complaints under the WTO's dispute settlement procedures. Only a few of the 98 IMF borrowers are large enough traders to significantly affect any particular sectors of the U.S. economy. Eight borrowers accounted for 21 percent of U.S. trade in 1998, while the other 90 borrowers accounted for 5 percent. As figure 2 shows, of these eight countries, Mexico traded the most with the United States in 1998, accounting for about 11 percent of U.S. trade; followed by Korea with 3 percent; Brazil with 2 percent; and the Philippines, Thailand, Venezuela, India, and Indonesia, with about 1 percent each. However, any one of the other 90 borrowers could significantly affect U.S. companies or workers in certain product sectors if it accounted for a large share of U.S. trade in a particular product. For example, flat-rolled iron and nonalloy steel imports from Russia account for approximately 26 percent of U.S. imports of that product. These eight countries generally maintain moderate barriers to trade.
According to the tariff and other information we analyzed, most have average tariffs between 10 percent and 20 percent and are rated by various indicators as having significant nontariff barriers. For example, Thailand’s average tariff rate in 1998 was 18 percent, Brazil’s was 15 percent, and Indonesia’s was 10 percent. Exceptions include Korea, which in 1998 had an average tariff rate of 8 percent; and India, with a 23 percent average rate. Mexico’s average tariff rate is about 13 percent for all countries outside of the North American Free Trade Agreement (NAFTA), but its average tariff rate on U.S. products is about 2 percent due to NAFTA. All eight of these U.S. trade partners are members of the WTO, and most have bilateral trade agreements with the United States. We evaluated the import barriers and export policies of four of the eight IMF borrowers that accounted for 21 percent of U.S. trade in 1998: Brazil, Indonesia, Korea, and Thailand. These countries accounted for about 7 percent of U.S. trade in 1998. Financial crises in Brazil, Indonesia, Korea, and Thailand have substantially affected their trade with the United States, even as the U.S. government has remained concerned about various trade policies in the four countries. The four countries have experienced either rising trade surpluses or falling trade deficits with the United States since their financial crises began, due primarily to a large decline in U.S. exports to them. Even before their crises began, however, the U.S. government had been concerned about a number of these countries’ trade policies. Prior to the crises, much of the executive branch’s attention had been focused on import policies that affected U.S. exports to the four countries, especially in Korea. 
Import policies of concern in the four countries have included Korean barriers to imports and distribution of beef, automobiles, and distilled spirits, government procurement procedures in airport construction, and import clearance procedures; restrictions on automobile imports in Brazil and Thailand; and inadequate protection of intellectual property rights, especially in Indonesia. Export policies that the executive branch has been concerned about include Korean government support to its steel and semiconductor industries, and Indonesian government subsidies to its automobile industry. The United States continues to press these and other trade issues even as it places priority on restoring the overall health of crisis countries for their own and the U.S.’ benefit. Any analysis of import barriers and export policies in Brazil, Indonesia, Korea, and Thailand must acknowledge the effects those countries’ recent financial crises have had on their economies and trade. The crises that began in 1997 dramatically reduced incomes and demand for domestic as well as imported goods. The value of these nations’ currencies declined, with each of the countries’ currencies depreciating by 30-50 percent or more relative to the U.S. dollar in real (inflation-adjusted) terms. The depreciations reduced the purchasing power of local currencies, making it hard for these countries to buy U.S. exports. The depreciations also made the affected nation’s exports more competitive on world markets. World prices for key commodities fell, particularly for oil, agricultural goods, and electronic products. Outflows of foreign capital and domestic credit crunches reduced output and stalled commerce, with direct implications for trade accounts. Even without policy changes, such macroeconomic disturbances have a major influence on overall trade levels and balances. 
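The 30-50 percent real depreciations cited above combine nominal exchange rate movements with relative inflation. A minimal sketch of that calculation follows; the function name and all figures in the example are illustrative assumptions, not data from this report:

```python
def real_depreciation(rate_before, rate_after, foreign_inflation, us_inflation):
    """Percent real (inflation-adjusted) depreciation of a currency vs. the dollar.

    Exchange rates are in foreign-currency units per dollar, so a rising
    rate means the currency is weakening.  Inflation arguments are
    cumulative over the same period (0.50 = 50 percent).
    """
    # Deflate the nominal rate by relative price changes to get the real rate.
    real_after = rate_after * (1 + us_inflation) / (1 + foreign_inflation)
    # Express depreciation as the percent decline in the currency's real value.
    return (1 - rate_before / real_after) * 100

# Illustrative figures only: a currency sliding from 1,000 to 2,400 per
# dollar while domestic prices rise 50 percent and U.S. prices rise 2 percent.
print(round(real_depreciation(1000, 2400, 0.50, 0.02), 1))  # 38.7
```

The example shows why a large nominal depreciation overstates the real one when the crisis country also experiences high inflation.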
Since their crises erupted in 1997, Indonesia and Thailand have widened their trade surpluses with other countries, Korea’s trade balance went from a deficit to a surplus, and Brazil’s deficit has fallen. Most of the shift was caused by a decline in these nations’ imports from abroad, rather than by increases in their exports to other countries. Even though the volume of their exports rose at a double-digit rate, the dollar value of exports from these nations was actually lower in 1998 than it was in 1997 because dollar prices for many of their goods were falling dramatically. The United States, meanwhile, has seen a worsening of its trade deficit with all countries worldwide, not only in absolute terms but also relative to the size of its economy. From 1997 to 1998, the U.S. trade surplus with Brazil fell; for Korea, a U.S. surplus changed to a deficit; and for Indonesia and Thailand, U.S. deficits grew larger. According to a March 1999 USTR report, U.S. government trade policy in 1999 remains centered on assuring recovery in the nations in financial crisis. Stabilization and growth are necessary before customers in Brazil, Indonesia, Korea, and Thailand can resume buying U.S. exports at levels at or above those in the past. Healthy economies will also absorb more of the output of local producers, easing pressures on U.S. firms competing with these nations’ suppliers. Economists also suggest that the U.S. economy will suffer more if crisis countries are unable to export as they recover. For example, a 1998 Brookings Institution paper that analyzed the impact of the Asian financial crisis on trade and capital flows reached this conclusion. In essence, a downward spiral of falling production, consumption, and imports would ensue, hurting both these four countries and the United States. At the same time, U.S. efforts to address trade policies of concern continue. 
Items being actively pursued with Brazil, Indonesia, Korea, and Thailand include long-standing import market access and export subsidy issues, and the need to improve protection of intellectual property rights. Since the crisis unfolded, two additional types of issues have been added to the U.S. agenda: (1) ensuring that the countries do not reverse the liberalization accomplished in prior years; and (2) more vigorously addressing governmental and industry practices that the U.S. government and industry believe may have contributed to the crisis, such as directed credit and other privileges for industries deemed by these nations’ governments to be important for economic development. The U.S. government has focused considerable attention in the last 3 years on eliminating or modifying certain import policies in Brazil, Indonesia, Korea, and Thailand that had restricted U.S. exports to those countries. The United States has invoked WTO dispute settlement procedures over some of these policies and has signed bilateral trade agreements to try to resolve other policies. The United States has had more concerns about Korea’s import policies than about the other three countries in our review. The United States has invoked WTO dispute settlement procedures against Korean policies concerning beef, distilled spirits, airport procurement procedures, and import clearance procedures that have delayed or impeded the entry of U.S. products into Korea. Other Korean import policies that have been high priorities for the executive branch include restrictions on imports and distribution of pharmaceutical products, motor vehicles, agricultural and food products, and cosmetics. In Brazil, U.S. concerns have included policies that allegedly discriminated against U.S. automobile exports and that restrict the availability of import financing. In Indonesia, the main U.S. concern has been over protection of intellectual property rights. In Thailand, U.S. 
priorities have included high import duties on certain agricultural and food products, high automobile tariffs, inadequate protection of intellectual property rights, and inefficient customs operations. Appendix I contains more information on these and other U.S. priority import policies in Brazil, Indonesia, Korea, and Thailand. Since 1996, the United States has formally invoked WTO dispute settlement procedures over a number of Brazilian, Indonesian, and Korean subsidies and has found subsidies in Brazil, Korea, and Thailand to be countervailable under U.S. trade law; that is, that the subsidies both were being provided by their governments and were conferring a benefit to their companies under the meaning of those laws, or were specifically prohibited by WTO agreements. In addition, the U.S. government has been concerned about possible export policies, such as Korean government-directed lending and support to its steel industry and the Brazilian government’s auto sector policies. Korea is the largest economy of the four countries we reviewed and the world’s seventh largest exporter. Korea was the U.S.’ ninth largest export market in 1998, dropping from its position of fifth largest in 1997 due to its financial crisis. The United States ran a $7.4-billion merchandise trade deficit with Korea in 1998, compared to a $1.9 billion surplus in 1997. The trade deficit resulted from a 34 percent drop in U.S. merchandise exports to Korea, from $25.1 billion in 1997 to $16.5 billion in 1998, and a 3.4 percent increase in Korean merchandise exports to the United States, from $23.2 billion in 1997 to $23.9 billion in 1998. Major Korean exports to the United States in 1998 included machinery and transport equipment, steel, manufactured goods, and chemicals and related products. Over the last 30 years, Korea has pursued a strongly export-oriented economic development model with considerable government involvement.
Under this model, the Korean government has worked closely with Korean financial institutions and large corporate conglomerates to promote exports in targeted sectors, such as heavy and chemical industries, consumer electronics, and automobiles. The overinvestment in certain sectors and excessive corporate debt that this development strategy eventually produced contributed to Korea’s recent financial crisis. Government assistance to exporters has consisted of providing a range of industry-specific subsidies, tax benefits, export financing, export marketing assistance, government-influenced lending, and research and development assistance. In recent years, the United States has been concerned over Korean subsidies and other export policies. Korean Subsidies and Internal Supports—U.S.-initiated WTO Disputes and Countervailing Duty Cases: In February 1999, the United States invoked WTO dispute settlement procedures against Korean beef industry policies. The United States alleged that Korean regulations discriminated against and constrained opportunities for the sale of imported beef in Korea and that Korea provided domestic support to its cattle industry in amounts that exceeded its WTO tariff reduction schedule. The United States and Korea engaged in formal consultations over this matter in mid-March, and a panel to consider the matter was formed on May 26, 1999. Also, within the last 5 years, the Commerce Department has determined that a number of Korean subsidies to its steel industry were countervailable under U.S. trade law. The three cases have involved stainless steel plate in coils; stainless steel sheet and strip in coils; and certain cut-to-length, carbon-quality steel plate. (App. II provides more details concerning U.S. countervailing duty law, WTO subsidies rules, and these specific cases.) U.S. Concerns About Other Korean Policies: In addition to policies that the U.S. government has formally raised in the WTO or found to be countervailable under U.S.
trade law, the executive branch has been concerned about other Korean export and subsidy policies in the last 3 years. These policies have involved government-directed lending, government involvement in and support to the Korean steel industry, restructuring of corporate conglomerates (particularly in the automobile, steel, shipbuilding, and semiconductor industries), and semiconductors. Government-directed Lending: The Commerce Department has reported that it is monitoring whether the Korean government may be influencing commercial banks to lend funds at preferential rates to targeted industries—particularly to Korea’s steel and semiconductor industries. The U.S. government has raised this issue with Korean government and industry officials on numerous occasions. In addition, Korea’s IMF and World Bank programs contain reforms to Korea’s financial system and corporate sector that help to curtail the government’s ability to direct bank lending on noncommercial terms. As previously mentioned, Commerce has examined potential subsidies resulting from alleged government-directed lending to the Korean steel industry in three recent countervailing duty investigations of certain Korean steel products. Steel Industry: The U.S. government and U.S. steel industry have been concerned for some time about Korean government involvement in and support for its steel industry, such as below-market-interest-rate loans extended by government-owned banks to steel producers. Several actions have taken place in addition to the countervailing duty cases previously discussed. In June 1995, the U.S. Committee on Pipe and Tube Imports filed a Section 301 petition alleging that Korea restricted exports of domestically produced steel sheet, controlled domestic prices below world prices, and diverted exports of pipe and tube products from the EU to the U.S. market.
The Committee withdrew its petition in July 1995 when Korea agreed to establish a consultative mechanism with the United States to provide information about Korea’s steel sheet, pipe, and tube production and exports. The Korean government also agreed to notify the United States of any measure to control steel production, pricing, or exports, and to not interfere in steel pricing or production. Although the consultative mechanism was extended for another year, and bilateral consultations were held in 1996 and 1997, the United States continued to raise concerns about Korean government influence over private-sector decisions concerning steel. In 1997 and 1998, for example, the United States asked the Korean government to respond to specific questions concerning Hanbo (Korea’s second largest steel producer), which collapsed financially and is now being sold. The United States was concerned that the Korean government may have provided subsidies to Hanbo and directed Korean banks to extend credit to the company—actions that may have contributed to prices that undercut competitors and displaced U.S. steel exports to Korea and other countries. As a result of a 30 percent surge in steel imports into the United States during the first 10 months of 1998 compared to the same period in 1997, of which about 6 percentage points came from Korea (Japan and Russia were other important suppliers), the United States initiated an extensive dialogue with the Korean government to ensure that its steel sector would operate on a market-driven basis rather than with Korean government help. In 1998, the Korean government provided written assurances that it would not support, or direct others to support, Hanbo and that the sale of the company would be market based and managed by a reputable international financial company. In addition, Hanbo temporarily shut down production at one of its plants that was of particular concern to the U.S. steel industry. 
The Korean government also announced its intention to privatize Korea’s largest and the world’s second largest steel producer, Pohang Iron and Steel Company (POSCO). Since December 1998, the Korean government has reduced its 33 percent stake in POSCO to 20.8 percent. The full privatization of POSCO would serve to remove the Korean government’s influence from the company’s pricing, production, and other business decisions. In addition to monitoring POSCO’s privatization, the U.S. government is continuing to monitor steel import trends and any potential Korean government support to other steel companies. In addition, the U.S. government believes that, if faithfully implemented, Korea’s financial and corporate restructuring efforts—particularly those involving bank oversight and lending limits—should help guarantee that Korea’s steel corporations operate on a market-oriented basis. Restructuring of Corporate Conglomerates: As part of Korea’s financial arrangements with the IMF, the Korean government is trying to restructure the five largest Korean industrial conglomerates, or “chaebol,” to make them more commercially oriented and to reduce their debt levels. These chaebol are swapping certain assets and subsidiaries, as part of the so-called “Big Deal.” The World Bank is taking the lead in assisting Korea with its corporate sector restructuring. The U.S. government has flagged corporate restructuring as a systemic change that could not only help the Korean economy regain and sustain its stability but also enhance market access. The U.S.
government has submitted questions to the Korean government on the specifics of certain restructuring efforts, including in the semiconductor sector, and emphasized that as a whole the restructuring should (1) yield more efficient, market-driven Korean firms without uneconomic business lines that contribute to excess capacity; and (2) be carried out in a manner that is consistent with Korea’s international obligations, particularly under the WTO Agreement on Subsidies and Countervailing Measures. The Commerce Department has reported that it is monitoring whether the Korean government might provide certain subsidies—such as tax breaks or drastic debt relief—as incentives to the companies to participate in the restructuring. In addition to these practices, the U.S. government in 1998 reported that Korea uses various tax-related measures that benefit Korean exporters or foreign investors in Korea. These include tax reserves for export losses and overseas market development, exemptions or reductions in duties on imported capital equipment to be used in exports, reductions in duties for imported aircraft and vessel parts, tax concessions to encourage foreign investment, tax concessions for overseas business losses, tax exemptions for overseas business development, and tax credits for investment in facilities. The Commerce Department also reported on Korean subsidy practices that benefit specific industry sectors. These sectoral practices include incentives to sustain steel companies; tax exemptions or credits for firms in designated manufacturing industries (machinery, electronics, aviation, defense, fine chemicals, genetic engineering, new basic materials, and antipollution technologies); tax incentives for multinational corporations in computer software and telecommunications; expense deductions for firms in traditional industries; support to miners when mines are closed; incentives to the stone industry; and assistance to small and medium-sized enterprises. 
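The U.S.-Korea merchandise trade figures reported earlier ($25.1 billion in U.S. exports falling to $16.5 billion, against imports of $23.2 billion rising to $23.9 billion) are internally consistent, as a quick arithmetic check shows. The snippet below simply restates the report’s numbers; the variable names are ours:

```python
# U.S.-Korea merchandise trade, billions of dollars (figures from the report).
us_exports = {1997: 25.1, 1998: 16.5}
us_imports = {1997: 23.2, 1998: 23.9}

# Balance = exports minus imports; a negative value is a U.S. deficit.
balance = {y: round(us_exports[y] - us_imports[y], 1) for y in (1997, 1998)}
print(balance)  # {1997: 1.9, 1998: -7.4} -- a $1.9B surplus becomes a $7.4B deficit

# Percent drop in U.S. exports from 1997 to 1998.
drop = (us_exports[1997] - us_exports[1998]) / us_exports[1997] * 100
print(round(drop))  # 34
```

The check confirms that the swing to a $7.4-billion deficit was driven almost entirely by the collapse in U.S. exports rather than by rising Korean shipments.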
Brazil was the U.S.’ 11th largest export market in 1998. In 1998, the United States ran a $5-billion trade surplus with Brazil. Brazilian merchandise exports to the United States totaled about $10 billion that year and consisted primarily of machinery and other manufactured goods. The Brazilian government does not provide many direct subsidies to exporters; however, the United States has been concerned about several that it does provide. WTO Disputes and Countervailing Duty Cases: Since 1996, the United States has participated in WTO cases involving two Brazilian subsidies. The United States invoked WTO dispute settlement procedures and held consultations with Brazil regarding various aspects of its automotive regime in August 1996, including provisions in its WTO-notified subsidy program for automobiles. In March 1998, the United States and Brazil signed an agreement settling the dispute. (See app. I for more details on this case.) The other WTO dispute was brought by Canada and involved PROEX, a Brazilian government export financing program. The United States reserved its rights as a third party in the dispute. In April 1999, a WTO dispute resolution panel found that PROEX’s interest equalization program was a prohibited export subsidy and that, because Brazil did not meet the conditions that allow developing countries more time than developed countries to remove prohibited export subsidies, the program must be withdrawn immediately. In addition to these WTO cases, in the last 3 years the U.S. government has found one Brazilian subsidy to manufacturers of certain hot-rolled flat-rolled carbon-quality steel products to be countervailable. (See app. II for more information about the PROEX dispute and the steel case.) Other Brazilian Subsidies of U.S. Concern: The U.S. government has been concerned about other Brazilian export programs. 
These programs include tax and tariff exemptions for equipment and materials imported for the production of goods for export, excise and sales tax exemptions on exported products, and rebates on materials used in the manufacture of exported products. Exporters enjoy exemptions from withholding tax for remittances sent overseas for loan payments and marketing, as well as from the financial operations tax for deposit receipts on export products. Exporters are also eligible for a rebate on social contribution taxes paid on locally acquired production inputs. According to the Commerce Department, tariff concessions Brazil introduced under its auto regime in December 1995 raised questions about the regime’s consistency with the WTO’s Agreement on Subsidies and Countervailing Measures. In 1998, Indonesia was the seventh largest U.S. trading partner among IMF borrowers but accounted for less than 1 percent of U.S. imports and exports. In 1998, the United States ran a $7.6-billion merchandise trade deficit with Indonesia, an increase of $2.5 billion from 1997. The increase in the merchandise trade deficit was mainly the result of a fall in U.S. exports to Indonesia in 1998 of $2.2 billion. Indonesia is a significant U.S. trading partner in some sectors, such as in U.S. imports of wood and rubber products. Indonesia has notified the WTO that it maintains a small number of subsidies. In October 1996, the United States and the EU initiated WTO dispute settlement procedures against two Indonesian subsidies to its automotive industry. One subsidy granted import duty relief to certain automotive parts and accessories for use in assembling or manufacturing motor vehicles based on the percentage of local content in the finished vehicles. 
The other subsidy permitted an Indonesian firm that was designated as a “pioneer” company to import tariff-free finished automobiles designated as “national cars” and to sell the national cars luxury tax-free for 3 years. Indonesia eliminated the subsidy to the pioneer company in January 1998 as a commitment to the IMF and, based on a June 1998 WTO appellate body ruling, Indonesia has until July 1999 to eliminate the local content subsidy. In addition to these automotive industry subsidies, in March 1999 the U.S. Commerce Department found that the Bank of Indonesia’s rediscount export financing program was an export subsidy; however, Commerce did not find it to be countervailable due to its small size. (See app. II for more details.) (Brazil rebates sales taxes on exports as they cross the border, and a similar mechanism functions in the case of interstate trade in the United States for certain products. Sales tax rates are considerably higher in Brazil than they are in U.S. states, according to the IMF, and the burden that would be imposed on exporters in the absence of such a rebate mechanism could be considerable.) The Thai government maintains a number of programs aimed at promoting exports in global markets, encouraging investment, and establishing or expanding industrial development zones. These programs include subsidies in the form of credits and tax exemptions on certain exports, and reduced tariffs on raw materials for products intended for reexport. In the past, the U.S. government has found a number of Thai subsidies to be countervailable, although in some cases no countervailing duty order was issued because the ITC did not find material injury to the competing U.S. industry.
The countervailable Thai subsidies have included export packing credits (short-term, preshipment export loans); tax and duty exemptions that allow exporting companies to import machinery and equipment free of import duties and business and local taxes; import duty exemptions for raw materials that allow companies to import raw and “essential” materials used in the production, mixing, and assembly of exports, free of import duties; and assistance for trading companies, which provides certain incentives to eligible trading companies. (See app. II for more details.) In addition to programs found to be countervailable, the U.S. government has identified several other Thai government export programs that are of potential concern. These programs include subsidized credit on some government-to-government sales of Thai rice, which benefit certain processed agricultural products and manufactured goods. Countries in an IMF financing arrangement sometimes have liberalized their trade systems within the context of their arrangements, although in many cases the liberalization has not been a condition of receiving disbursements of IMF funds. As part of their recent arrangements, Brazil, Indonesia, and Korea have liberalized their trade regimes to some degree. Brazil has modified one subsidy program and pledged not to introduce any new trade restrictions that hinder regional integration or are inconsistent with the WTO. Indonesia has reduced or eliminated some import tariffs and export restrictions and has committed to phase out most remaining nontariff import barriers and export restrictions by the year 2000. Korea has eliminated four subsidies and plans to make the operation of its subsidy programs more transparent. Korea is also making several changes to its import certification procedures. Thailand’s IMF program has no direct trade policy commitments. One reason for this, according to the U.S.
Treasury, is that Thailand had fewer distorting trade policies than the other three countries. Although Brazil, Indonesia, and Korea are undertaking some trade reform, their IMF financing arrangements focus primarily on macroeconomic and other structural reforms rather than trade reform. According to the Treasury and the IMF, restrictive trade policies were not major causes of the countries’ financial crises. Further, while several of the trade policies to be eliminated or modified under the three countries’ IMF programs have been of concern to the United States and other countries, the stated purpose of these measures is not to assist these countries’ trading partners but to make their economies operate more efficiently. That said, measures taken in an effort to restore economic stability should also contribute to market opening. In addition, as part of their IMF programs, Indonesia, Korea, and Thailand plan to further open their economies to foreign investment and to substantially restructure their financial and corporate sectors. For example, Korea has committed to end government-directed lending, which USTR views as a very significant trade-related commitment. These commitments, if fully implemented, could lead to increased U.S. investment in and trade with these countries. A fundamental objective of the IMF’s mission, as embodied in article I of its Articles of Agreement, is to facilitate the expansion and balanced growth of international trade. According to the IMF, trade liberalization, at both the national and global levels, is thus an integral part of structural adjustment policies incorporated in IMF programs and surveillance activities. As such, countries that have borrowed from the IMF sometimes have liberalized their trade systems within the context of their financing arrangements. Borrowers have eliminated or reduced tariffs or nontariff barriers to imports, such as import quotas, licensing, or other restrictions.
They also have ended or altered export policies, such as subsidies and export restrictions. In some cases, trade liberalization measures have been IMF “performance criteria,” which are conditions that a borrower generally must meet in order to qualify for future disbursements. In many cases, however, borrowers’ trade liberalization measures were not performance criteria, although this does not mean that the IMF or the borrower considered the measures to be unimportant to achieving the objectives of the financial arrangements. According to the IMF, for some borrowers trade reform can be a critical element of structural reforms. In addition, IMF financing arrangements typically require that countries pledge not to impose or intensify import restrictions for balance-of-payments reasons. Brazil, Indonesia, and Korea have undertaken some trade liberalization within the context of their recent IMF financing arrangements. Nevertheless, their overall IMF arrangements focus on macroeconomic and structural reforms other than trade reform because restrictive trade policies were not major causes of their financial crises, according to U.S. Treasury and IMF officials. Reflecting this reality, only one of the trade liberalization measures is a performance criterion—the requirement that Indonesia reduce export taxes on logs and sawn timber. Further, although several of the import and export policies to be eliminated or modified under their IMF programs have been of concern to the United States and other countries, the stated purpose of these reforms is not to assist the four countries’ trading partners but to make their economies operate more efficiently and thus help achieve the IMF program objectives of resolving the countries’ balance-of-payments problems and preventing their recurrence. Since December 1998, Brazil has made several trade commitments within the context of its IMF financing arrangements.
As table 1 shows, Brazil has committed to limit the scope of its interest equalization export subsidy program to capital goods, and, according to the IMF, Brazil has kept its pledge not to impose any new trade restrictions that hinder regional integration, are inconsistent with the WTO, or that are for balance-of-payments purposes. Since November 1997, Indonesia has made many changes to its trade policies in the context of its IMF financing arrangements. As table 2 shows, Indonesia has reduced tariffs on a range of mainly agricultural products and eliminated the government’s monopoly on importation and distribution of agricultural products. Also, Indonesia has pledged to eliminate all other import and export restrictions by the end of its IMF program in the year 2000, except for those necessary for health, safety, environment, or security reasons. In March 1999 testimony, a Commerce Department official stated that the U.S. government has been satisfied with Indonesia’s efforts to date in reforming its trade system. However, the official also said that the true test of these reforms will come when increased trade flows resume. As part of its recent IMF financing arrangements, among other actions, Korea has reduced some import barriers, eliminated four trade-related subsidies, and made improvements to the transparency of its subsidy programs. Korea has met every deadline for implementing these measures, although deadlines for completing some actions have not yet passed. Table 3 shows the implementation status of trade policy measures that Korea has committed to the IMF to implement since its December 1997 IMF financing program began. In addition to trade liberalization measures, as part of their IMF financing arrangements, Korea, Indonesia, and Thailand have committed to further open their economies to foreign investment and to substantially restructure their financial and corporate sectors. These commitments, if fully implemented, could lead to increased U.S.
investment in and trade with these countries. For example, Korea has eliminated the aggregate ceiling on foreign investment in Korean equities, as well as the foreign investment ceiling on domestic bonds. Other measures would facilitate friendly or hostile foreign mergers with, or acquisitions of, Korean companies, while yet others would ease restrictions in corporate foreign borrowing, the establishment of subsidiaries of foreign banks and brokerage houses, foreign direct investment, foreign acquisition of land, and foreign exchange transactions. Similarly, measures related to restructuring the financial sector would liberalize restrictions on the ability of foreign financial institutions to merge with, acquire, or invest in domestic Korean financial institutions and would allow foreigners to become bank managers. According to the IMF, the Korean economy has become much more open to foreign investment since its recent financing arrangements began. Indonesia, among other commitments, has pledged to open more sectors of its economy to foreign investment and to remove restrictions on permitting foreign banks to have branches in Indonesia. Investment liberalization could lead to more U.S. or other foreign direct or portfolio investment. This could increase trade, because trade tends to follow investment. In addition to liberalizing foreign investment, other structural reforms being implemented by Brazil, Indonesia, Korea, and Thailand within the context of their recent IMF financing arrangements could affect their trade. For example, according to the U.S. Treasury Department, under its IMF financing arrangements, Korea has agreed to a fundamental overhaul of its weak and noncompetitive financial system. Korea also has committed to end government-directed lending. Brazil, Indonesia, and Thailand are further privatizing state-owned enterprises. 
If implemented successfully in conjunction with foreign trade and investment liberalization, these structural reforms could have a significant effect on U.S. and other foreign trade and investment in these economies. Finally, to the extent that their IMF programs as a whole lessen the duration and severity of these countries’ economic crises, the prospects for increased foreign trade and investment would improve. The success of these programs depends on many factors, including their macroeconomic and structural policy changes. But success also depends on factors that are in part outside of the borrowers’ and the IMF’s control, such as investor confidence in the four countries’ economies and macroeconomic conditions in other countries. The policies maintained by Brazil, Indonesia, Korea, and Thailand to encourage exports could potentially distort trade and displace production by U.S. producers, even though they may benefit other U.S. companies or consumers. However, the large macroeconomic changes in these countries caused by their recent financial crises greatly complicate predicting and measuring the policies’ impact on the United States because the macroeconomic changes are likely a major reason for recent changes in trade flows. Moreover, overall U.S. imports from these nations grew modestly in 1998, and many sectors registered declines. Imports from Brazil, Indonesia, Korea, and Thailand also grew at a slower pace than overall U.S. imports and than they have in previous years. Nevertheless, in certain sectors such as steel and chemicals, the United States faces substantial and growing import competition from suppliers in one or more of the four countries. Products accounting for about 16 percent of the value of U.S. imports from these four IMF borrowers registered large increases in imports and falling prices over the past year. Mechanisms exist to investigate and remedy situations, such as steel import surges, where U.S.
industry believes rising imports are attributable to foreign government policy and harm its economic interests. Export policies such as subsidies to producers and low-cost financing for exports can harm U.S. companies by displacing U.S. sales in the United States and other world markets. At the same time, they may benefit U.S. consumers and other U.S. industries that use the imported products. Aside from any direct economic impact, U.S. trade law and international trade agreements such as the WTO agreements contain disciplines to limit the use of subsidies and provide remedies for harmful effects of trading partners’ export policies in specified circumstances. In a prior section, we identified export policies maintained by Brazil, Indonesia, Korea, and Thailand. Relatively few of the policies have been major sources of U.S. industry or government concern. But some have been, particularly Korea’s policies in the steel, automotive, shipbuilding, and semiconductor sectors and Brazil’s policies in the steel and automotive sectors. Brazil and Korea were among the top 10 countries cited in U.S. countervailing duty investigations into complaints over unfairly subsidized imports during 1980-97. Brazil was the top country cited, accounting for about 11 percent of all cases filed. However, accurately weighing the recent impact of export policies on U.S. industries is difficult. First, as has been seen, the United States can expect to face deteriorating trade balances and heightened competition from key IMF borrowers because of their financial crises and the accompanying sharp currency devaluations and shrinking demand in these markets. The strong performance of the U.S. economy relative to that of other nations also draws in imports. For now, U.S. output is rising, inflation is low, and unemployment is at its lowest level in 30 years. These trends provide a favorable backdrop for absorbing rising imports. Also, U.S. 
imports from Brazil, Indonesia, Korea, and Thailand rose at a slower pace than overall U.S. imports in 1998, and, for Brazil and Indonesia, rose by less in 1998 than they had in previous years. Indeed, substantial contractions were recorded in U.S. imports from each of the four countries in many sectors. Another factor that makes it difficult to determine the impact of export policies is that such an investigation requires considerable legal, economic, and industry information. Some of this information is readily available, but much of it must be estimated or specially collected and analyzed on a case-by-case basis. For example, the U.S. government agencies responsible for administering U.S. trade law, including the Commerce Department and the ITC, conduct in-depth investigations regarding specific allegations of improper subsidies and injurious effects on domestic industries. Still, as a general rule, the larger the distortion and the greater the trade affected, the more likely the policy could harm the U.S. industry. Brazil, Indonesia, Korea, and Thailand are leading world exporters. The U.S. market receives a substantial portion of their export shipments. Based on IMF data, the four nations account for 35 percent of the total world exports of current IMF borrowers, with Korea alone accounting for 16 percent of total exports from IMF borrowers. Recent WTO data reveal that the four countries ranked among the world’s leading exporters in 1998 and that Korea was the world’s 7th largest exporter, while Thailand, Brazil, and Indonesia ranked 15th, 16th, and 17th, respectively. Collectively, the four sold $287 billion abroad in 1998, which is more than Canada, but less than the United States and Japan. Figure 3 shows 1998 exports of Brazil, Canada, Indonesia, Japan, Korea, Thailand, and the United States. The United States is an important market for these four countries, but its importance as a buyer did not increase substantially relative to other nations in 1998.
In 1998, the United States accounted for an estimated 19 percent of Brazil’s exports, 18 percent of Indonesia’s exports, and 16 percent of Korea’s exports, according to the U.S. Department of State. All of these shares were similar to those recorded in 1997 and 1996. (Some 20 percent of Thailand’s exports were shipped to the United States in 1997, the latest year for which data are available.) In 1998, the four countries together accounted for about 7 percent of both U.S. exports and imports, according to Commerce statistics. Industry analysts report that U.S. suppliers face head-on competition from all four countries in such sectors as steel and chemicals; automobiles (Korea); orange juice (Brazil); wood and paper products (Indonesia and Brazil); and poultry and pork (Thailand and Korea). However, in many product sectors, these nations compete more with each other and other nations than with U.S. suppliers. For example, Brazil competes with China, Italy, Spain, Indonesia, and Korea in footwear. Thailand competes with Mexico and the Philippines in the supply of electric wire and cables. Korea and Japan compete with U.S. producers in the United States and with each other in Asian markets for semiconductor memory devices. In other industries, such as many chemicals from Indonesia and semiconductors from Thailand, the imports are raw materials or intermediate products used in final U.S. production of higher value-added goods. The executive branch has implemented programs to detect and deter potentially harmful effects of export subsidies by these four nations (as well as certain others). These programs were developed by the Commerce Department to respond to concerns by U.S. industries. 
The industry concerns were twofold: that nations could use subsidies to export their way out of their financial crisis and that the IMF stabilization programs could allow these countries to resume financial practices that had previously benefited strategic industries to the possible detriment of U.S. firms and workers. Commerce’s special efforts involve (1) tracking existing and prospective policies (export or production-related subsidies) by key nations; and (2) monitoring U.S. imports in selected sectors—including steel, semiconductors, autos, paper, and chemicals—that are vulnerable to import penetration and that have faced unfair trade practices in the past. Commerce staff report that they identify import surges by examining the value, quantity, and price of imports; the share of the U.S. market that has been captured by imports (import penetration); and the level of industry concern. The result is an early warning mechanism to flag potential problems for further analysis and action, if appropriate. To shed light on whether the export policies of Brazil, Indonesia, Korea, and Thailand could pose a potential threat to U.S. producers, we supplemented the information on export policies presented in a prior section with an analysis of imports from the four IMF borrowers that showed large increases in U.S. imports in 1998. Textiles, apparel, and steel were the product categories that experienced the largest increases in imports from these countries. Other important categories were certain primary or processed agricultural and fishery products, chemicals, rubber products, wood and paper products, and electric and nonelectric machinery. The results of the multistage analysis revealed that products accounting for $9.4 billion, or 16 percent, of U.S. imports from Brazil, Indonesia, Korea, and Thailand both increased substantially and registered price declines in 1998. 
Table 4 shows the 62 product categories that met all of our criteria and, for each product, the percentage increase in imports from the four countries. (An additional 300 items at a more disaggregated level also met our criteria and showed substantial import increases and price declines; these items accounted for $5.3 billion in imports from the four IMF borrowers.) For example, imports of radio transmission apparatus from Korea rose by nearly 90 percent to reach a value of $788.4 million, while imports of one category of flat-rolled steel from Korea rose by 36 percent, to $355.8 million. Paper and paperboard imports from Indonesia were up by 284 percent, amounting to $40.8 million. Though we did not separately collect production statistics for these items, our examination of analyses prepared by outside industry experts suggests that the United States produces most of these fast-rising import items, although notable exceptions include certain primary products (for example, rubber) and certain machinery and consumer electronic goods. We then assessed whether U.S. industries that compete with the surging imports are particularly vulnerable to import competition. For example, we examined the tariff treatment of different import categories, including under the U.S. Generalized System of Preference (GSP) program. Under the program, certain imported products are not eligible for duty-free treatment because they are import sensitive. Most textiles and apparel, leather goods, and glass have been deemed import sensitive by statute. For other product sectors in which imports are surging, we examined industry reports and discussed the factors contributing to the increases and potential vulnerability of the U.S. industry with staff at the Commerce Department and the ITC. 
According to these industry sources, some of the import surges we identified are in industries where foreign unfair trade practices do not appear to be an issue, while other import surges are in industries where allegations of foreign unfair trade practices already exist, and still other import surges have a more tenuous relationship to policy or adverse impact. In some cases, the industry sources we consulted cited factors other than “unfair imports” as the primary cause of surging imports: Market factors, such as a slight increase in U.S. coffee consumption and the need for more natural rubber for the larger tires being used in U.S. motor vehicles, appear to be the primary factors in increased U.S. imports. In the fishery sector, rising imports of shrimp from Indonesia and Thailand appear to be tied to the strong U.S. economy; virtually all shrimp imported into the United States is destined for restaurant consumption, which has risen with U.S. incomes. Other increases are explained by resource endowments; for example, the United States is consuming more natural dyes and fragrances that are only available from nations with rain forest conditions, such as Brazil. Industry reports also suggest that a variety of factors are at play in many sectors that heighten competitive pressures on U.S. firms, including the ongoing globalization of production, the emergence of new competitors in Asia and elsewhere, and the price pressures that ensue from falling demand and excess capacity (some of which preceded the crisis). Some industries are calling for forceful action and strong enforcement of U.S. trade laws. Other industries, such as chemicals and forest products, say the most helpful U.S. government response would be pursuit of lowered trade barriers in these countries to provide new opportunities to U.S. exporters. Some investigations into complaints over harm from export policies by Brazil, Indonesia, Korea, and Thailand are currently underway under U.S. trade statutes.
In addition, export policies of Brazil and Indonesia have been subject to dispute settlement procedures in the WTO. Steel is the sector with the largest number of cases pending under U.S. trade law. Overall U.S. imports of steel were up by 9 million metric tons in 1998, and imports captured 30 percent of the U.S. market, up from 24 percent in 1997. Various cases involve Brazilian, Indonesian, and Korean suppliers, as well as suppliers in Russia and Japan. Korea’s POSCO is the world’s second largest steel firm, and Brazil is among the top five U.S. import suppliers of steel. On January 7, 1999, the President outlined a seven-point action plan for responding to the rise in steel imports. Various plastic and rubber goods and textiles are also under investigation. Semiconductors and other microelectronic products have been subject to dumping and intellectual property right infringement in the past; the executive branch continues to monitor imports, and Korea is among the top five U.S. import suppliers of microelectronics (including semiconductors). In addition, Brazil’s aircraft subsidies were recently found to be inconsistent with WTO rules. The United States is a major consumer, not a producer, of these regional jets but has had long-standing concerns over Brazil’s export financing program, which applies to other sectors. In some recent countervailing duty cases, the U.S. Commerce Department determined the magnitude of the subsidies provided to be fairly small. Within the past 9 months, Commerce has found subsidies to Indonesian producers of rubber thread to be less than 3 percent of the thread’s value, and countervailable subsidies of 6.62-9.45 percent for Brazilian hot-rolled steel. Subsidies for Korean stainless steel and strip were somewhat larger, up to 29 percent. In certain cases, the ITC has determined that imports were not causing injury to U.S. industry.
In April 1999, for example, the ITC made a negative injury determination regarding synthetic rubber from Korea, Brazil, and Mexico, and in May 1999 the ITC made a negative injury determination in a case involving stainless steel round wire from Korea and other countries. The ITC is conducting fact-finding investigations of imports of forest products at the Congress’ request. ITC analysts suggest that U.S. suppliers face competition from hardwood plywood, and printing and writing paper from Indonesia; our data show paper imports are rising rapidly and prices are down. Commerce analysts report that the forest product industry employs more workers than the steel industry and some mills in the Northwest have recently closed in the face of weak demand and falling prices. Industry has reportedly expressed concern that rising imports from Indonesia may be due to unfair trade practices but has yet to file a formal case. Pulp imports from Brazil are also up but are reportedly from the Brazilian production facilities of U.S. firms. Textiles and apparel imports are increasing sharply, even though U.S. limits on the quantity imported (quotas) are in place. A few instances of investigations into “unfair trade” in textiles have occurred, including textile products from Thailand and a recently filed petition alleging dumping of polyester staple fiber from Korea. However, Commerce analysts report that in general the surges that occurred in the past 2 years appear to be caused by market forces and exacerbated by the financial crises that began in mid-1997, rather than government policies. Brazil, Indonesia, Korea, and Thailand are all WTO members and have bilateral quota agreements with the United States that establish comprehensive limits on virtually all categories of their textile and apparel exports to the United States. 
While these limits apparently had considerable room for growth, imports from Indonesia have fallen sharply in recent months as shipments approached the upper limits associated with such quotas. Sugar from Brazil and imports of rice from Thailand are among the agricultural and fishery products with rising imports and falling prices. Governmental policies exist in these two sectors but do not appear to be major factors in the rise. (In Brazil’s case, other factors are at work, and in Thailand’s case, the program involves government-to-government sales, which do not occur for the United States). However, the United States has identified Thailand’s subsidies on some government-to-government sales of rice in its annual inventory of foreign trade barriers. Orange juice imports from Brazil also rose considerably in 1998, but much of the rise appeared to be due to weather, which contributed to a bumper crop in Brazil and a poor crop in Florida, where 90 percent of U.S. orange juice is produced. Chemical imports are causing price pressures on U.S. producers in the United States and other country markets. The 70-year record of U.S. surpluses in the chemicals trade was unbroken in 1998 but fell by nearly a third from 1997 levels, largely as a result of lower U.S. exports to Asia and other developing regions and higher U.S. imports from the EU. Industry analysts attribute most of the worsening to collapsing demand in Asia, which depressed U.S. and EU sales there. (U.S. exports of chemicals to Asia fell by more than 15 percent from 1997 to 1998.) However, capacity expansions that reflect both ongoing globalization of production activity by U.S. and other firms and government policies in such nations as Korea and Thailand preceded the onset of the crisis. 
For example, the chemical industry is the leading manufacturing sector recipient of loans from the Korean Development Bank, and Korea’s production capacity in the chemical industry rose by more than 27 percent between 1995 and mid-1998. Even so, Korea supplied just 1.3 percent of total U.S. imports of chemicals in 1998. In autos, U.S. firms face rising competition from Korean auto exports. The 25 percent plunge in domestic demand in Korea in 1998 halved domestic shipments. Production fell by 30 percent, and Korean auto makers were forced to turn increasingly to overseas markets for sales. According to statistics from the Korean Automobile Manufacturers Association, fully 75 percent of Korean cars were exported in 1998, versus 50 percent the year before, and the total number of units exported rose slightly. The U.S. market is Korea’s second largest for car exports, but Commerce officials report that competition with U.S. makers is particularly intense in European markets. Meanwhile, despite Korea’s compliance with a bilateral agreement with the United States on Korean market access for autos, there has been a virtual halt of import purchases in Korea’s shrinking market. Auto parts imports from Brazil are also increasing and could in principle be related to government policies, which require firms that make cars in Brazil to meet minimum export performance and local content levels in order to receive tax and other benefits. However, in accordance with a bilateral agreement with the United States, the Brazilian government policy is due to change by January 1, 2000, and Commerce officials we contacted were unaware of current complaints by U.S. industry. The few products that show substantial import increases appear to be original equipment parts made in Brazil and destined for their U.S. auto manufacturing facilities. Imports of pianos, stringed instruments, and other musical instruments also show large increases.
The ITC recently released a report analyzing factors contributing to rising imports from Asian suppliers. However, the ITC reports that there were no claims that the rising imports were due to export policies of those countries. The situation in the tire and synthetic rubber industries shows how firm structure, customers’ responsiveness to price, and the globalization of sourcing affect industry attitudes toward surging imports. Three of the four companies making tires in the United States are multinational firms that produce and sell tires globally; the three control 65 percent of the world tire market and reportedly have increased production and imports from such countries as Indonesia since mid-1997, when the rupiah (Indonesia’s currency) plummeted. A fourth firm sells all of its production in the larger U.S. retail (consumer) market, where Korean and, to a lesser extent, Brazilian firms compete largely on the basis of price. This firm is concerned about the 60 percent increase in imports of Korean tires. The firm has, however, filed briefs opposing findings of dumping against Brazilian and Korean suppliers of synthetic rubber because it needs such low-cost inputs to remain competitive with tires from Korea, Indonesia, and Brazil. We requested comments on a draft of this report from the Departments of the Treasury, Commerce, and State; the IMF; the Office of the U.S. Trade Representative; and the ITC. The Treasury provided written comments on a draft of this report, which are reprinted in appendix III. The comments characterized the report as balanced and informative. All six organizations also provided technical and clarifying comments, which we incorporated as appropriate. For example, the IMF and USTR asked that we clarify the role that trade liberalization plays in IMF financing arrangements.
At the IMF’s suggestion, we have pointed out that facilitating the balanced growth of international trade is part of the IMF’s core mission as embodied in its Articles of Agreement, and that, according to the IMF, trade liberalization is an integral part of IMF programs and surveillance activities. At USTR’s request, we have noted that, in addition to trade and investment liberalization, other policy measures that Brazil, Indonesia, Korea, and Thailand are taking under their IMF financing arrangements to restore economic stability should also contribute to market opening; for example, Korea has committed to end government-directed lending. We are sending copies of this report to Senator Connie Mack, Chairman, and Senator Charles Robb, Ranking Minority Member, Joint Economic Committee; Senator William Roth, Chairman, and Senator Daniel Moynihan, Ranking Minority Member, Senate Committee on Finance; Senator Phil Gramm, Chairman, and Senator Paul Sarbanes, Ranking Minority Member, Senate Committee on Banking, Housing, and Urban Affairs; and Representative Benjamin Gilman, Chairman, and Representative Sam Gejdenson, Ranking Minority Member, House Committee on International Relations. We are also sending copies of this report to the Honorable Robert Rubin, the Secretary of the Treasury; the Honorable Madeleine Albright, the Secretary of State; the Honorable William M. Daley, the Secretary of Commerce; the Honorable Charlene Barshefsky, the U.S. Trade Representative; the Honorable Jacob Lew, Director, Office of Management and Budget; the Honorable Alan Greenspan, Chairman of the Federal Reserve; and the Honorable Michel Camdessus, the Managing Director of the IMF. Copies will be made available to others upon request. This report was prepared under the direction of Harold J. Johnson, Associate Director, International Relations and Trade Issues, and Susan S. Westin, Associate Director, Financial Institutions and Markets Issues. Please contact either Mr.
Johnson at (202) 512-4128 or Ms. Westin at (202) 512-8678 if you or your staff have any questions about this report. Other GAO contacts and staff acknowledgements are in appendix V. The U.S. government has focused considerable attention in the last 3 years on eliminating or modifying certain import policies in Brazil, Indonesia, Korea, and Thailand that had restricted U.S. exports to those countries. The United States has had more concerns about Korea’s import policies than about the other three countries in our review. For example, the United States has invoked World Trade Organization (WTO) dispute settlement procedures against Korean policies concerning beef, distilled spirits, airport procurement procedures, and import clearance procedures. In Brazil, the United States was involved as a third party in a WTO dispute over Brazilian policies that allegedly discriminated against automobile imports and that restrict the availability of import financing. In Indonesia, the main U.S. concern has been over protection of intellectual property rights (IPR). In Thailand, U.S. priorities have included high import duties on certain agricultural and food products, high automobile tariffs, inadequate protection of intellectual property rights, and inefficient customs operations. Korea has historically been considered one of the most difficult export markets in the world because of its many market access barriers. Even before its 1997 financial crisis and the establishment of financial arrangements with the International Monetary Fund (IMF), however, Korea had already begun to address some of its trade barriers because of its growing international trade links. 
These links, which implied a stronger reliance on international trade rules and principles, have gradually encouraged a more active role for Korea in international trading organizations that require greater market openness and trade liberalization among their members, particularly the WTO and the Organization for Economic Cooperation and Development (OECD), which Korea joined in 1996. The United States has identified a wide range and number of barriers that impede the import of U.S. goods and services into Korea. Within the last 3 years, U.S. government agencies have been particularly active in reporting on and trying to address Korean import barriers related to the following practices: Pharmaceuticals: Korea’s treatment of foreign, research-based pharmaceuticals is one of the top priorities on the U.S. trade agenda with Korea. The Office of the U.S. Trade Representative (USTR) named pharmaceuticals trade issues as a bilateral trade expansion priority in a 1999 report to Congress. Under its national health insurance system, Korea does not give national treatment to imported drugs in terms of listing and pricing on the system’s reimbursement schedule. The current system discourages medical providers from dispensing imported drugs by allowing them a higher profit margin from reimbursement for domestic drugs and by requiring additional administrative procedures for reimbursement from imported drugs. According to USTR, U.S. pharmaceutical producers also face nonscience-based requirements for clinical testing, inadequate and ineffective protection of test data against unfair commercial use, and lack of coordination between Korean health and IPR authorities that allows patent infringement. In response to high- level bilateral consultations and correspondence, the Korean government has indicated that it is taking steps to address some of the U.S. government’s and industry’s concerns. According to a U.S. 
Commerce Department official, Korea has also agreed to reimburse medical providers for imported drugs in the near future. The executive branch is continuing to work with the Korean government to address concerns related to trade in pharmaceuticals. Beef Market Access: Korea restricts the quantity, distribution, and display of imported beef through a variety of measures, including requirements that imported beef be sold in separate retail establishments and be imported by certain designated entities. Since 1990, the U.S. government has negotiated several agreements with Korea that provide for annually increasing market access levels for beef imports; guarantee direct commercial relations between foreign suppliers and Korean retailers and distributors; and ensure that increasing volumes of beef would be sold through commercial channels instead of through a quasi-government agency. Korea has also pledged to remove all nontariff barriers on beef by 2001. In 1997 and 1998, however, Korea did not meet its quota commitments on the importation of foreign beef. In February 1999, after failing to reach agreement with Korea on reforming its beef importation practices, the United States initiated WTO dispute settlement procedures alleging that Korean regulations discriminate against and constrain opportunities for the sale of imported beef in Korea. The United States also alleged that Korea imposes sale markups on imported beef, limits import authority to certain groups, and provides domestic support to the Korean cattle industry in amounts that cause Korea to exceed its aggregate measure of support as reflected in Korea’s WTO tariff reduction schedule. A panel to consider the matter was established in May 1999. Australia also initiated WTO dispute resolution procedures against Korean beef practices on April 13, 1999. 
Airport Procurement Procedures: Foreign companies had traditionally been limited in their opportunities to bid on government procurement contracts until Korea became a signatory to the WTO Government Procurement Agreement (GPA). During negotiations over Korea’s accession to this agreement, the U.S. government reportedly received a commitment from Korea that entities responsible for airport construction would be subject to GPA disciplines. However, soon after negotiations were concluded, Korea created another entity--the Korea Airport Construction Authority--to manage procurement for the new Inchon international airport, one of the largest public works projects in Asia. The Korean government has subsequently changed the construction authority to the Inchon International Airport Corporation. Korea now asserts that, because neither the airport construction authority nor the airport corporation is expressly listed as a covered entity in its GPA schedule of concessions, procurement for the Inchon international airport is not covered by the GPA. USTR reports that U.S. firms have repeatedly faced discriminatory tendering practices that hamper their ability to compete effectively for related procurement practices in the airport project. In February 1999, the United States requested consultations with Korea under WTO dispute settlement procedures. In May, the United States requested the establishment of a WTO dispute settlement panel on Korea’s procurement practices after WTO consultations held on March 17 failed to resolve the issue. Anti-import Activities: Over the years, the U.S. government has reported that frugality campaigns by Korean civic groups and media organizations have encouraged Koreans to avoid imported products and services and that the campaigns may have involved some Korean government support. In addition, the U.S. government has identified some Korean government practices that have specifically targeted imports.
For example, in the past, the Korean government selected Korean lessors of imported automobiles for tax audits. Since the spring of 1997, the Korean government has publicly announced that it does not support anti-import activities and has promulgated guidelines to its officials on ensuring nondiscrimination against imports. In addition, the Korean president has urged Koreans to base their purchasing decisions on price and quality, rather than on the country of origin of the goods, and a 1998 U.S.-Korean auto memorandum of understanding states that the Korean government will effectively and expeditiously address all instances of anti-import activity associated with motor vehicles. The U.S. government, however, continues to watch for reports of anti-import activity, and raises instances of such activity with the Korean government. Motor Vehicles: As a result of market access barriers in the automotive sector, foreign automobiles comprised less than 1 percent of the Korean motor vehicle market in 1998, compared to about 6 percent in Japan, over 25 percent in the European Union (EU), and about 30 percent in the United States. In an October 1997 report to the Congress, the United States identified Korean barriers to motor vehicles as a priority foreign country practice, the elimination of which is likely to have the most significant potential to increase U.S. exports. Although the United States and Korea had already signed a memorandum of understanding on improving market access for foreign motor vehicles in September 1995, the United States had subsequently failed to reach agreement with Korea over remaining market access concerns. The concerns involved tariff and tax disincentives on imports, onerous and costly auto standards and certification procedures, automobile financing restrictions, and a pervasive anti-import climate for imported vehicles. After a U.S. 
Section 301 investigation and bilateral negotiations over these concerns, the United States and Korea concluded a memorandum of understanding in October 1998 to improve market access for foreign motor vehicles in Korea. Under the agreement, Korea agreed to broaden coverage of the 1995 memorandum of understanding to include minivans and sport utility vehicles; streamline Korean standards and certification procedures and adopt self-certification procedures by 2002; lower and/or eliminate taxes on automobiles; bind Korean tariffs on vehicles in the WTO at 8 percent (formerly, Korea’s tariff was 80 percent); introduce secured automobile financing; and implement a program to improve public perceptions of foreign automobiles. The executive branch is monitoring Korea’s compliance with the agreement. Distilled Spirits Taxes: Korea applies lower taxes to its domestically produced distilled spirit, called “soju,” than to imported alcoholic beverages. As a result of various Korean taxes and tariffs on foreign distilled spirits, the tax burden on imported liquor is higher than that on soju. In fact, according to the U.S. government, the tax burden on U.S. whiskey in Korea is more than four times greater than that on soju. In 1997, the United States and the EU brought the matter to the WTO, arguing that Korea levied discriminatory taxes against imported distilled spirits. Both the WTO dispute settlement panel in July 1998 and the WTO appellate body in January 1999 ruled in favor of the United States and the EU in the case. In March 1999, Korea informed the WTO that it was considering options for implementing the WTO’s recommendations. In April 1999, the United States and Korea requested that the period of time for Korea to implement these recommendations be determined by arbitration. Korea requested 15 months, which the United States and the EU opposed. The arbitrator subsequently determined that Korea had 11.5 months to comply with its WTO commitments in this case. 
Movie Screen Quotas: By requiring Korean movie theaters to show domestic Korean films at least 106 to 146 days each year, Korea in effect imposes a quota on foreign films, thereby deterring trade in films as well as cinema construction and the expansion of theatrical distribution in Korea. The U.S. government has repeatedly raised this issue with the Korean government, including during a March 1999 trade mission to Korea. Currently, this issue is under discussion in negotiations over a bilateral investment treaty. Intellectual Property Rights: IPR-related concerns in Korea have involved limited retroactive copyright protection; incomplete trademark laws; inconsistent interpretation and implementation of patent laws; software piracy; production and export of counterfeit goods; and deficient laws on countering unfair competition and protecting trade secrets. Although Korea remained on the U.S. government’s Special 301 “watch list” in 1997, 1998, and 1999, the U.S. government acknowledges that Korea has made significant efforts to strengthen its IPR laws and enforcement. For example, pursuant to its obligations under the WTO Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS), Korea passed four acts on patents, utility models, designs, and trademarks in 1995 and implemented new copyright, computer software, and customs laws in 1996. In March 1998, Korea’s revised trademark law became effective and a new patent court was established. Nevertheless, in negotiations over a bilateral investment treaty, the U.S. government has asked Korea to resolve some remaining inconsistencies involving its TRIPS obligations. For example, according to USTR, Korea still does not provide full retroactive protection to existing copyrighted works. Similarly, Korea’s trademark law still does not protect some famous U.S. cartoon characters because they have not been registered with Korean authorities. Also, the U.S. 
government has raised concerns about Korea’s failure to provide TRIPS-consistent data protection and about the lack of full coordination between Korea’s IPR and health authorities to preclude patent infringements. Telecommunications: U.S. equipment and services companies have traditionally encountered a range of market barriers in the Korean telecommunications sector. The United States first cited Korea in 1989 as a priority foreign country for trade barriers in the telecommunications field involving discriminatory procurement practices, “buy local” policies, lack of transparency (openness), and inadequate trade secret protection. Despite a 1992 bilateral agreement and a 1993 exchange of letters addressing Korea’s telecommunications trade barriers, in July 1996 the United States designated Korea as a “priority foreign country” under Section 1374 of the Omnibus Trade and Competitiveness Act of 1988. Subsequent bilateral negotiations resulted in a July 1997 agreement in which Korea agreed to implement a range of policies to address remaining U.S. concerns and enhance U.S. market access. These policies included national treatment for foreign companies; government nonintervention in private sector procurement; increased transparency in criteria and procedures relating to services licensing, equipment certification, and type approval; increased foreign ownership in domestic service providers; enhanced protection of intellectual property and proprietary information; clear guidelines for technology transfer; transparent procedures for satellite services authorization; procompetitive regulatory measures; and an enhanced independent regulatory role for the Korean Communication Commission. Korea also agreed to eliminate tariffs on information technology products and to increase limits on foreign ownership of domestic telecommunications services companies. As a result of the agreement, the United States revoked Korea’s priority foreign country designation as of August 1997. 
The United States is continuing to monitor Korea’s implementation of the agreement, as well as U.S. industry concerns over possible Korean government involvement in promoting the consolidation of private cellular telecommunications operators and wireline companies under current conglomerate restructuring plans. Financial Services: Korea has traditionally restricted foreign participation and involvement in its insurance, banking, and securities sectors. However, Korea has been liberalizing many of these restrictions in recent years, particularly in the context of its WTO, OECD, and IMF commitments. According to the U.S. Treasury Department, under its IMF financing arrangements, Korea has agreed to a fundamental overhaul of its weak and noncompetitive financial system. The prudential regulatory framework is being strengthened and restructured, and banks and other financial institutions are now expected to operate in a more transparent and financially sound manner. Additionally, Korea committed to the IMF to make its OECD commitments on financial services liberalization part of its WTO commitments, which would make them subject to the WTO’s binding dispute settlement mechanism. For the insurance industry, Korea included expanded market access and national treatment of foreign insurers in its WTO schedule of liberalization measures as part of the 1997 WTO financial services agreement. Similarly, in consultation with the IMF and the World Bank, Korea is implementing considerable structural reform in its banking sector to ensure that it operates on a fully commercial basis. 
The Korean government has also committed to the IMF to refrain from interfering in bank lending or management decisions, to open its capital markets significantly to foreign participation, to permit foreign financial institutions to participate in mergers and acquisitions of Korean financial institutions, to allow foreign banks to establish subsidiaries or branches in Korea, and to liberalize foreign exchange controls. Under its IMF financial arrangements, Korea is also implementing considerable liberalization of its securities market by removing or lifting ceilings on foreign investment in Korean stocks, bonds, or commercial paper. Import Clearance Procedures: The U.S. government reports that Korea’s import clearance procedures often delay entry of U.S. imports into Korea. For example, certain sanitary and phytosanitary barriers frequently delay some U.S. agricultural and food exports from entering Korea for 2 to 4 weeks, and sometimes up to 2 months, except for perishable fruits and vegetables, which take a maximum of 5 days. Problems with import clearance procedures involve Korea’s ingredient listing requirements, sanitary and phytosanitary rules, standards and conformity assessment procedures, and arbitrary actions by Korean inspectors. Korea has addressed some of these issues in response to U.S.-initiated WTO dispute settlement procedures. Specifically, Korea agreed to expedite clearance procedures for fresh fruits and vegetables, to use the concept of scientific risk assessment in developing a quarantine pest list and setting fumigation requirements, to revise some of its food additive standards to bring them closer to international standards, and to eliminate sorting requirements and the requirement to list percentages for all ingredients. Under its IMF financial arrangements, Korea also presented a plan in August 1998 to streamline various import certification procedures and bring them in line with international practices. Cosmetics: The U.S. 
government has identified several impediments to the entry and distribution of foreign cosmetic products in Korea. These include requirements that the Korean Food and Drug Administration separately approve imports of the same cosmetic product when it comes from different countries of origin, the Korean government’s delegation of authority to the domestic industry association to screen advertising and information brochures, the mandatory provision of proprietary information on imported cosmetics to Korean competitors, redundant testing, restrictions on sales promotions involving gifts with purchases, and burdensome import authorization and tracking requirements. The executive branch cited Korea’s cosmetics-related trade barriers as a bilateral priority in a 1997 report to the Congress because the Korean government had not fully addressed U.S. concerns despite consultations between the two governments. In January 1998, the Korean Food and Drug Administration abolished the annual testing requirement for imported cosmetics and authorized importers to perform the required self-testing. Nevertheless, significant delays remain in final government approval for the local sale of products developed outside of Korea, and cosmetics are still subject to the same rigorous and time-consuming approval process as pharmaceuticals and nutritional supplements. The U.S. government is working in conjunction with the EU to address cosmetics trade issues with the Korean government. According to the Brazilian government, trade liberalization is a key element in its efforts to consolidate the country’s economic stabilization process. Brazil’s economic liberalization—initiated in 1990 and accelerated with the Real Plan in 1994—has resulted in a more open trade regime with generally lower tariffs and reduced nontariff barriers. 
Alongside its liberalization efforts, Brazil has pursued further economic integration through MERCOSUL (South America’s common market) and negotiations to establish the Free Trade Area of the Americas. The 5-year-old Real Plan, introduced after nearly a decade of economic stagnation and periods of hyperinflation, was the key element underpinning Brazil’s efforts to stabilize its economy. Access to Brazilian markets in a significant number of sectors is characterized as generally good—with competition and participation by foreign firms through imports, local production, and joint ventures. However, some key liberalization measures introduced by the government of Brazil since 1995 have not been fully implemented—including some measures to eliminate government monopolies and to remove the distinction between foreign and national investors. In addition, the Brazilian government implemented temporary restrictive measures during 1996-98 to slow its growing trade deficits. Since 1990, Brazil has relied primarily on tariffs to regulate imports, rather than on nontariff barriers. Although Brazil’s average import tariff increased from about 12 percent in 1996 to about 15 percent in 1998, it remained significantly below the 1990 level of 32 percent. Within the last 3 years, U.S. government agencies have been particularly active in reporting on and addressing trade barriers related to Brazilian protection of IPR, import financing restrictions, phytosanitary restrictions on wheat, discriminatory automobile policies, and customs valuation and import licensing practices. Intellectual Property Rights: In April 1993, USTR identified Brazil as a priority foreign country under “Special 301” because Brazil failed to provide adequate and effective intellectual property rights protections. Later that year (May 1993), USTR initiated a Section 301 investigation of Brazil’s IPR regime and requested consultations. 
As a result of Brazil’s commitment to improve the protection of intellectual property and provide greater market access for intellectual property products, USTR terminated its investigation in early 1994 and removed Brazil’s designation as a priority foreign country. However, because of Brazil’s lack of progress in implementing changes to its IPR regime, Brazil was placed on the priority watch list in April 1995. Subsequent improvements in IPR protection resulted in Brazil being first moved down to the watch list in 1996 and eventually being removed from the list entirely in 1997, when a series of IPR laws was promulgated. While the new laws represent progress in Brazil’s IPR regime, deficiencies in the TRIPS-consistency and enforcement of some of these laws resulted in Brazil being placed back on the watch list in 1999. Specifically, USTR has identified problems with Brazil’s Industrial Property Law, which includes a domestic working requirement for patents that is not consistent with TRIPS. In addition, USTR reported that Brazil’s uneven enforcement of copyright laws is a serious and growing concern. Deficiencies in the Brazilian government’s efforts to improve copyright enforcement have contributed to increasing piracy rates. Problems were particularly acute with respect to sound recordings and videocassettes—with virtually all audiocassettes sold in 1998 being pirated copies. Overall, the sound recording industry saw its piracy losses double in 1998. The U.S. government contends that the Brazilian government’s efforts to patrol its border and ports have been inconsistent (a significant amount of the pirated material enters Brazil through Paraguay) and that the Brazilian government has not provided police the tools or training to enforce the laws. Furthermore, proposed legal changes that could reduce criminal penalties for intellectual property crimes and remove police authority to initiate some searches and seizures have become a particular concern for the U.S. 
government. According to USTR, Brazil’s generally inefficient courts and judicial system have complicated the enforcement of intellectual property rights. The U.S. executive branch believes that Brazil should increase fines so as to create a true deterrent to copyright infringement, increase the effectiveness of the criminal enforcement system, and decrease delays in the judicial process. Import Financing Restrictions: In April 1997, Brazil imposed requirements that effectively prohibited import financing of less than 180 days on purchases from non-MERCOSUL countries and raised costs for any import financing of less than 1 year. Specifically, Brazil required importers to purchase foreign exchange for financing purposes at least 180 days in advance of the due date for short-term supplier credit (that is, credit of less than 360 days in duration). Brazil also prevented export credit agencies such as the U.S. Export-Import Bank from offering short-term credits for certain categories of purchases (for example, raw materials, spare parts, and others). According to a Commerce Department official, these restrictions were implemented as a reaction to Brazil’s burgeoning trade deficit and to combat currency speculation. It is estimated that these measures added 3 to 5 percent to the cost of affected imports. The U.S. government raised its concerns bilaterally with the Brazilian government regarding the WTO-consistency of this policy and joined as a third-party observer in the March 1998 WTO dispute settlement consultation between Brazil and the EU. The EU had requested consultations with Brazil in January 1998. Although WTO consultations are still pending, Brazil eliminated its import finance restrictions in March 1999 for most practical purposes, according to the Commerce Department. Phytosanitary Restrictions on Wheat: The access of U.S. wheat to the Brazilian market was cut off in September 1996, when the government of Brazil effectively banned U.S. 
wheat imports due to concerns about the wheat fungus Tilletia controversa Kuhn. Prior to 1996, U.S. growers exported about 750,000 tons of wheat to Brazil—a leading importer of wheat. However, the United States and Brazil reached agreements on U.S. hard red winter wheat after Brazil eliminated its phytosanitary restrictions on this type of wheat in April 1998. Brazil’s decision was based on strong scientific evidence presented in a pest risk assessment. Although Brazil’s government published an executive order to allow entry of U.S. hard red winter wheat into Brazil in November 1998, the United States has not made any wheat sales to Brazil since the executive order was signed. The United States continues to work bilaterally with Brazil to resolve outstanding issues that restrict market access for other types of wheat as well as other U.S. exports such as poultry. Automobile Program: In December 1995, Brazil enacted an auto program that offers automobile manufacturers reduced import duties on automobiles and automobile parts, as well as other benefits, if they export certain quantities of parts and vehicles and meet local content targets in their Brazilian plants. This program adversely affects U.S. exports of autos and auto parts to Brazil by distorting investment, sourcing, and production decisions. The United States also believes that the program violates the WTO’s provisions on trade-related investment measures. As a result, the United States requested WTO dispute settlement consultations with Brazil on these measures in August 1996. In October 1996, USTR initiated a Section 301 investigation of Brazil’s practices. In January 1997, USTR requested additional consultations with Brazil in the WTO, focusing specifically on new aspects of Brazil’s auto regime that were introduced following the earlier consultations. These included tariff rate quotas for Korea, Japan, and the EU, and incentives to establish production facilities in specific regions of Brazil. 
The United States and Brazil signed an agreement settling the dispute in March 1998, and USTR terminated its investigation. In this regard, Brazil committed to eliminate the trade- and investment-distorting measures in its auto regime by December 31, 1999, and agreed not to extend the trade-related investment measures to MERCOSUL partners when they unify their auto regimes in the year 2000. Currently, USTR is monitoring Brazil’s implementation of the March 1998 agreement, and the U.S. government is also monitoring Brazil’s negotiations with its MERCOSUL partners to establish a new auto regime. Customs Valuation and Import Licensing: In January 1997, Brazil’s Secretariat of Foreign Trade implemented a computerized trade documentation system to handle import licensing. According to USTR, as of January 1, 1999, the system charged a fee of Real$30 per import statement and Real$10 per product added to the statement. An increasing number of products are exempt from automatic licensing. In addition, beginning in October 1998, Brazil issued a series of administrative measures that required additional sanitary and phytosanitary, quality, and safety approvals from various Brazilian government entities for products subject to nonautomatic licenses. The October measures and the use of minimum price lists in conjunction with licensing have been characterized by Brazil as a deepening of its existing import licensing regime and as part of a larger strategy to prevent under-invoicing. However, according to USTR, the use of minimum price lists raises questions about whether Brazil’s regime is consistent with its obligations under the WTO, and these practices have proven to be a barrier to U.S. exports. According to U.S. government and WTO sources, in recent years Indonesia has liberalized its foreign trade and investment systems and has taken a number of important steps to reduce protection. 
The Indonesian government has done so by issuing periodic deregulation packages that have incrementally reduced overall tariff levels, simplified the tariff structure, replaced nontariff barriers with more transparent tariffs, and encouraged foreign and domestic private investment. According to USTR, Indonesia’s average unweighted tariff has fallen to 9.5 percent from 20 percent in 1994, and about 160 tariff lines remained subject to restrictive import licenses, down from 1,112 lines in 1990. A November 1998 WTO report on Indonesia’s trading system commended Indonesia for its trade and investment liberalization. However, the report noted that the pace of trade and investment liberalization had slowed during 1994-96. It added that, prior to its financial crisis, Indonesia had made limited progress in removing nontariff import barriers and export restrictions and that liberalization in agriculture and forestry had lagged behind reforms in other sectors. Despite this progress, Indonesia still maintains a number of restrictions on imports and foreign investment, according to the U.S. government and the WTO. In recent years, Indonesian barriers to imports included high tariffs on certain items; quantitative restrictions on some agricultural and other goods; and barriers to service imports, including restrictions on wholesale and retail distribution. Barriers to foreign investment have included restrictions and prohibitions in certain sectors, such as film and video distribution and forest concessions. Since 1996, the most prominent import barrier issue between the United States and Indonesia has concerned Indonesia’s IPR protection. Since April 1996, Indonesia has been on the U.S. government’s priority watch list for inadequate intellectual property protection. The U.S. 
executive branch has cited the following reasons for this designation: (1) trademark infringement, including software, book, video, videocassette disk, drug, and apparel piracy; (2) audiovisual market access barriers; (3) inconsistent enforcement and an ineffective legal system; and (4) amendments to the copyright, patent, and trademark laws that the U.S. government believes are not fully consistent with Indonesia’s obligations under the WTO TRIPS agreement. In June 1998, the U.S. executive branch presented to the Indonesian government a plan for improving IPR protection that could result in Indonesia’s removal from the priority watch list. However, according to USTR, Indonesia has not been able to devote significant resources to improving or enforcing its IPR regime due to its severe economic crisis. Thailand’s average tariff rate in 1998 was about 18 percent. In addition, as one of the 10 members of the Association of Southeast Asian Nations (ASEAN), Thailand has pledged to reach and maintain tariffs on trade with its ASEAN partners of between 0 and 5 percent by 2003. Generally, the Thai government has continued to lower tariff rates pursuant to goals established in 1994. However, USTR and other U.S. government agencies have identified several of Thailand’s trade policies and practices that affect U.S. exports to Thailand, such as weak IPR enforcement. These barriers include the following: Inadequate Protection of Intellectual Property Rights: This is the leading trade issue between the United States and Thailand. In this regard, USTR initiated Section 301 investigations in 1990 and 1991 regarding Thailand’s lack of adequate protections over intellectual property. Both investigations found Thailand’s copyright and patent protections to be unreasonable and burdensome to U.S. commerce. Thailand made significant improvements to its IPR legal regime and enforcement efforts in the 1990s. 
Despite this progress, Thailand has remained on the U.S. Special 301 “watch list” since November 1994 because of long-standing IPR enforcement weaknesses. According to USTR’s 1999 National Trade Estimate report, the U.S. copyright industry estimates that it lost nearly $200 million from intellectual property rights infringements in Thailand. In response to these concerns, the Thai government implemented a series of legal reform initiatives, established a special Intellectual Property and International Trade Court, and concluded an intellectual property enforcement action plan with the United States. However, U.S. government officials maintain that significant enforcement problems remain, piracy rates continue to climb, and monetary penalties or jail sentences are rarely imposed to deter such crimes. In February 1999, a new enforcement strategy was implemented, but at the time of our report no information regarding the success of this effort was available. High Tariffs on Automobiles: In addition to currently applied domestic auto sector protections (local content restrictions), which must be removed by January 1, 2000, pursuant to Thailand’s commitments under the WTO agreement on trade-related investment measures, Thailand imposes very high tariffs on automobiles. While Thailand’s overall average tariff rate is relatively low when compared with those of its ASEAN neighbors, its tariffs on automobiles remain high at 80 percent. However, Thailand’s automobile tariffs have never risen to an actionable level, in part because the tariffs are bound in the WTO and Thailand actually applies lower rates ranging from 42.5 to 68.5 percent. Furthermore, some U.S. car manufacturers assemble automobiles in Thailand, thus avoiding the higher tariffs. These manufacturers, however, pay tariffs of up to 35 percent on automotive parts imports. 
Thailand recently announced its latest plans to bring its national car policy into conformity with its obligations under the WTO agreement on trade-related investment measures, as required by January 1, 2000. The plans are being studied by U.S. government officials. Inefficient Customs Operations: USTR and the State Department report that Thailand’s customs clearance processes are arbitrary, irregular, and inefficient. In 1997, the U.S. chamber of commerce and nine other chambers of commerce, including Japan’s, vigorously and publicly complained about Thailand’s customs procedures. The U.S. government is concerned about excessive paperwork and formalities, lack of coordination between customs and other import-related agencies, and lack of modern computerized processes. However, Thailand has made progress in reforming some areas of its customs operations, such as express shipment handling, payment procedures, and document simplification. The U.S. embassy in Bangkok, the U.S. Customs Service, the IMF, and others have provided the Thai government with technical assistance to improve the customs clearance process. High Duties on Certain Agriculture and Food Products: Specific duties for most agricultural and food products, with the exception of wine and spirits, no longer exist, but import duties on high-value fresh and processed foods remain high at about 60 percent. As a signatory to the WTO, Thailand committed to reduce tariffs and began to do so in 1995. However, by the end of the tariff reduction phase-in period in 2004, duties will still be in the 30 to 40 percent range for most consumer-oriented food products, with the notable exception of apples and raw tree nuts. In addition to high tariffs, time-consuming and cumbersome licensing and registration procedures can delay the entry of new products into the Thai domestic market. 
Investment Restrictions: Thailand’s agreement with the IMF contains a commitment to accelerate privatization of state holdings in the areas of energy, public utilities, telecommunications, and transportation. Progress in this regard has been slow, but the Thai parliament has recently passed significant bankruptcy, foreclosure, and privatization laws that are aimed at expediting the privatization process. This, in turn, is expected to increase opportunities for U.S. investors to gain market access to those service sectors. Under the 1966 Treaty of Amity and Economic Relations, U.S. investors are, with the exception of a few sectors, exempt from restrictions on foreign equity investment in Thailand. However, Thai government restrictions still apply in the communications, transport, and banking sectors; the exploitation of land and natural resources; and the trade of domestic agricultural products. U.S. countervailing duty (CVD) laws and the WTO Agreement on Subsidies and Countervailing Measures provide redress mechanisms against the adverse effects of subsidization. U.S. companies may file CVD petitions directly with the Commerce Department. Commerce and the International Trade Commission (ITC) separately determine whether the subsidies are countervailable and have harmed U.S. industry. To obtain redress through the WTO’s subsidies agreement, a U.S. firm informally brings its concerns to the U.S. government, which investigates the matter and then, if warranted, raises the issue in the appropriate WTO forum. We reviewed the export policies of four current IMF borrowers: Brazil, Indonesia, Korea, and Thailand. Since 1996, the United States has formally invoked the WTO’s dispute settlement procedures over a number of Brazilian, Indonesian, and Korean subsidies and has found subsidies in Brazil, Korea, and Thailand to be countervailable under U.S. trade law. 
For example, the United States invoked dispute settlement procedures against Korean subsidies to its beef industry and a Brazilian subsidy to its auto industry, and determined that both countries were providing countervailable subsidies to their steel industries. Among other actions, the United States invoked WTO dispute settlement procedures against Indonesia’s automotive subsidies and determined a variety of Thai subsidies to be countervailable. Under the Tariff Act of 1930, as amended, U.S. firms that are materially injured by foreign subsidized goods in the U.S. market can obtain relief from certain actionable subsidies by seeking to have countervailing duties levied on the subsidized imported goods. CVD laws are administered jointly by the Department of Commerce and the ITC. An interested party may file a CVD petition with Commerce alleging that a U.S. industry is materially injured, or is threatened with material injury, by reason of imports that are being subsidized by foreign governments. If the petition demonstrates a reasonable indication that a subsidy exists and is causing material injury, Commerce and the ITC conduct separate but parallel investigations. The Commerce Department determines whether the imported product is being subsidized, either directly or indirectly. An actionable subsidy exists when the foreign firm making or exporting the product (1) receives a “financial contribution” from a government or public body, (2) receives a “benefit” from that contribution, and (3) receives a financial contribution that is “specific” (that is, one based upon export performance or limited to a certain industry or group of industries). The ITC determines whether a U.S. industry is materially injured or threatened with material injury, or whether the establishment of an industry in the United States is materially retarded, by reason of imported subsidized products. Material injury is defined as harm that is not inconsequential, immaterial, or unimportant. 
In determining the threat of material injury, the ITC considers whether the subsidy practice indicates the likelihood of substantially increased imports and whether such an increase would result in material injury to U.S. industry. If the Commerce Department finds an actionable subsidy and the ITC finds material injury, Commerce will then issue a CVD order instructing the U.S. Customs Service to collect additional duties on the imported product in an amount equal to the subsidy margin determined by Commerce in its investigation. While U.S. CVD law addresses foreign subsidized imports in the United States, under the WTO's Subsidies and Countervailing Measures Agreement, U.S. industries have a redress mechanism against foreign subsidies that affect U.S. business in markets outside the United States, including the subsidizing government's market. Under the subsidies agreement, a subsidy is defined as a financial contribution that imposes a cost on the government providing it and confers a benefit on certain enterprises. The subsidy must be causing serious prejudice to a U.S. industry. In 1995, the U.S. Commerce Department created the Subsidies Enforcement Office (SEO) to assist U.S. businesses by monitoring foreign subsidies and identifying subsidies that can be remedied under the WTO's subsidies agreement when they adversely affect U.S. business interests. One focus of the SEO's subsidies monitoring program is to ensure compliance with the subsidy-related conditions of the IMF stabilization packages and to uncover subsidy practices that are actionable under the WTO's subsidies agreement. Unlike under U.S. CVD law, a concerned U.S. business does not file a formal petition with the SEO to allege a foreign subsidy in violation of the WTO subsidies agreement. Instead, the SEO receives information concerning foreign subsidy practices through informal contacts with U.S. businesses, trade associations, U.S. embassies, and the SEO's own monitoring efforts.
Once the SEO has evaluated all available information on the particular alleged subsidy, the SEO will confer with USTR on how to proceed. In many cases, an effective way to proceed is through informal channels, bilateral meetings, and discussions in WTO subsidies committee meetings. However, formal enforcement mechanisms are also provided for under the WTO subsidies agreement, including dispute settlement action in the WTO. The WTO Committee on Subsidies and Countervailing Measures also provides regular surveillance. In May 1999, the United States participated in the committee's review of the full notifications that countries were required to submit to the WTO by July 1, 1998. Korea's notification was among those discussed at that review. Over the past 2 years, the United States has posed a series of questions to all four IMF borrower countries in our review regarding their WTO subsidy notifications. In addition, it invited the three Asian borrowers to discuss their IMF financing arrangements at a special meeting held in April 1998. The U.S. government tracks export and domestic policies of various countries for possible subsidization and routinely examines subsidies notified to the WTO for conformity with the subsidies agreement. The SEO also has created a "Subsidies Enforcement Library" that contains such WTO notifications, Federal Register notices associated with past U.S. CVD cases, and other information. Commerce and USTR jointly prepare an annual report to Congress on the WTO subsidies agreement. In addition, USTR, State, and Commerce include export policies in their regular reports on trade barriers. Finally, an interagency task force under the leadership of the U.S. Department of the Treasury is reviewing trade policies of key IMF borrower countries, including export policies. All of these efforts rely heavily on industry to identify and make known potential problems.
In February 1999, the United States requested consultations under the WTO dispute settlement mechanism concerning Korean government support to its beef industry. The United States alleged that Korean regulations discriminated against and constrained opportunities for the sale of imported beef in Korea. The United States also alleged that Korea provided domestic support to its cattle industry in amounts that exceeded its WTO tariff reduction schedule. The United States contended that such support amounted to domestic subsidies that contravened the WTO Agreement on Agriculture. A panel was formed to consider the matter on May 26, 1999. Australia also initiated WTO dispute settlement procedures against Korean beef practices on April 13, 1999. Also, within the last 5 years the Commerce Department has conducted three CVD investigations, all involving potential Korean government subsidies to its steel industry. Commerce launched the first of these investigations in April 1998 to determine whether Korea was providing countervailable subsidies to certain Korean producers and exporters of stainless steel plate in coils. In its final determination in March 1999, Commerce ruled that the subsidy existed but that it was not countervailable due to its small size. Nevertheless, prior to Korea's recent IMF financing arrangements, the Commerce Department found certain other Korean subsidy programs to be countervailable. These subsidies involved government-influenced lending, government infrastructure investments at a port facility used predominantly by a state-owned steel company, short-term export financing, tax reserves for export losses, tax reserves for overseas market development, investment tax credits, and electricity discounts from a government-owned power company.
In July 1998, the Commerce Department began another investigation to determine whether Korea was providing countervailable subsidies to certain Korean producers and exporters of stainless steel sheet and strip in coils. In its final determination in June 1999, Commerce ruled that such countervailable subsidies were being provided. These subsidies involved government-influenced lending; the purchase of one steel company's divisions by another state-owned company; government-supported infrastructure development at a port facility used predominantly by a state-owned steel company; export industry facility loans; short-term export financing; tax reserves for export losses; tax reserves for overseas market development; investment tax credits; utility rate discounts from the government-owned electricity provider; loans from the National Agricultural Cooperative Federation; and a two-tiered pricing structure for domestic customers of one steel company. Finally, in March 1999, the Commerce Department initiated an investigation to determine whether Korea, among other countries, was providing countervailable subsidies to certain manufacturers, producers, or exporters of certain cut-to-length, carbon-quality steel plate.
As part of the investigation that was still ongoing as of April 30, 1999, the Commerce Department was reviewing alleged countervailable subsidies involving a two-tiered pricing structure for domestic customers of one steel company; government-directed credit programs; Korea's Private Capital Investment Act; government-supported infrastructure development at a port facility; certain tax programs and asset revaluation under Korea's Tax Reduction and Exemption Control Act; special cases of Tax for Balanced Development Among Areas; certain industry promotion and research and development subsidies; Overseas Resource Development loan and grant programs; free trade zones; excessive duty drawbacks; port facility fees; preferential utility rates; a scrap reserve fund; export insurance rates by the Korean Export Insurance Corporation; short-term export financing; Korean Export-Import Bank loans; Export Industry Facility Loans and Special Facility Loans; and loans from the Energy Savings Fund. Since 1996, the United States has participated in two WTO dispute settlement proceedings involving Brazilian subsidies. The United States invoked WTO dispute settlement procedures and held consultations with Brazil regarding various aspects of its automotive regime in August 1996, including provisions in its WTO-notified subsidy program for automobiles. In March 1998, the United States and Brazil signed an agreement settling the dispute. (See app. I for more details.) Japan and the European Union have also invoked WTO dispute settlement procedures in response to various aspects of Brazil's automotive regime. These consultations were pending as of April 30, 1999. In a second dispute, in June 1996, Canada requested consultations with Brazil regarding its claim that export subsidies granted by PROEX, a Brazilian government export financing program, to foreign purchasers of Brazil's Embraer aircraft were inconsistent with the WTO's Agreement on Subsidies and Countervailing Measures.
Canada later requested establishment of a WTO dispute settlement panel to review the matter. The United States and the European Union reserved their rights as third parties in the dispute. One of the many U.S. submissions to the panel challenged Brazil’s position that it could provide export subsidies to counter nonexport credit subsidies offered by another WTO member. In April 1999, the dispute settlement panel found that Brazil did not meet the conditions that allow developing nations more time than developed nations to remove prohibited export subsidies, such as PROEX. The panel declared that PROEX’s interest equalization program was a prohibited export subsidy and that it must be withdrawn without delay. In addition to the WTO disputes, the U.S. government has preliminarily determined one Brazilian subsidy to its steel industry to be countervailable. In October 1998, the Commerce Department began investigating whether Brazil was providing countervailable subsidies to manufacturers of certain hot-rolled flat rolled carbon-quality (“hot-rolled”) steel products. In its preliminary decision in February 1999, Commerce ruled that some equity infusions and debt-to-equity conversions provided to several of these manufacturers were countervailable because they were inconsistent with the usual investment practices of private investors. The net subsidy rate for these manufacturers ranged from 6.62 percent to 9.45 percent. The Commerce Department also preliminarily ruled that tax deferrals that were provided to some of the same steel manufacturers were not countervailable because they were not limited to any specific industry. According to USTR, since 1996, Indonesia’s most controversial trade policy has been its efforts to develop an indigenous automotive industry. Two programs were involved. 
One program, which was begun in 1993 and was to be continued until the year 2000, granted import duty relief to certain automotive parts and accessories for use in assembling or manufacturing motor vehicles based on the percentage of local content in the finished vehicles. The other subsidy related to the 1996 establishment of a national car program. Under this program, Indonesian companies designated as "pioneer firms" were permitted to import tariff-free finished automobiles designated as "national cars," and to sell the national cars luxury tax free for 3 years. A single Indonesian company was granted pioneer status, and in 1996 it began importing finished national cars from Korea, where they were produced by a company that was jointly owned by the Indonesian company and a Korean firm. In October 1996, 6 months after Indonesia announced the establishment of its national car program, the United States and the European Union initiated WTO dispute settlement procedures against the program and against the other automotive sector subsidy, the local content tariff exemption. After its financial crisis began and while the WTO dispute settlement procedure was still ongoing, Indonesia committed to the IMF to eliminate the national car program by removing its special tax, customs, and credit privileges; it revoked these privileges in January 1998. Indonesia also pledged to the IMF to phase out the tariff privileges tied to local content levels, although the WTO panel had not yet reached a final decision. In June 1998, the WTO panel issued a final ruling against Indonesia, and Indonesia was given until July 1999 to eliminate the second subsidy. In January 1999, the Indonesian government announced that it would formulate a new national car policy that would conform to its WTO obligations. In addition to the national car program, during 1997-99 the U.S.
government has investigated one other Indonesian export subsidy under U.S. CVD law. In response to a complaint from a U.S. company regarding extruded rubber thread, on March 26, 1999, the Commerce Department found that the Bank of Indonesia's rediscount export financing program was a subsidy because, during 1997 under the program, "special" exporters received financing at a lower rate than was available to other firms. However, the Commerce Department determined that the subsidy provided to the two Indonesian producers of extruded rubber thread products in question was not countervailable because the subsidy amounted to less than 3 percent of the value of the products. Since 1996, the United States has not formally raised concerns about Thai subsidies in the WTO; however, in the past the U.S. government has found a number of Thai subsidies to be countervailable. Some of these programs were found to be countervailable with regard to certain apparel, steel pipe and tubing, ball bearings, and pocket lighters, but no CVD order was issued with respect to pocket lighters because the ITC did not find material injury to the competing U.S. industry. These programs were found to be countervailable: Export packing credits, which are short-term, preshipment export loans provided and recorded on a shipment-by-shipment basis, together with newly approved export packing credit loans totaling $500 million to stimulate export activity in reaction to Thailand's lagging exports, were found to be countervailable. The Commerce Department determined that this program was countervailable in the context of investigations of certain apparel, steel pipe and tubing, and other products. Tax certificates for exporters, which are issued by the Thai government and are transferable, were found to be countervailable; these certificates rebate indirect taxes and import duties levied on inputs used to produce exports.
Tax and duty exemptions that allow exporting companies to import machinery and equipment free of import duties and business and local taxes were found to be countervailable. Income tax exemptions that allow companies to obtain 3- to 8-year exemptions from payment of corporate income tax on net profits, and to carry forward losses incurred during the tax exemption period, were found to be countervailable. Goodwill and royalties tax exemption status, which is granted to promoted businesses for income arising from goodwill, royalties, and other payments for a period of up to 5 years, was found to be countervailable. Tax deductions for dividends that allow promoted businesses receiving tax exemptions to receive an additional deduction from taxable income for dividends received from promoted activities were found to be countervailable. Assistance for trading companies, which the Board of Investment authorized in 1978 to provide certain incentives to eligible trading companies, was found to be countervailable. A duty exemption for raw materials that allows companies to import raw and "essential" materials used in the production, mixing, and assembly of exports free of import duties was found to be countervailable. Permission to maintain foreign currency bank accounts, which allows a Thai company to hold a foreign currency account, was found to be countervailable in the event the account is denominated in U.S. dollars.
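The de minimis arithmetic that recurs in these determinations (for instance, the Indonesian extruded rubber thread subsidy was not countervailable because it amounted to less than 3 percent of product value, while the countervailable Brazilian steel subsidies carried margins of 6.62 to 9.45 percent) can be sketched in a few lines. This is an illustrative sketch only; the function names and the idea of passing the threshold in as a parameter are ours, and actual CVD margin calculations are considerably more involved:

```python
def net_subsidy_rate(subsidy_benefit: float, product_value: float) -> float:
    """Ad valorem subsidy margin: the benefit as a share of product value."""
    return subsidy_benefit / product_value


def cvd_rate(subsidy_benefit: float, product_value: float,
             de_minimis: float) -> float:
    """Countervailing duty rate: equal to the subsidy margin, or zero
    when the margin falls below the applicable de minimis threshold."""
    margin = net_subsidy_rate(subsidy_benefit, product_value)
    return 0.0 if margin < de_minimis else margin


# Hypothetical figures: a benefit worth 2.5% of product value falls below
# a 3% threshold, so no duty would be imposed.
no_duty = cvd_rate(subsidy_benefit=2.5, product_value=100.0, de_minimis=0.03)
```

The threshold itself varies with circumstances (the source notes a 3 percent figure in the Indonesian case), which is why it is treated here as an input rather than a constant.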
This report (1) identifies the extent to which current International Monetary Fund (IMF) borrower countries restrict international trade and the countries whose trade has the greatest potential to affect the United States; (2) describes in detail the reported trade barriers and export policies of four IMF borrowers that are among those with the greatest capacity to affect the United States—Brazil, Indonesia, the Republic of Korea, and Thailand—and recent actions reported to have been taken to reduce those barriers or modify policies; (3) identifies actions, in the context of their current IMF programs, the four countries have taken or are committed to take to liberalize their trading systems; and (4) determines the extent to which the impact of the four countries' export policies on the United States can be predicted and measured and which U.S. industry sectors might be affected by changes in trade from these countries. Except where otherwise noted, we included information as of April 30, 1999. We defined IMF borrower countries as those 98 member countries that had IMF credit and loans outstanding in calendar years 1997 or 1998. These 98 countries have used IMF credit at some point during the past 10 years and still have outstanding obligations. To determine the degree to which current IMF borrower countries restrict international trade, we analyzed several indicators of restrictiveness, including average tariff rates; nontariff barriers; and indexes constructed by the IMF, the Heritage Foundation, and the Fraser Institute. The IMF index is composed of three measures: an index of average tariff rates and other duties on imports, an index of nontariff barriers, and an overall index that rates trade restrictiveness on a 10-point scale that weights nontariff barriers more heavily than tariff barriers. The overall index classifies countries as either "open" (1 to 4), "moderate" (5 to 7), or "restrictive" (8 to 10).
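The overall index's three-way classification can be expressed as a simple rule. The sketch below is ours (the function name and the handling of out-of-range values are assumptions, not part of the IMF's methodology):

```python
def classify_trade_restrictiveness(overall_index: int) -> str:
    """Map the IMF overall index (10-point scale) to the three categories
    described in the text: open (1-4), moderate (5-7), restrictive (8-10)."""
    if not 1 <= overall_index <= 10:
        raise ValueError("index must be on the 1-10 scale")
    if overall_index <= 4:
        return "open"
    if overall_index <= 7:
        return "moderate"
    return "restrictive"
```

Under this rule, a country rated 5, for example, falls in the "moderate" band.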
Although these indicators do not comprehensively measure the wide variety of policies that countries may use to restrict trade, they do reflect important barriers and provide information on the relative restrictiveness of countries among each other and over time. We also collected information on borrowers' tariff levels from other sources. We then compared how the IMF index rated countries to the way the Heritage and Fraser Institute measures did so. We found that the three organizations' measures rated countries similarly and that the tariff rates used by the three indexes were similar to the tariff rate data we collected independently. Finally, we supplemented this information with information from USTR and the WTO on membership in the WTO, the existence of other multilateral and bilateral trade agreements with the United States, formal market access disputes filed, and types of barriers identified in USTR's annual National Trade Estimate Report on Foreign Trade Barriers. We selected four of the eight current IMF borrowers for more detailed study—Brazil, Indonesia, Korea, and Thailand. We selected these four countries because, in addition to being important U.S. trading partners, they are among the 10 top current borrowers and currently have IMF financing arrangements. Mexico is the largest U.S. trading partner among current IMF debtors and the fourth largest current IMF debtor. We did not select Mexico for our study, however, because Mexico is not currently in an IMF financing arrangement and thus is not currently eligible to borrow more funds from the IMF, and because U.S.-Mexican trade is governed by the North American Free Trade Agreement. To identify the priority import barriers and export policies of Brazil, Indonesia, Korea, and Thailand, we relied principally on USTR's three most recent (1997-99) National Trade Estimate Reports on Foreign Trade Barriers.
These reports identify those foreign import policies and practices that have the greatest potential to affect U.S. exports. We also relied upon USTR’s Trade Policy Agenda and Annual Report of the President of the United States on the Trade Agreements Program. These reports identify the executive branch’s annual trade priorities. We also used recent State Department Country Reports on Economic Policy and Trade Practices. In addition, we interviewed U.S. government officials from USTR, the Department of Commerce, and the Department of State. We reviewed the results of countervailing duty reviews and investigations by the ITC and the Department of Commerce’s International Trade Administration, which were reported in the Federal Register. And we met with officials from the Department of the Treasury and the IMF to discuss import and export policies in the context of each country’s current IMF program. Information on foreign laws and policies does not reflect our independent legal analysis but is based on interviews and secondary sources. To identify and determine the status of trade liberalization measures that Brazil, Indonesia, Korea, and Thailand have undertaken or have committed to undertake within the context of their recent IMF financing arrangements, we defined their “recent” IMF programs as those that started since June 1997 when the Asian financial crisis began in Thailand. Several of the countries technically have had more than one IMF financing arrangement since then because their original programs were expanded. We considered a measure to be trade liberalization in nature if it involved eliminating or lowering either tariffs or nontariff barriers to imports; concerned policies that promote exports, such as subsidies; or involved export restrictions. We reviewed public and nonpublic country and IMF documents, including the countries’ letters of intent and memorandums of economic and financial policies. 
We also reviewed IMF staff reports on the countries’ progress in attaining the objectives of their financing programs and met with IMF and U.S. government officials. We based our general discussion of the potential impact of export policies on economic literature and reports that explain how the U.S. government analyzes the impact of imports and export policies on trade. We identified the rank of Brazil, Indonesia, Korea, and Thailand as exporters among IMF borrowers by examining data prepared by the IMF. The latest available data cover 1997. We identified the four nations’ ranks as world exporters by examining the WTO’s April 1999 report on world trade in 1998. Exports net of intra-European Union trade were used. We identified the rank of Brazil, Indonesia, Korea, and Thailand as suppliers of specific product groups by examining the U.S. Department of Commerce’s 1999 Industrial and Trade Outlook report and the ITC’s 1998 annual Trade Shifts report. To identify which of the four countries’ export policies might harm U.S. industries, we reviewed the results of countervailing duty reviews and investigations by the ITC and the Department of Commerce’s International Trade Administration, which were reported in the Federal Register. We looked exclusively at subsidies; that is, financial contributions by a government that confer a financial benefit to selected companies, or that are prohibited by WTO agreements. We also relied on the Commerce Department’s Electronic Subsidies Enforcement Library to review countervailing duty cases filed, and spoke with officials from the Department’s Subsidies Enforcement Office to discuss those cases. In addition, we reviewed each of the four countries’ most recent export subsidy notifications to the WTO’s Committee on Subsidies and Countervailing Measures. However, information contained in these notifications was dated. To determine which subsidies were subject to the WTO dispute settlement activity or investigations under U.S. 
countervailing duty law, we reviewed the most current Overview of the State-of-Play of WTO Disputes in addition to the Commerce Department sources previously cited. Where practicable, we identified overlaps and linkages between the various types of policies and issues, but the available information was not always clear or detailed enough to identify such linkages. We identified products that showed rising imports and falling prices by examining trade data for the years 1997 and 1998. Specifically, we identified product sectors showing large increases in U.S. imports from each partner country by analyzing all U.S. imports from these nations at both the 4- and the 10-digit levels of aggregation of the U.S. Harmonized Tariff Classification System. Products that met certain value, market share, and import increase thresholds were analyzed further. First, we netted out import surges that appeared to be coming at the expense of other foreign suppliers, instead of U.S. producers. Second, we determined whether price declines had occurred for the remaining items by calculating unit values of imports at the 10-digit level. The result of this screening process was that 62 4-digit items, amounting to $4.1 billion in imports, showed the specified increases in imports and price declines, as did 300 10-digit harmonized schedule products, accounting for $5.3 billion in imports. In reviewing whether a domestic U.S. industry exists, we examined regular monitoring reports and secured staff-level insights from selected industry experts at the ITC, the U.S. Department of Commerce, the U.S. Department of Agriculture, and other sources. A limitation of this approach is that it is somewhat imprecise and based on readily available information, which may be incomplete. However, it was not practicable to use other currently available information on U.S. production because that information was dated and did not neatly match the classifications used for trade and tariff analysis.
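The unit-value step of the screening described above can be sketched as follows. This is a simplified illustration with hypothetical figures; the actual analysis also applied value, market share, and import-increase thresholds and netted out surges that displaced other foreign suppliers rather than U.S. producers:

```python
def unit_value(customs_value: float, quantity: float) -> float:
    """Unit value of imports: customs value divided by quantity."""
    return customs_value / quantity


def flag_product(value_97: float, qty_97: float,
                 value_98: float, qty_98: float) -> bool:
    """Flag a tariff line whose import value rose from 1997 to 1998
    while its unit value (a rough price proxy) declined."""
    rising_imports = value_98 > value_97
    falling_price = unit_value(value_98, qty_98) < unit_value(value_97, qty_97)
    return rising_imports and falling_price


# Hypothetical 10-digit line: import value rose from $10M to $14M while
# the unit value fell from $5.00 to $4.00, so the line would be flagged.
flagged = flag_product(10_000_000, 2_000_000, 14_000_000, 3_500_000)
```

Lines passing this screen would then be examined further, since a falling unit value is only suggestive of a price decline and can also reflect shifts in product mix.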
We identified products that were eligible for Generalized System of Preferences treatment by examining codes in the U.S. tariff schedule identifying such treatment. We discussed the leading import surges and price declines identified with staff at the U.S. Department of Commerce and the ITC. We relied upon these informal contacts as well as information on export policies developed in a previous section and information on formal petitions for import relief made under U.S. trade law to identify products about which U.S. producers have expressed concern regarding harm from imports and/or unfair trade practices by Brazil, Indonesia, Korea, and Thailand. Such information is instructive but must be recognized as indicative only. Fully identifying and analyzing the factors contributing to rising imports; the nature, extent, and impact of competition from imports on U.S. producers; and the extent of export subsidies would require information that is beyond the scope of this report. We performed our work between November 1998 and May 1999 in accordance with generally accepted government auditing standards. In addition to those named above, Kim Frankena, Tim Wedding, Michael Zola, David Artadi, Carlos Evora, and Rona H. Mendelsohn made key contributions to this report. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 37050 Washington, DC 20013 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony.
To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touch-tone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a legislative requirement, GAO provided information on the International Monetary Fund (IMF), focusing on the: (1) extent to which IMF borrower countries restrict international trade and the borrowers whose trade has the potential to affect the United States; (2) reported trade barriers and export policies of four IMF borrowers that are among those with the greatest capacity to affect the United States--Brazil, Indonesia, the Republic of Korea, and Thailand--and recent actions reported to have been taken to reduce those barriers or modify policies; (3) actions, in the context of their recent IMF financing arrangements, the four countries have taken or are committed to take to liberalize their trading systems; and (4) extent to which the impact of the four countries' export policies on the United States can be predicted and measured and which U.S. industry sectors might be affected by recent changes in trade from these countries. GAO noted that: (1) although the 98 IMF borrowers all restrict trade to some extent, only a few are large enough traders to affect individual sectors of the U.S. 
economy; (2) according to IMF and other measures of trade restrictiveness, borrowers have generally reduced their tariff and nontariff barriers since 1990; (3) however, according to the IMF measure, about half still maintain moderate to restrictive barriers; (4) GAO studied four countries--Brazil, Indonesia, Korea, and Thailand; (5) in 1998, Thailand had an average tariff rate of about 18 percent, Korea had an average tariff rate of about 8 percent, and Brazil's and Indonesia's rates fell in between; (6) Brazil, Indonesia, Korea, and Thailand have experienced either rising trade surpluses or falling trade deficits with the United States and other countries since their recent financial crises began; (7) countries in an IMF financing arrangement sometimes have liberalized their trade systems within the context of their arrangements; (8) in addition to trade liberalization measures, as part of their IMF programs, Korea, Indonesia, and Thailand have committed to further open their economies to foreign investment and to substantially restructure their financial and corporate sectors; (9) these commitments, if fully implemented, could lead to increased U.S. investment in and trade with these countries; (10) the policies maintained by Brazil, Indonesia, Korea, and Thailand to encourage exports could potentially distort trade and displace production by U.S. producers, even though they may benefit other U.S. companies or consumers; (11) however, the large macroeconomic changes in these countries caused by their recent financial crises greatly complicate predicting and measuring the policies' impact on the United States because the macroeconomic changes have probably been a more important source of recent changes in trade flows; (12) GAO's analysis of 1997-1998 trade data reveals that overall U.S. imports from Brazil, Indonesia, Korea, and Thailand rose moderately in 1998, but by less than U.S. 
imports from other trading partners; (13) however, products accounting for about 16 percent of the value of U.S. imports from these four IMF borrowers registered large increases and falling U.S. prices during this period; and (14) some of these product sectors, notably steel, have already been subject to petitions by U.S. industry for relief from unfairly traded imports under U.S. trade law, while the executive branch is monitoring imports of others of these products, including semiconductors, chemicals, and paper and paper products.
GAO is currently conducting several reviews related to first responder grants. One of these reviews, to be published within the next few weeks, addresses issues of coordinated planning and the use of federal grant funds for first responders in the National Capital Region, which encompasses the District of Columbia and 11 surrounding jurisdictions. Another effort is focused on intergovernmental efforts to manage fiscal year 2002 and 2003 grants administered by the Office for Domestic Preparedness (ODP) within the Department of Homeland Security (DHS). Because much of our work in this area is ongoing and our findings remain preliminary, my testimony today will focus principally on the major findings of the reports on preparedness funding issued by the DHS OIG and the House Select Committee, supplemented by some examples from our work in four selected locations in three states. Our analysis focused on three ODP grant programs: the State Domestic Preparedness Grant Program of fiscal year 2002, with $315,440,000 in appropriations, and the fiscal year 2003 State Homeland Security Grant Programs, Parts I and II, with appropriations of $566,295,000 and $1,500,000,000, respectively. The purpose of this work was to document the flow of selected fiscal year 2002 and 2003 grant monies from ODP to local governments and the time required to complete each step in the process. In doing this work, we met with state and local officials in each state and obtained and reviewed federal, state, and local documentation. We did this work between December 2003 and February 2004 in accordance with generally accepted government auditing standards. In recent months, the Conference of Mayors, members of Congress, and others have expressed understandable concerns about delays in the process by which congressional appropriations for first responders reach the local fire fighter, police officer, or other first responder.
The reports by DHS OIG and the House Select Committee examined the distribution of homeland security grant funding to states and local governments to understand what obstacles—if any—prevent the expeditious flow of grant funding from the federal government to state and local governments. In March 2003, ODP was moved from the Department of Justice to the DHS. In fiscal years 2002 and 2003, ODP managed about $3.5 billion under 16 separate grant programs. Generally, states and local grant recipients could use these funds for some combination of training, new equipment, exercise planning and execution, general planning efforts, and administration. The largest of these grants were the State Homeland Security Grant Programs and the Urban Area Security Initiative grants. In both grant programs, states may retain 20 percent of total state grant funding but must distribute the remaining 80 percent to local governments within the state. Before discussing some of the issues that have been raised about the distribution of federal grant funds to first responders, I would like briefly to discuss some basic issues associated with using those funds effectively. A key goal of first responder funding should be developing and maintaining the capacity and ability of first responders to respond effectively to and mitigate incidents that require the coordinated actions of first responders. These incidents encompass a wide range of possibilities, including daily auto accidents, truck spills, and fires; major natural disasters such as floods, hurricanes, and earthquakes; or a terrorist attack that involves thousands of injuries. Effectively responding to such incidents requires well-planned, well-coordinated efforts by all participants. Major events, such as natural disasters or terrorist attacks, may require the coordinated response of first responders from multiple jurisdictions within a region, throughout a state or among states. 
Thus, it follows that developing a coordinated plan for such events should generally involve participants from the multiple jurisdictions that would be involved in responding to the event. However, a major challenge in administering first responder grants is balancing two goals: (1) minimizing the time it takes to distribute grant funds to state and local first responders and (2) ensuring appropriate planning and accountability for effective use of the funds. In fiscal years 2002 and 2003, at least 16 federal grants were available for first responders, each with somewhat different requirements. Previously, we have noted that substantial problems occur when state and local governments attempt to identify, obtain, and use the fragmented grants-in-aid system to meet their needs. Such a proliferation of programs leads to administrative complexities that can confuse state and local grant recipients. Congress is aware of the challenges facing grantees and is considering several bills that would restructure first responder grants. Much of the concern about delays in distributing federal grant funds to local first responders has involved the State Homeland Security Grants, which are distributed to states on the basis of a formula. Each state received 0.75 percent of the total grant appropriation, with the remaining funds distributed according to state population. There are a number of sequential steps common to the distribution of ODP State Homeland Security Grants from ODP to the states and from the states to local governments. They include the following:

1. Congress appropriates funds.
2. ODP issues grant guidance to states.
3. State submits application, including spending plans, to ODP.
4. ODP makes award to states, noting any special conditions that must be cleared before the funds can be used.
5. State meets and ODP lifts special conditions, if applicable.
6. State subgrants at least 80 percent of its funds to local governments.
7. Local governments purchase equipment directly or through the state.
8. Local governments submit receipts to the state for reimbursement.
9. State draws down grant funds to reimburse local governments.

The total time required to complete these steps is dependent upon ODP requirements and state and local laws, requirements, regulations, and procedures. Generally, the DHS OIG report and the report of the House Select Committee on Homeland Security found similar causes of delays in getting funds to local governments and first responder agencies. These included delays in completing state and local planning requirements and budgets; legal requirements for the procedures to be used by local governments in accepting state grant allocations; the need to establish procedures for the use of the funds, such as authority to buy equipment and receive reimbursement later; and procurement requirements, such as bidding procedures. Generally, neither the OIG report nor the House Select Committee report found that the delays were principally due to ODP’s grant management procedures and processes. Both the DHS OIG report and the House Select Committee report found that ODP’s grant application process was not a major factor in delaying the distribution of funds to states. The DHS OIG found that in fiscal years 2002 and 2003, ODP reduced the time it took to make on-line grant application guidance and applications available to states, process grant applications, and award the grants to states after applications were submitted. The DHS OIG found that the total number of days from enactment of the grant legislation to award of grants to states declined from an average of 292 days for fiscal year 2002 grants to an average of 77 days for fiscal year 2003 grants. 
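The base-plus-population allocation formula described above can be sketched in a few lines of Python. This is an illustrative reconstruction of the arithmetic only, not ODP's actual implementation; the function name and the example populations are invented for the sketch.

```python
def allocate_state_grants(total_appropriation, state_populations):
    """Split a grant appropriation among states per the formula above:
    each state first receives a 0.75 percent base share of the total,
    and the remainder is distributed in proportion to state population.
    Illustrative sketch only -- not ODP's actual allocation code.
    """
    base_share = 0.0075 * total_appropriation
    # Funds left over after every state gets its base share
    remainder = total_appropriation - base_share * len(state_populations)
    total_population = sum(state_populations.values())
    return {
        state: base_share + remainder * population / total_population
        for state, population in state_populations.items()
    }

# Hypothetical three-state example using the fiscal year 2003
# Part I appropriation total (populations are made up)
awards = allocate_state_grants(
    566_295_000,
    {"State A": 5_000_000, "State B": 20_000_000, "State C": 1_000_000},
)
```

Note that because every eligible jurisdiction receives the same 0.75 percent base share regardless of size, small-population states receive proportionally more per capita than large ones; with the roughly 50-plus jurisdictions eligible under the actual program, the base shares alone consume a substantial fraction of the appropriation before population is considered.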
For the three states we examined, we found that the time between the enactment of the appropriation and ODP’s award of the grant to these states declined from 8 months in fiscal year 2002 to 3 months for the fiscal year 2003 State Homeland Security Grant Program, Part I, and 2 months for the fiscal year 2003 State Homeland Security Grant Program, Part II. One factor that did delay the states’ ability to use ODP grant funds was the imposition of special conditions. In fiscal years 2002 and 2003, ODP imposed special conditions on the state homeland security formula grants if the state had failed to adequately complete one of the requirements of the grant application. For example, in fiscal years 2002 and 2003, to receive funding, states had to submit detailed budget worksheets to identify how grant funds would be used for equipment, training, and exercises. To accelerate the grant distribution process, ODP would award funds to states that had not completed the budget detail worksheets, with the special condition that states and localities would be essentially unable to use the funds until the required budgets were submitted and approved. Thus, the time it took to lift the special conditions was largely dependent upon the time it took state and local governments to submit the required documentation. States could not begin to draw down the grant funds until the special conditions were met. In one state we reviewed, ODP notified the state of the special conditions on May 28, 2003, and the conditions were removed on August 6, 2003, after the state had met those conditions. In another state, ODP notified the state of the special conditions on September 13, 2002, and the conditions were removed on March 18, 2003. ODP imposed special conditions on both the fiscal year 2002 State Domestic Preparedness Grant Program and the fiscal year 2003 State Homeland Security Grant Program, Part I, but not on the State Homeland Security Grant Program, Part II. 
After ODP makes its initial award, the state must subgrant at least 80 percent of its grant award to local units of government. In fiscal year 2003, the states had to certify to ODP within 45 days that they had made these subgrants. The subgrant entities and procedures can vary with each state, making it hard to generalize about this phase of the distribution process. In our work, we found that some states subgranted the funds to the county level, while another subgranted to regional task forces composed of several counties. Subgrantees also varied in their procedures to distribute funds to local governments. Some subgrantees managed the grant process themselves, while others chose to pass funds further down, to a county or city within the jurisdictional area. As reported by the DHS OIG, Congress adopted appropriation language for the fiscal year 2003 State Homeland Security Grant, Part II, that required states to transfer 80 percent of first responder grant funds to local jurisdictions within 45 days of the funds being awarded by ODP. This requirement was included in the appropriation bill to ensure that states pass funds down to local jurisdictions quickly, and ODP incorporated this requirement into its grant application guidance. However, according to the DHS OIG report, this action had a limited effect because most states met the 45-day deadline by counting funds as transferred when the states agreed to allocate a specific amount of the grant to a local jurisdiction, even if the state had not determined how the funds would be spent or when contracts for goods and services would be let. Additionally, many states and local jurisdictions delayed spending of prior year grant funds in order to meet the fiscal year 2003 requirement. 
The House Select Committee staff also reported that nearly all states had met this 45-day requirement with respect to 2003 funding as of February 2004, but noted that this may not reflect the actual availability of funds for expenditure by local jurisdictions. The committee report cited the example of Seattle, Washington. While Seattle had been awarded $30 million in May 2003, it received authorization to spend these funds only shortly before the April 2004 release of the committee’s report. In the three states we examined, we also found that states had certified they had allocated funds to local jurisdictions within the required 45-day period. According to the DHS OIG, state and local governments were sometimes responsible for delaying the delivery of fiscal year 2002 grant funds to first responders because various governing and political bodies within the states and local jurisdictions had to approve and accept the grant funds. Six of the 10 states included in the DHS OIG’s sample reported that their own state’s review and approval process was one of the top three reasons that the funds had not been spent by the time the report was published. For example, one of three states for which data were available took 22 days to accept ODP’s award and 51 days to award a subgrant to one of its local jurisdictions; the local jurisdiction did not accept the grant for another 92 days. Another state took 25 days to accept ODP’s grant award and up to 161 days to award the funds to its local jurisdictions. Local jurisdictions then took up to 50 days to accept the awards. Our work showed similar results. One city was notified on July 17, 2003, that grant funds were available for use, but the city council did not vote to accept the funds until November 7, 2003. 
The House Select Committee reported that, in over half of the states it reviewed, local jurisdictions had not submitted detailed spending plans to the states by the time the states had transferred grant funds to them. Specifically, the committee found that, even though a reasonable estimate of the available award amount was often available months earlier, many local jurisdictions waited to initiate their planning efforts until they were officially notified of their grant awards. Because ODP imposed special conditions in some grant years, these local jurisdictions could not begin to draw down funds until they provided the detailed budget documentation, outlining how the funds would be spent, as required by ODP. For the fiscal year 2002 statewide homeland security grants, local jurisdictions and state agencies were required to prepare, submit, and receive approval of detailed budget worksheets that specifically accounted for all grant funds provided. This specific detailing of items included not only individual equipment items traditionally accounted for as long-term capital equipment, but also all other items ordinarily recorded in accounting records as consumable items, such as disposable plastic gloves, that usually need not be accounted for individually. This detailed budget information took time for local jurisdictions to prepare and for the states and ODP to review and approve. Since the first round of fiscal year 2003 state homeland security grants, ODP has not linked the submission and approval of detailed budget information to the release of grant funds. ODP required the submission and approval of the same detailed budget worksheets for the fiscal year 2003 statewide grants, but did not condition the release of funds on their submission and approval. For the fiscal year 2004 grants, ODP still requires local jurisdictions to submit detailed budget worksheets to the state, but not to ODP, for approval. 
The DHS OIG also found that there were numerous reasons for delays in spending grant funds. Some were unavoidable; others, the OIG found, were remediable. In general, the DHS OIG found identifying the highest priority for spending grant funds to be a difficult task. Most states the DHS OIG visited were not satisfied with the needs analysis that they had done prior to September 11, 2001. Some states took the time to update their homeland security strategies, and one state delayed fiscal year 2002 grant spending until it had completed a new strategy using ODP’s fiscal year 2003 needs assessment tool. The DHS OIG also found little consistency in how the states manage the grant process. The states used various methods for identifying and prioritizing needs and allocating grant funds. States may rely on the work of regional task forces, statewide committees, county governments, mutual aid groups, or local fire and police organizations to identify and prioritize grant spending. Both the DHS OIG report and the House Select Committee report noted that state and local procurements have, in some cases, been affected by delays resulting from specific procurement requirements. Some states purchase equipment centrally for all jurisdictions, while others subgrant funds to local jurisdictions that make their own purchases. In these latter instances, local procurement regulations can affect the issuance of equipment purchase orders. The House Select Committee report discussed how state and local procurement processes and regulations could slow the expenditure of grant funds. For example, in Kentucky, an effort was made to organize bidding processes for localities and to provide them with pre-approved equipment and services lists. However, state and local laws require competitive bidding for any purchases above $20,000, a requirement that can delay actual procurements. 
Moreover, if bids had been requested for a proposal and those bid specifications were not met, then the bidding process had to start over again. As Kentucky’s Emergency Managing Director explained, “There is a process and procedure that must be gone through before localities can actually spend the funds, and the state has not identified funds that are exempt from these local rules of procedure that are in place.” In one of the jurisdictions for which we obtained documentation, we also found that procurement regulations may require that funds be available prior to the issuance of equipment purchase orders. In that jurisdiction, satisfying this requirement took from June 18, 2003, to September 4, 2003, before purchase orders could be issued. In the individual jurisdictions in the three states for which we obtained documentation, we also found that some apparent delays in obligating grant funds resulted from the time normally required by local jurisdictions to purchase and contract for items: preparing requests for proposals, evaluating them once received, and having purchase orders approved by legal departments and governing councils and boards. In one case, the time between the city controller’s release of funds and the issuance of the first purchase order was about 3 months, from September 4, 2003, to December 15, 2003. The reports by the DHS OIG and by the Select Committee, as well as the preliminary work we have undertaken, support the conclusion that local first responders may not have anticipated the natural delays that should have been expected in the complex process of distributing dramatically increased funding through multiple governmental levels while maintaining procedures to ensure proper standards of accountability at each level. The evidence available suggests that the process is becoming more efficient and that all levels of government are discovering and institutionalizing ways to streamline the grant distribution system. 
These increased efficiencies, however, will not continue to occur unless federal, state, and local government each continue to examine their processes for ways to expedite funding for the equipment and training needed by the nation’s first responders. At the same time, it is important that the quest for speed in distributing funds does not hamper the planning and accountability needed to ensure that the funds are spent on the basis of a comprehensive, well-coordinated plan to provide first responders with the equipment, skills, and training needed to be able to respond quickly and effectively to a range of emergencies, including, where appropriate, major natural disasters and terrorist attacks. Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other members of the subcommittee may have. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The terrorist attacks of September 11, 2001, highlighted the critical role first responders play at the state and local level when a disaster or emergency strikes. In fiscal years 2002 and 2003, Congress appropriated approximately $13.9 billion for domestic preparedness. A large portion of these funds was intended for the nation's first responders to enhance their ability to address future emergencies, including potential terrorist attacks. These funds are primarily to assist with planning, equipment purchases, training and exercises, and administrative costs. They are available to first responders mainly through the State Homeland Security Grant Programs and Urban Area Security Initiative grants. Both programs are administered through the Department of Homeland Security's Office for Domestic Preparedness. In this testimony, GAO addressed the need to balance expeditious distribution of first responder funds to states and localities with accountability for effective use of those funds and summarized major findings related to delays in distributing funds and delays involving funds received by local governments, as presented in reports issued by the Department of Homeland Security Office of Inspector General and the House Select Committee on Homeland Security. The testimony incorporated supporting evidence on first-responder funding issues based on ongoing GAO work in selected states. The reports of the Department of Homeland Security Office of Inspector General (OIG) and the House Select Committee on Homeland Security examined the distribution of funds to states and localities. Both reports found that although there have been delays in getting federal first-responder funds to local governments and first-responder agencies, the grant management requirements, procedures, and processes of the Office for Domestic Preparedness (ODP) were not the principal cause. 
According to the OIG's report, in fiscal years 2002 and 2003, ODP reduced the time required to provide on-line grant application guidance to states, process grant applications, and make grant awards. For example, for fiscal year 2002 grants, it took 292 days, on average, from the time the grant legislation was enacted to the awarding of grants to states. For fiscal year 2003 grants, the total cycle was reduced to 77 days, on average. According to the reports, most states met deadlines for subgranting first-responder funds to local jurisdictions. The fiscal year 2003 State Homeland Security Grant Programs and Urban Area Security Initiative required states to transfer 80 percent of first-responder grant funds to local jurisdictions within 45 days of the funds being awarded by ODP. Most states met that deadline by counting funds as transferred when states agreed to allocate a specific amount of the grant to a local jurisdiction, the OIG's report found. The House Select Committee staff concurred. And in the three states GAO examined, states certified they had allocated funds to local jurisdictions within the 45-day period. Delays in allocating grant funds to first responder agencies are frequently due to local legal and procedural requirements, the OIG's report found. State and local governments sometimes delayed delivery of fiscal year 2002 grant funds, for example, because governing and political bodies within the states and local jurisdictions had to approve and accept the grant funds. GAO's work indicated a similar finding. In one state GAO reviewed, roughly four months elapsed from the date the city was notified that grant funds were available to the date when the city council voted to accept the funds. Both reports GAO reviewed found that state and local procurement processes have, in some cases, been affected by delays resulting from specific procurement requirements. 
While some states purchase first-responder equipment centrally for all jurisdictions, in some instances, those purchases are made locally and procurement may be delayed by competitive bidding rules, among other things. It is important to note that those who manage homeland security grants to states and local governments must balance two sometimes competing goals: (1) getting funds to states and localities expeditiously and (2) assuring that there is appropriate planning and accountability for the effective use of the funds.
ATF is the chief enforcer of explosives laws and regulations in the United States and is responsible for licensing and regulating explosives manufacturers, importers, dealers, and users. ATF is also responsible for regulating most, but not all, explosives storage facilities. Under federal explosives regulations, a license is required for persons who manufacture, import, or deal in explosives and, with some exceptions, for persons who intend to acquire explosives for use. No license is required solely to operate an explosives storage facility. State and local government agencies are not required to obtain an explosives license to use and store explosives. However, all persons who store explosive materials (including state and local entities) must conform with applicable ATF storage regulations, irrespective of whether they are required to obtain an explosives license for other purposes. According to ATF data, as of February 2005 there were 12,028 federal explosives licensees in the United States. Roughly 7,500 of these had some kind of explosives storage facility, consisting of 22,791 permanent or mobile storage magazines. ATF storage regulations include requirements relating to the safety and security of explosives storage magazines—that is, any building or structure (other than an explosives manufacturing building) used for storage of explosive materials. Regarding safety, the storage regulations include requirements related to location, construction, capacity, housekeeping, interior lighting, and magazine repairs, as well as a requirement that the local fire safety authority be notified of the location of each storage magazine. Regarding security, the ATF storage regulations include the following requirements: Explosives handling. 
All explosive materials must be kept in locked magazines unless they are in the process of manufacture, being physically handled in the operating process of a licensee or user, being used, or being transported to a place of storage or use. Explosives are not to be left unattended when in portable storage magazines.

Magazine construction. Storage magazines must be theft-resistant and must meet specific requirements dealing with such things as mobility, exterior construction, door hinges and hasps, and locks.

Magazine inspection. Storage magazines must be inspected at least every 7 days. This inspection need not be an inventory, but it must be sufficient to determine if there has been an unauthorized entry or attempted entry into the magazines, or unauthorized removal of the magazine contents.

Magazine inventory. Within the magazine, containers of explosive materials are to be stored so that marks are visible. Stocks of explosive materials are to be stored so they can be easily counted and checked.

Notwithstanding the security requirements described above, ATF storage regulations do not require explosives storage facilities to have any of the following physical security features—fences, restricted property access, exterior lighting, alarm systems, or electronic surveillance. Also, while ATF licensing regulations require explosives licensees to conduct a physical inventory at least annually, there is no similar inventory requirement in the storage regulations applicable to other persons who store explosives. According to ATF data, the number of reported state and local government thefts is relatively small when compared with the total number of thefts that have occurred nationwide. During a recent 3-year period (January 2002—February 2005), ATF received reports of 205 explosives thefts from all sources nationwide. 
By comparison, during this same period, only 9 thefts were reported that involved state and local government storage facilities—5 involving state and local law enforcement agencies, 3 involving state government entities (all universities), and 1 involving a county highway department. The amounts of explosives reported stolen or missing from state and local government facilities are relatively small when compared with the total amounts of stolen and missing explosives nationwide. During a recent 10-month period for which data were available (March 2003 through December 2003), there were a total of 76 theft incidents nationwide reported to ATF, amounting to a loss of about 3,600 pounds of high explosives, 3,100 pounds of blasting agents, 1,400 detonators, and 2,400 feet of detonating cord and safety fuse. By comparison, over an entire 10-year period (January 1995 through December 2004), ATF received only 14 reports of theft from state and local law enforcement storage magazines. Reported losses in these cases were about 1,000 pounds of explosive materials, and in 10 of the incidents less than 50 pounds of explosives was reported stolen or missing. While the ATF theft data indicate that thefts from state and local facilities make up only a small part of the overall thefts nationwide, these reports could be understated by an unknown amount. There are two federal reporting requirements relating to the theft of explosives. One is specific to all federal explosives licensees (and permittees) and requires any theft or loss of explosives to be reported to ATF within 24 hours of discovery. The second reporting requirement generally requires any other “person” who has knowledge of the theft or loss of any explosive materials from his stock to report to ATF within 24 hours. 
Although the term “person” as defined in law and regulation does not specifically include state and local government agencies, ATF has historically interpreted this requirement as applying to nonlicensed state and local government explosives storage facilities. However, ATF officials acknowledged that some state and local government entities could be unsure as to their coverage under the theft reporting requirements and, as a result, may not know they are required to report such incidents to ATF. Indeed, during our site visits and other state and local contacts, we identified five state and local government entities that had previously experienced a theft or reported missing explosives— two involving local law enforcement agencies, two involving state universities, and one involving a state department of transportation. However, one of these five incidents did not appear in ATF’s nationwide database of reported thefts and missing explosives. Based on these findings, the actual number of thefts occurring at state and local government storage facilities nationwide could be more than the number identified by ATF data. There is no ATF oversight mechanism in place to ensure that state and local government facilities comply with federal explosives regulations. With respect to private sector entities, ATF’s authority to oversee and inspect explosives storage facilities is primarily a function of its licensing process. However, state and local government entities are not required to obtain a federal license to use and store explosives. In addition, ATF has no specific statutory authority to conduct regulatory inspections at state and local government storage facilities. Under certain circumstances, ATF may inspect these facilities—for example, voluntary inspections when requested by a state and local entity, and mandatory annual inspections at locations where ATF shares space inside a state and local storage magazine. 
Regarding those state and local government facilities that ATF does not inspect, ATF officials acknowledged they had no way of knowing the extent to which these facilities are complying with federal explosives regulations. ATF officials stated that if the agency were to be required to conduct mandatory inspections at all state and local government storage facilities, they would likely need additional resources to conduct these inspections because they are already challenged to keep up with inspections that are mandated as part of the explosive licensing requirements. Under provisions of the Safe Explosives Act, ATF is generally required to physically inspect a license applicant’s storage facility prior to issuing a federal explosives license—which effectively means at least one inspection every 3 years. At the same time, however, ATF inspectors are also responsible for conducting inspections of federal firearms licensees. The Department of Justice Inspector General reported that ATF has had to divert resources from firearms inspections to conduct explosives storage facility inspections required under the Safe Explosives Act. Despite recent funding increases for ATF’s explosives program, giving ATF additional responsibility to oversee and inspect state and local government storage facilities could further tax the agency’s inspection resources. According to ATF officials, because inspection of explosives licensees is legislatively mandated, the effect of additional state and local government explosives responsibilities (without related increases in inspector resources) could be to reduce the number of firearms inspections that ATF would be able to conduct. ATF does not collect nationwide information on the number and location of state and local government explosives storage facilities, nor does the agency know the types and amounts of explosives being stored in these facilities. 
Since data collection is a function of the licensing process and state and local facilities are not required to be licensed, no systematic information about these facilities is collected. With respect to private sector licensees, ATF collects descriptive information concerning explosive storage facilities as part of the licensing process. ATF license application forms require applicants to submit information about their storage capabilities, including specific information about the type of storage magazine, the location of the magazine, the type of security in place, the capacity of the magazine, and the class of explosives that will be stored. ATF also collects information about licensed private sector storage facilities during mandatory inspections, through examination of explosives inventory and sales records and verification that storage facilities meet the standards of public safety and security as prescribed in the regulations. During the course of our audit work, we compiled some data on state and local government entities that used and stored explosives. At the 13 state and local law enforcement bomb squads we visited, there were 16 storage facilities and 30 storage magazines. According to Federal Bureau of Investigation data, there are 452 state and local law enforcement bomb squads nationwide. However, because of the limited nature of our fieldwork, we cannot estimate the total number of storage facilities or magazines that might exist at other bomb squad locations. Moreover, other state and local government entities (such as public universities and state and local departments of transportation) in addition to law enforcement bomb squads also have explosives storage facilities. At the one public university we visited, there were 2 storage facilities and 4 storage magazines. 
Again, however, because of the limited nature of our fieldwork, we cannot estimate the total number of storage facilities and magazines that exist at these other state and local government entities nationwide. We found that security measures varied at the 14 state and local government entities we visited. Overall, we visited 2 state bomb squads, 11 city or county bomb squads (including police departments and sheriffs’ offices), and 1 public university. Four of the 14 state and local entities had 2 separate storage areas, resulting in a total of 18 explosives storage facilities among the 14 entities. Three of these storage facilities were located on state property, 7 were located at city or county police training facilities, 7 were located on other city or county property, and 1 was located at a metropolitan airport. Eleven of the 18 explosives storage facilities we visited contained multiple magazines for the storage of explosives. In all, these 18 facilities comprised a total of 34 storage magazines. All of the 18 facilities contained a variety of high explosives, including C-4 plastic explosive, detonator cord, TNT, binary (two-part) explosives, and detonators. Estimates of the amount of explosives being stored ranged from 10 to 1,000 pounds, with the majority of the entities (9) indicating they stored 200 pounds or less. At each of the 14 state and local entities we visited, we observed the types of security measures in place at their explosives storage facilities. Our criteria for identifying the type of security measures in place included existing federal explosives storage laws and regulations (27 C.F.R., Part 555, Subpart K) and security guidelines issued by the explosives industry (the Institute of Makers of Explosives). Most of these security measures (fencing, vehicle barriers, and electronic surveillance, for example) are not currently required under federal storage regulations. 
However, we are presenting this information in order to demonstrate the wide range of security measures actually in place at the time of our visits. Physical security. Thirteen of the 18 storage facilities restricted vehicle access to the facility grounds by way of a locked exterior security gate or (in one case) by virtue of being located indoors. Five of the 13 facilities restricted vehicle access after normal working hours (nights or nights and weekends). Officials at 7 other facilities said that vehicle access to the facilities was restricted at all times, including the 1 indoor facility that was located in the basement of a municipal building. Six of the 18 storage facilities had an interior barrier—consisting of a chain-link fence with a locked gate—immediately surrounding their storage magazines to prevent direct access by persons on foot. One other facility (the indoor basement facility) relied on multiple locked doors to prevent access by unauthorized personnel. Conversely, at 1 facility we visited, the storage magazine could be reached on foot or by vehicle at any time because it did not have fencing or vehicle barriers to deter unauthorized access. In addition to restricted access to storage facilities, officials at all of the 18 storage facilities we visited told us that official personnel—either bomb squad or other police officers—patrolled or inspected the storage facility on a regular basis. And, at 9 of the 18 storage facilities we visited, officials said that state or local government employees—police training personnel, jail or correctional personnel, or other city/county employees—maintained a 24-hour presence at the facilities. Electronic security. Four of the 18 explosives storage facilities had either an alarm or video monitoring system in place. 
Two storage facilities with video surveillance took advantage of existing monitoring systems already in place at their storage locations—one located at a county correctional facility and one located inside a municipal/police building. Officials at 4 storage facilities told us they had alarm systems planned (funding not yet approved), and officials at 3 facilities said they had alarm systems pending (funding approved and awaiting installation). Officials at 2 facilities also told us they planned to install video monitoring. Regarding the feasibility of installing electronic monitoring systems, 4 officials noted that storage facilities are often located in remote areas without easy access to electricity. Regarding the possibility of new federal regulations that would require electronic security at storage magazines, 9 officials told us they would not object as long as it did not create an undue financial burden. Inventory and oversight issues. Officials at all 14 of the entities we visited told us they performed periodic inventories of the contents of their explosives storage magazines in order to reconcile the contents with inventory records. In addition, 9 entities said they had received inspections of their storage facilities, primarily by ATF. Six entities told us they received the inspections on a periodic basis, with another 3 entities having received a onetime inspection. Regarding oversight by multiple regulatory authorities, one entity had been inspected by both ATF and a local government authority, while another entity was inspected on a recurring basis by both ATF and a state government authority. Five of the 14 entities we visited told us they were required to obtain a license from state regulatory authorities to operate their explosives storage facilities. One of these entities was also required (by the state regulatory authority) to obtain a federal explosives license issued by ATF. 
Officials at 13 entities we visited said they did not object to the possibility of federal licensing or inspection of their explosives storage facilities. Officials at 3 state and local entities noted that additional federal oversight was not a concern as long as they were not held to a higher standard of security and safety than ATF requires of private industry. Thefts and compliance issues. Two of the five thefts we documented (through our site visits and other state and local contacts) occurred at entities we visited. At one storage facility, officials told us that criminals had once used a cutting torch to illegally gain entry to an explosives storage magazine. At another storage facility, officials said that an unauthorized individual had obtained keys to a storage magazine and taken some of the explosives. In both incidents, the perpetrators were apprehended and the explosives recovered. However, one of these incidents did not appear in ATF’s nationwide database of reported thefts and missing explosives. We also observed storage practices at four facilities that may not be in compliance with federal explosives regulations. However, these circumstances appeared to be related to storage safety issues, rather than storage security. In April 2005, the National Bomb Squad Commanders Advisory Board—which represents more than 450 law enforcement bomb squads nationwide—initiated a program encouraging bomb squads to request a voluntary ATF inspection, maintain an accurate explosives inventory, and assess the adequacy of security at their explosive storage facilities to determine if additional measures might be required (such as video monitoring, fencing, and alarms). This is a voluntary program, and it is too soon to tell what effect, if any, it will have toward enhancing security at state and local law enforcement storage facilities and reducing the potential for thefts. 
The overall number of state and local government explosives storage facilities, the types of explosives being stored, and the number of storage magazines associated with these facilities are currently not known by ATF. ATF has no authority to oversee state and local government storage facilities as part of the federal licensing process, nor does it have specific statutory authority to conduct regulatory inspections of these facilities. As a result, ATF’s ability to monitor the potential vulnerability of these facilities to theft or assess the extent to which these facilities are in compliance with federal explosives storage regulations is limited. According to ATF’s interpretation of federal explosives laws and regulations, state and local government agencies—including law enforcement bomb squads and public universities—are required to report incidents of theft or missing explosives to ATF within 24 hours of an occurrence. Because this reporting requirement applies to any “person” who has knowledge of a theft from his stock and the definition of “person” does not specifically include state and local government agencies, ATF officials acknowledged that these entities may be unsure as to whether they are required to report under this requirement. If state and local government entities are unsure about whether they are required to report thefts and missing explosives, ATF’s ability to monitor these incidents and take appropriate investigative action may be compromised by a potential lack of information. Further, the size of the theft problem, and thus the risk, at state and local government storage facilities will remain unclear. 
To allow ATF to better monitor and respond to incidents of missing or stolen explosives, the report we are releasing at this hearing recommends that the Attorney General direct the ATF Director to clarify the explosives incident reporting regulations to help ensure that all persons and entities who store explosives, including state and local government agencies, understand their obligation to report all thefts or missing explosives to ATF within 24 hours of an occurrence. The Department of Justice agreed with our recommendation and said it would take steps to implement it.

Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or members of the subcommittee may have.

For information about this testimony, please contact Laurie E. Ekstrand, Director, Homeland Security and Justice Issues, at (202) 512-8777, or EkstrandL@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Other individuals making key contributions to this testimony include William Crocker, Assistant Director; Philip Caramia; and Michael Harmond.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
More than 5.5 billion pounds of explosives are used each year in the United States by private sector companies and government entities. The Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) has authority to regulate explosives and to license privately owned explosives storage facilities. After the July 2004 theft of several hundred pounds of explosives from a local government storage facility, concerns arose about vulnerability to theft. This testimony provides information about (1) the extent of explosives thefts from state and local government facilities, (2) ATF's authority to regulate and oversee state and local government storage facilities, and (3) security measures in place at selected state and local government storage facilities. This information is based on a report GAO is releasing today on these issues. Judging from available ATF data, there have been few thefts of explosives from state and local government storage facilities. From January 2002 to February 2005, ATF received 9 reports of thefts or missing explosives from state and local facilities, compared with a total of 205 explosives thefts reported from all sources nationwide during this same period. During the course of the audit, GAO found evidence of 5 thefts from state and local government facilities, 1 of which did not appear in ATF's national database of thefts and missing explosives. Thus, the actual number of thefts occurring at state and local facilities could be higher than that identified by ATF data. ATF has no authority to oversee or inspect state and local government explosives storage facilities. State and local agencies are not required to obtain a license from ATF to use and store explosives, and only licensees--such as private sector explosives storage facilities--are subject to mandatory oversight. Thus, ATF has no means to ensure that state and local facilities comply with federal regulations. 
Further, ATF does not collect nationwide information on the number and location of state and local storage facilities, nor does the agency know the types and amounts of explosives being stored in these facilities. Because this data collection is a function of the licensing process and state and local facilities are not required to be licensed, no systematic information about these facilities is collected. By comparison, all licensed private sector facilities must submit a variety of information about their facilities--including location and security measures in place--to ATF during the licensing process. ATF also collects information about these facilities during mandatory inspections. At the 18 state and local government storage facilities GAO visited, a variety of security measures were in place, including locked gates, fencing, patrols, and in some cases electronic surveillance. All the facilities' officials told GAO that they conducted routine inventories. But most of the state and local government entities GAO visited were not required to be licensed or inspected by state or local regulatory agencies. GAO identified several instances of possible noncompliance with federal regulations, but these were related primarily to storage safety issues rather than security.
GPRAMA requires OMB to make publicly available, on a central government-wide website, a list of all federal programs identified by agencies. For each program, each agency is to provide to OMB for publication an identification of how the agency defines the term “program,” consistent with OMB guidance, including program activities that were aggregated, disaggregated, or consolidated to be considered a program by the agency; a description of the purposes of the program and how the program contributes to the agency’s mission and goals; and an identification of funding for the current fiscal year and the previous 2 fiscal years. In addition, GPRAMA requires OMB to issue guidance to ensure that the information provided on the website presents a coherent picture of all federal programs. In August 2012, OMB issued guidance for implementation of the inventory requirements through a phased approach for the 24 agencies subject to the CFO Act. OMB subsequently published 24 separate inventory documents on Performance.gov in May 2013, wherein agencies were to select an approach for identifying programs and provide funding and performance information for the programs identified. For the second phase, originally planned for publication in May 2014, the 24 agencies were to update their inventories based on any stakeholder feedback they received and provide additional program-level funding and performance information. OMB’s guidance also stated that, at that time, the inventory information was to be presented in a more dynamic, web-based approach. However, agencies did not publish updated inventories in May 2014. In October 2014, we reported that, according to OMB officials, plans for updating the inventories were on indefinite hold as OMB re-evaluated next steps for what type of information would be presented in the inventories and how it would be presented. 
OMB staff were considering how implementation of the expanded reporting requirements for federal spending information under the DATA Act could be tied to the program inventories. As of July 2017, OMB had not provided a timeline or plan for the next iteration of the federal program inventory. In our 2014 assessment of the executive branch’s initial effort to develop a program inventory, we found that the usefulness of the 24 agency inventories was limited. Agencies had the flexibility to identify their programs using different approaches within the broad definition of what constitutes a program, which—while potentially appropriate for individual agencies—limited the comparability of information across the inventory. Further, we found that the agencies did not work together or consult with stakeholders. We also found that none of the agencies provided the necessary budget and performance information. Without performance information, it was unclear how programs supported various agency goals. We also determined that for the federal program inventory to be useful it must be accurate, complete, consistent, reliable, and valid, among other factors. We recommended a number of specific steps OMB and agencies could take to ensure the inventories are more useful to decision makers, such as providing complete performance information (including performance goals), consulting with stakeholders, and ensuring that information in the inventory is comparable within and across agencies. As mentioned above, OMB staff generally agreed with these recommendations, although they did not comment on three of our recommendations related to including tax expenditures and additional performance information. The principles and practices of information architecture—a discipline focused on how information is organized, structured, and presented to users—may offer an overarching approach for developing a useful federal program inventory. 
There are three key concepts in information architecture that are relevant to the development of a federal program inventory—facet, controlled vocabulary, and taxonomy. Table 1 defines these terms and provides examples of what they mean within the context of a federal program inventory. Decision rules provide consistency in how programs are included, in the application of the controlled vocabulary, and in the collection of program information in facets. Information architecture can be visualized as a process to identify and define needed information, develop a structure for organizing and presenting it, and ensure that standards are met and maintained. These steps may not be purely sequential, but may be iterative as the inventory is developed, evaluated, and maintained. Based on the principles of information architecture, figure 1 provides a conceptual overview of this potential process for developing a federal program inventory. Each of these steps is described more fully in the sections following. As program information in the inventory is collected and organized into facets, it can be aggregated or disaggregated to facilitate various uses. Facets—and the information or data collected within them—can be structured to allow for searching, grouping, or other functions. Individual facets could describe program characteristics or operations or could relate to budgeting or performance information, among other things. Within these facets, specific information or data would be reported such as program type, specific agency or office names, or budget data. By organizing information according to facets, programs can be identified, grouped, or organized based on certain characteristics, such as the information or data collected within the facet. For example, if a program facet on beneficiaries existed, then potentially all programs that serve the same types of beneficiaries could be identified within the inventory. 
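The faceted grouping described above can be sketched in a few lines of code. This is an illustrative sketch only: the program names and facet values below are hypothetical stand-ins, and Python is used purely for illustration since the inventory itself prescribes no implementation.

```python
from collections import defaultdict

# Hypothetical program records; each key is a facet, and each value is the
# controlled-vocabulary term or data recorded within that facet.
programs = [
    {"name": "Rural Transit Grants", "type": "grant", "beneficiaries": "rural"},
    {"name": "Urban Mobility Program", "type": "grant", "beneficiaries": "urban"},
    {"name": "Rural Broadband Loans", "type": "loan", "beneficiaries": "rural"},
]

def group_by_facet(records, facet):
    """Group program names by the value stored in a given facet."""
    groups = defaultdict(list)
    for record in records:
        groups[record[facet]].append(record["name"])
    return dict(groups)

# All programs serving the same type of beneficiary can be identified at once.
by_beneficiary = group_by_facet(programs, "beneficiaries")
```

Grouping on a different facet, such as "type", would instead surface all grant programs or all loan programs, which is the kind of cross-program comparison the facet structure is meant to enable.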
In the following sections we provide examples of how these principles and practices can be applied to federal programs. However, we developed these examples for illustrative purposes only; they are not necessarily items that should be included in the inventory. The federal program inventory is intended to improve transparency over federal programs. There can be many specific uses for the inventory to support this purpose, and input from stakeholders—such as Congress, agency officials, state and local governments, third-party service providers, and the public—can assist in establishing these potential uses. Early stakeholder involvement can also guide efforts to determine what programs and program information should be included so that the inventory is more likely to meet stakeholder needs over time. We have reported that a federal program inventory including performance information could be used by congressional decision makers to identify issues that the federal government should address, to measure progress, and to identify better strategies, if necessary, among other uses. State officials we interviewed from three states that have developed or are developing program inventories pointed to similar potential uses. For example, in Vermont a senior performance management official said the state’s inventory could be used to facilitate cross-agency coordination, aid government transitions (such as for newly elected legislators), and link program performance with funding. Likewise, an Arizona official told us the state’s program inventory has the potential to show how agency programs align with performance goals. Michigan officials anticipate that the program inventory currently being developed in that state will have the potential to identify duplication and overlap among state programs. As a result, decision makers in that state will be better equipped to oversee the budget process. 
Our prior work highlights potential uses for different types of information that could be included in a program inventory. Performance and budgeting information—including, among other types of information, performance goals, targets, and time frames; measures of efficiency; operations, such as activities and services; and costs—could facilitate a variety of potential uses, such as helping decision makers prioritize resources among programs or identifying pressing issues for the government to address; informing congressional decisions about authorizing or reauthorizing federal programs, provisions in the tax code, and other activities; and determining the scope of the federal government’s involvement, investment, and performance in a particular area. Prioritization of some uses may be important to consider to make the inventory more effective. As we previously reported, consulting with stakeholders to understand their needs would better ensure that the information provided in the inventories is useful for stakeholder decision making. Such prioritization, for example, could also involve examining costs that agencies might face in collecting information for certain facets. Then decisions could be made to select only a subset of all potential facets for inclusion in early iterations of the inventory. Tax expenditures are one program type that would need to be included in the program inventory to fully implement GPRAMA. Tax expenditures represent a substantial federal commitment. If the Department of the Treasury’s estimates are summed, an estimated $1.23 trillion in federal revenue was forgone from the 169 tax expenditures reported for fiscal year 2015, an amount comparable to discretionary spending. Tax expenditures are often aimed at policy goals similar to those of federal spending programs. 
Increased transparency over tax expenditures could help determine how well specific tax expenditures work to achieve their goals and how their benefits and costs compare to those of spending programs with similar goals. In our 2014 review of the executive branch’s initial effort to develop a program inventory, we recommended OMB include tax expenditures as a program type in the federal program inventory and work with the Department of the Treasury to produce an inventory of tax expenditures. As stated previously, OMB neither agreed nor disagreed with those recommendations. Likewise, to enhance usefulness at the federal level, a program inventory can include program operations information, in addition to the budget and performance information required by GPRAMA. Program operations information can include descriptions of what programs do, whom they serve, and the specific activities they conduct. Including this type of information provides a more comprehensive picture of a program within the inventory. There are many potential benefits, including improved ability to identify, assess, and address fragmentation, overlap, and duplication within the federal government. Likewise, program operations information can provide opportunities to enhance service delivery among programs offering similar services or serving related populations. For example, programs serving low-income or transportation-disadvantaged populations could look for opportunities to facilitate access to related services by coordinating to provide transportation for these beneficiaries. One of the central tasks in creating an inventory of federal programs is to identify the programs to be included and the information to be collected about them. Information architecture practices suggest selecting information sources to compile a list of concepts and terms as part of a controlled vocabulary. 
For example, stakeholders may frequently use certain terms and concepts to describe programs and make distinctions between different types of programs that can affect the content of the inventory or the information included within it. Thus, grant programs may describe eligible beneficiaries using similar terms, such as rural and urban or youth and elderly. Once the list of concepts and terms has been compiled using a structured process for identifying key terms and concepts, preferences can be selected that best align with meeting user needs to create facets for the inventory. Potential information sources include agency budgets, budget justifications, performance reports, organizational structures, websites, and other internal documentation. Additionally, the facets that will frame information about those programs would need to be identified and defined, with OMB deciding which facets warrant the cost of collection in the short and long term with input from agencies and stakeholders, if the information architecture approach is used. According to the National Information Standards Organization (NISO), the design and development of a controlled vocabulary can help to ensure that concepts are described distinctly by eliminating ambiguity and controlling for synonyms. As a result, the use of a controlled vocabulary can help agencies identify programs and collect associated program information in facets more consistently. Differences in how agencies use terms and concepts—especially those related to “program,” “program area,” and “activity”—create challenges for an inventory, which requires consistent information to be useful. As mentioned earlier, the Glossary of Terms Used in the Federal Budget Process defines “program” generally as an organized set of activities directed toward a common purpose. 
However, variations in agency organizational structures, missions, history, and funding authorities—as well as in the purposes for which agencies create or use program information, such as budgets or performance reports—can result in differences in how agencies organize and group activities using different terms. To illustrate these differences, table 2 provides our observations on how Education, USAID, and DHS used the concepts and terms of program area (for collections of related programs), program, and activity (for more specific activities within a given program) in budget documentation. Each of these three agencies includes these concepts in their documents, but how they are organized and what they contain differs. The varied uses of these terms within and across agencies—all from agency budget documentation—illustrate one challenge of consistently using words such as “program” and the benefits of creating a controlled vocabulary that could move agencies toward a common understanding and more consistent application of these terms for an inventory. Because agencies have flexibility in deciding what activities constitute a program, an information architecture approach that would focus attention at the facet level would help make the inventory information more consistent. If consistent information is collected, then it can be more easily compared, whether or not the identification of programs is similar across agencies. Facets and the information within them can provide the structure that will allow the inventory to contain consistent information within and across agencies, aiding comparability of information. Existing guidance points to potential facets and definitions for them, including controlled vocabularies. OMB guidance, as well as requirements for the DATA Act, for example, identify and define facets related to program budget and performance, including performance goals. 
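The synonym control that NISO describes can be illustrated with a small sketch. The vocabulary entries below are hypothetical examples rather than terms drawn from any actual agency inventory; the sketch assumes a simple mapping from each preferred term to the variant terms agencies might use.

```python
# Hypothetical controlled vocabulary: each preferred term maps to the set of
# variant terms (synonyms) that agencies might use for the same concept.
CONTROLLED_VOCABULARY = {
    "youth": {"youth", "children", "minors"},
    "elderly": {"elderly", "seniors", "older adults"},
    "rural": {"rural", "non-metropolitan"},
}

def normalize(term):
    """Map an agency-supplied term to its preferred term, if one exists."""
    cleaned = term.strip().lower()
    for preferred, variants in CONTROLLED_VOCABULARY.items():
        if cleaned in variants:
            return preferred
    return None  # not yet in the vocabulary; flag for review
```

A term that normalizes to None would signal that the vocabulary needs a new entry or that the agency's usage needs review, which is how a controlled vocabulary keeps facet values consistent over time.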
Program operations information was not included in OMB’s guidance for the initial inventory; however, existing taxonomies in use by federal agencies provide examples of facets and controlled vocabularies for program operations, including how programs operate and whom they serve. For example, the CFDA provides established lists to define eligible grantees and beneficiaries and includes questions to guide agency officials in collecting this information. With the use of controlled terms, comparisons can be made across programs that serve similar populations or share program eligibility. Table 3 shows potential program operations facets and comparable information collected in the CFDA. To be included in the federal program inventory, the controlled vocabulary and corresponding definitions for facets related to program operations would be assessed against standards, as described below. With many different possible facets and the associated costs of collecting specific program information within them, OMB would need to determine priorities and time frames for required facets in consultation with agencies and stakeholders, if an information architecture approach is used. Determining relationships among selected concepts and terms can add to the usefulness of the information in the inventory. The controlled vocabulary can help to show relationships, such as if definitions of some terms refer to other terms or if programs are related. Taxonomies can bring additional structure by linking program facets with one another, promoting functionality and usefulness. Taxonomies tend to be hierarchical, but sometimes are organized in other ways. 
For example, a hierarchical structure might apply to an agency’s organizational structure in which each related facet is a subset (e.g., agency, bureau, office), and a network structure might be more appropriate for associating categories of information for which there are not specific subcategories, such as facets containing budget and performance information. Finally, decision rules that specify what collections of activities constitute a program for the purposes of the inventory help ensure consistency and comparability of information within the inventory. Program information and data could then be collected for each individual program facet. How broadly or narrowly agencies identify programs for the inventory will affect its usefulness. For example, an approach to inventory development that groups many activities under a relatively small number of program names could have limited usefulness, if it results in a low level of transparency over the full range of activities, functions, and costs that occur within that area. Conversely, an approach to an inventory that groups activities narrowly and includes a comparatively large number of programs could result in greater transparency and usefulness, but would likely create significant costs for agencies to identify, create, and maintain. Decision rules for determining what should be identified as a program for purposes of the inventory will need to balance usefulness and costs, if this approach is implemented. Further, agencies will need to consider how best to organize their activities for inclusion as programs in the inventory, which could present a challenge. The three agencies we reviewed have different organizational structures, such as strategic, programmatic, or budget structures that could be used to organize inventory programs. For example, Education generally has consistent, program-focused alignment across its organizational structures. 
DHS has historically not had as consistent a program focus across its structures—given its origins from many different agencies—but has recently more closely aligned its budget and program structures. USAID’s different structures have presented agency efforts in multiple ways, including at a country level and also at a broader, mission-focused level such as combating malaria or providing basic education. (See appendix III for more information on how these differences can affect program identification.) Decision rules will need to be established to help agencies present programs in the inventory in a way that is as consistent as possible, given these differences, which could pose different challenges across agencies. Because agency activities and structures differ, as do user needs, agencies implementing an information architecture approach would need to clearly illustrate the relationships among individual—or groupings of—activities and what is included under a designated program name. This will provide transparency over how the agency applied decision rules and what an agency included under that program, though some agencies may have greater challenges doing so. At the three agencies, we found that variations in the ease of identifying programs often reflected agency organizational structures. For example, Education’s internal organization allows for the relatively easy identification of a consistent list of programs when using appropriation accounts and program names, in part because its appropriations are set up similarly to its programs, according to agency officials. By contrast, other agency officials—including at USAID—expressed concern about linking programs to appropriations, because their programs and appropriations are not similarly structured.
Appendix III provides information on how Education’s, USAID’s, and DHS’s organizational structures might affect program identification—specifically in our case study context of identifying programs using budget documentation—including a recent DHS effort to better align its budget structure with its discretionary programs. OMB and agencies could also establish decision rules on how to treat activities and funding streams that may not be clearly linked to specific programs or provide overall administrative or mission support, in order to ensure these items are treated consistently. This can include, for example, general administration, information technology-related maintenance, and general construction. Each of the three agencies selected for our illustrative case studies had these categories of funding and expenses. For example, Education had a “Program Administration” program that accounted for over $400 million in fiscal year 2015 obligations and funded close to half of the agency’s almost 4,100 full-time employees. Education used “Program Administration” to provide administrative support across most programs and offices in the department. If an information architecture approach were used, OMB and agencies would need to determine whether and to what extent these kinds of expense categories should be identified as distinct programs for purposes of the inventory or whether they should be allocated across programs. Taxonomies can bring additional structure to an inventory by linking program facets with one another, promoting functionality and usefulness. Figure 2 shows how this can be applied to an individual program. In this example, program information for Education’s Promise Neighborhoods program is collected into potential facets related to the program’s organization, budget, performance, and operations. 
Once program information has been collected into facets for multiple programs, a taxonomy allows for the comparison of information across programs, as well as the potential to aggregate—or disaggregate—program information at an appropriate level to facilitate a variety of uses. Table 4 provides an illustration of selected programs in three federal agencies providing early learning or child care services for different age groups. In the federal program inventory, comparisons could be made across or between multiple facets. In this case, the information included within the activities/services and beneficiary facets is compared to identify programs with similar characteristics. For example, sorting programs by information included in the two facets in table 4 would reveal that the Promise Neighborhoods and the Comprehensive Literacy Development Grants (formerly Striving Readers) programs both provide early learning services and have a larger age range of children as intended beneficiaries. However, collecting program information for each facet may pose challenges for agencies. As we developed our hypothetical inventory, we found that a greater range of program information was readily available for some of the selected programs than for others—often depending on the extent to which programs were included by name in the documents we reviewed (e.g., budget documents, performance and strategic plans, and agency websites). For those programs that were included in the CFDA, for example, we were able to collect information for a number of our facets, such as functional codes that reflect program operations and coded entries for eligibility. Performance goals, including measures and targets, however, are not required in the CFDA.
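The kind of cross-facet comparison illustrated by table 4 can be sketched as a simple query over faceted program records. In the sketch below, the activity terms and age ranges are abridged and partly assumed for illustration rather than taken from the actual table:

```python
# Hypothetical faceted records for three early learning programs.
PROGRAMS = [
    {"name": "Promise Neighborhoods",
     "activities": {"early learning services"}, "ages": (0, 18)},
    {"name": "Comprehensive Literacy Development Grants",
     "activities": {"early learning services", "literacy instruction"},
     "ages": (0, 18)},
    {"name": "Child Care and Development Fund",
     "activities": {"child care services"}, "ages": (0, 5)},
]

def matching_programs(activity, low, high):
    """Names of programs offering the activity across the given age range."""
    return [p["name"] for p in PROGRAMS
            if activity in p["activities"]
            and p["ages"][0] <= low and p["ages"][1] >= high]

print(matching_programs("early learning services", 0, 18))
# ['Promise Neighborhoods', 'Comprehensive Literacy Development Grants']
```

Because every record draws its facet values from the same controlled terms, the same query works across programs from different agencies.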
Likewise, for programs that corresponded to individual program activities in the federal budget, we were able to readily identify budget and financing information, such as obligations, appropriations accounts, and related program activities, although identifying this type of budget information for other programs was sometimes more difficult. We have faced similar obstacles to collecting program information in other work. For example, we were unable to identify 39 (of 58) efforts or programs in the President’s budget by name or by funding for a recent report looking at federal efforts supporting U.S. manufacturing, although we were able to report program obligations for many but not all of them after we conducted a survey of agency officials. After identifying programs and facets and determining relevant relationships, information architecture principles suggest evaluating the taxonomy of the program inventory in several ways. For instance, an evaluation could ask how well the inventory’s structure and controlled vocabulary organize and present needed information. Further, the evaluation could involve consulting with experts or comparing with existing taxonomies and standards to ensure that all the needed terms and facets are included. Evaluation of the structure and content of the inventory can involve different methodologies, including reviewing existing standards and interviewing subject-matter experts. There are a number of sources available for standards related to the organization of information using the principles and practices of information architecture, including NISO standards for controlled vocabularies. In addition to evaluating the inventory’s taxonomy and facets, it will also be necessary to evaluate the quality of the specific program information content. This includes examining the consistency and completeness of the information that agencies report.
Consistently identifying program information in facets related to outcomes could help agencies identify where they have programs that have similar purposes or activities, and therefore opportunities to collaborate. Likewise, a complete inventory—including all federal efforts within each definition—could be a useful tool for addressing crosscutting issues. Specific aspects could include examining whether the information is accurate, consistent with the controlled vocabulary, properly formatted, and current. Including consistent and complete program information will help the inventory be more useful and allow users to better compare and contrast programs across broad areas with federal involvement, as we noted in our 2014 report. More generally, while the inventory is developed iteratively, continuously evaluating the extent to which it delivers what stakeholders need will enhance its continued usefulness. As part of this evaluation, OMB and agencies can assess the decision rules for the identification of individual programs to determine the extent to which the resulting set of programs is identified at a level that facilitates comparisons across and within agencies. This type of evaluation could lead agencies to determine that activities should be grouped together more broadly (including more activities) or more narrowly (including fewer activities) to allow for better comparisons and increased usefulness. In some cases, activities may need to be allocated differently among programs. Likewise, an evaluation could test program identification by determining if the inventory includes sufficient breadth (in terms of an agency’s total funding) and depth (at a level that is useful for decision makers). We reviewed one narrow way agencies could identify programs—using budget documentation—and found challenges to the consistency and completeness of program information (see appendix III).
These examinations can lead to improvements in the inventory over time. A well-designed inventory interface can include features to enhance the usefulness of the program information by enabling users to navigate through the content of the inventory to meet their needs. The taxonomy structure serves as the backbone and allows for the presentation of concepts, terms, and relationships dynamically. Specifically, individual facets can be used to identify potential relationships between programs and to organize information in new ways within the inventory. For example, in our hypothetical inventory the facets containing budget information for the Department of Agriculture’s Child and Adult Care Food Program can be linked to both the School Lunch and the School Breakfast programs through a common account. The ability to view these related programs gives the user more tools and information to understand how programs fit within the whole of government and relative to one another. In addition, tagging program information (e.g., attributes or characteristics) within facets in a taxonomy can help create new relationships and allow for the grouping and linking of content in new ways. For example, HHS’s Child Care and Development Fund provides child care services to children ages 5 and under through grants that also support low-income families and children with disabilities. This program could be tagged to highlight these and other attributes of the program collected in facets related to activities and services and to eligible beneficiaries. Then, a user interested in similar programs could click on a tag (e.g., early learning services) that could generate a list of programs that also have that tag. Figure 3 depicts how such an interactive tool could allow a user to identify programs with the same tagged activities. The interface could also include predesigned output formats for program information. 
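The tag-driven navigation described above can be backed by an inverted index that maps each tag to the programs carrying it, so that selecting a tag generates the list of related programs. In this sketch, the tag sets are assumptions loosely based on the program descriptions in the text, not actual inventory data:

```python
# Hypothetical sketch: build an inverted index from tag -> program names.
from collections import defaultdict

TAGGED_PROGRAMS = {
    "Child Care and Development Fund":
        {"child care services", "low-income families",
         "children with disabilities"},
    "Promise Neighborhoods": {"early learning services"},
    "Comprehensive Literacy Development Grants": {"early learning services"},
}

def build_tag_index(tagged):
    """Map each tag to the set of program names that carry it."""
    index = defaultdict(set)
    for program, tags in tagged.items():
        for tag in tags:
            index[tag].add(program)
    return index

index = build_tag_index(TAGGED_PROGRAMS)
# Clicking the "early learning services" tag would surface these programs:
print(sorted(index["early learning services"]))
# ['Comprehensive Literacy Development Grants', 'Promise Neighborhoods']
```

Rebuilding the index whenever tags change keeps the "related programs" view current without restructuring the underlying taxonomy.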
This feature could allow for the creation of program summaries for individual programs or fact sheets cross-walking certain predetermined facets, such as budget and performance information, in a user-friendly format. Figure 4 provides an illustrative example of this concept. The federal program inventory will exhibit its usability—and thus usefulness—during validation. This step tests the union of the user interface and the taxonomy, probing the inventory’s organization, structure, and general functionality in order to improve the inventory’s usefulness. Using a variety of methods to test the inventory with the intended audience can validate design and content decisions, including any assumptions made about how users interact with data. One method is to conduct usability testing by asking users to complete a series of clearly defined tasks and monitoring how they navigate the inventory. For example, users could be asked to find a term that is grouped with other terms or to find everything they can about a particular topic. Analysis of resulting data from browsing or searching the user interface—such as the number of clicks or completion times of tasks—can reveal how the presentation and grouping of terms affects the completion of user tasks. A complementary method is to interview or conduct focus groups to obtain qualitative feedback on the usability and usefulness of the inventory. To illustrate this validation methodology, we asked congressional staff to offer their perspectives on how they might use an interactive website containing a federal program inventory with search, filtering, and other navigation capabilities.
Overall, congressional staff affirmed that a searchable and sortable design with the ability to provide different levels of aggregation and disaggregation of program information would be useful for a number of tasks, including the following: informing staff quickly about programs as part of background research for various tasks; developing briefing materials for members using program information; informing congressional decisions about authorizing or reauthorizing federal programs and provisions in the tax code; answering constituent questions; identifying information related to program performance; and drawing attention to information gaps, such as when program goals or targets have not been developed. The congressional staff we interviewed also stated that having links to program evaluations, especially GAO, inspector general, and Congressional Research Service reports, would be helpful for learning about program performance, as would information tags identifying direct and indirect program activities and services. Further, we shared with congressional staff a series of illustrative examples of summary sheets containing information on a number of potential program facets, including budget, performance, and operations information. (Figure 5, presented in the previous section, is one of these examples.) These staff said the types of information and organization matched what they would expect in an inventory and would want to inform their work, although they stated the inventory could be more useful if historical information were more robust than what we included. For example, they expressed a preference for at least 5 years of budget information rather than the 3 years our hypothetical inventory provided. Some staff also emphasized the importance of having strictly defined fields, such as the program history field, in order to avoid confusion and reduce subjectivity in program information.
Such feedback can provide valuable insights into the design and content users find most important, the limitations they identify, and their satisfaction with the overall interface and program inventory. Incorporating the validation results into the design and content of the inventory would enhance usefulness by ultimately enabling users to better find the information they need. Validating the inventory and incorporating prioritized results can mitigate risks related to the opinions and assumptions that were necessary to create an initial inventory framework. Thus, the validation results can also serve as a roadmap for subsequent iterations of the inventory. Establishing and implementing a governance structure will help ensure the program inventory is continually maintained and useful. Governance specifically involves establishing the policies and procedures—including roles, accountabilities, standards, and process methodologies—for maintaining and improving the inventory. Governance policies can also set a schedule for regular assessment of the inventory to monitor how it will meet user needs over the long term. Finally, governance can ensure that the inventory continues to meet factors related to usefulness, including accuracy, completeness, consistency, reliability, and validity. Good governance requires policies that define the process for managing inventory content, maintaining and changing the taxonomy, and establishing roles and responsibilities. Policies for managing content define the conditions under which programs and program information are added, updated, and archived or deleted. These policies also define the conditions under which the elements of the information architecture—taxonomy structure, facets, and controlled vocabulary—are revisited and updated. As such, the governance policies can establish how prospective changes are evaluated and prioritized and when to make changes. Governance of the inventory will also benefit from well-defined roles and responsibilities.
This includes defining the individuals responsible for proposing and making changes to the inventory and taxonomy—both to reflect higher-level changes to the purpose of the inventory and to handle the day-to-day management of the taxonomy. In addition, implementation guidelines for each role will further clarify the expected steps by which changes to the information architecture and inventory are made. Further, governance can establish when and how processes are reviewed and updated. Documenting these roles and responsibilities will create accountability and provide a transparent process that will withstand changes in staff. Governance also includes decision rules that determine how programs are identified and information is included. In our work to develop illustrative examples of programs an inventory might include, we encountered programs with changing names and authorizations, which would require policies to ensure that consistent program information is included in the inventory and kept up to date. For example, one potential Education program has had three names since it was originally authorized: (1) Striving Readers, then (2) Striving Readers Comprehensive Literacy, and most recently (3) Comprehensive Literacy Development Grants. Each version of the program has been authorized by different statutory provisions, creating complexities in tracking program information across time and raising questions about whether it is the same program, successor programs, or three individual programs. Governance policies that include decision rules regarding whether and how to include the evolution of programs can aid the consistency and usefulness of program information over time. Leveraging existing governance policies, roles, and procedures can help to ensure that the inventory’s usefulness persists.
We have previously reported that establishing a formal framework for providing data governance throughout the lifecycle of developing and implementing standards is key for ensuring that the integrity of data standards is maintained over time. There are a number of governance models, and many of them promote a set of common principles that includes clear policies and procedures for broad-based participation from a cross-section of stakeholders for managing the standard-setting process and for controlling the integrity of established standards. Ideally, a governance structure could include processes for evaluating, coordinating, approving, and implementing changes in standards from the initial concept through design, implementation, testing, and release. It would also address how established standards are maintained and ensure that a reasonable degree of agreement from stakeholders is gained. We provided a draft of this report for review and comment to the Director of the Office of Management and Budget (OMB), the Departments of Education and Homeland Security, the U.S. Agency for International Development (USAID), and the General Services Administration. USAID provided technical corrections, which we incorporated as appropriate. OMB agreed to consider this information architecture approach as it develops plans for the next iteration of the federal program inventory. We are sending copies of this report to the Director of OMB, the Secretaries of the Departments of Education and Homeland Security, and the Administrators of USAID and the General Services Administration, as well as interested congressional committees and other interested parties. This report will also be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or CurdaE@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of our report.
Key contributors to this report are listed in appendix V. The GPRA Modernization Act of 2010 (GPRAMA) includes a statutory provision for us to periodically evaluate and report on (1) how implementation of the act is affecting performance management at the 24 major departments and agencies subject to the Chief Financial Officers (CFO) Act of 1990, including whether performance management is being used to improve efficiency and effectiveness of agency programs; and (2) crosscutting goal implementation. This report is part of our response to that mandate. GPRAMA requires the Office of Management and Budget (OMB) to present a coherent picture of all federal programs by making information about each program available on a website. For this report, we examined how the principles and practices of information architecture can be applied for the development of a useful federal program inventory. Programs are defined in our September 2005 Glossary of Terms Used in the Federal Budget Process as “generally, an organized set of activities directed toward a common purpose or goal that an agency undertakes or proposes to carry out its responsibilities.” A federal program inventory consists of the individual programs identified and information collected about each of them. As we have reported, the usefulness of a federal program inventory depends on factors such as accuracy, completeness, consistency, reliability, and validity, among others. Our methodology involved describing the general steps that could guide the development of a useful federal program inventory using an information architecture approach, as well as assessing how the principles and practices of information architecture could be used both to identify programs and to identify, compile, and organize information within an inventory. 
This report is not meant to suggest requirements or best practices for developing the federal program inventory, but rather to illustrate how a particular approach could be applied to develop a useful federal program inventory. Other approaches might also be used—or could be incorporated into this framework—to develop an inventory that best addresses limitations identified in the past. To understand information architecture, we reviewed industry standards, website standards, conference and training materials, books, and leading practices. We then examined how information architecture principles can be used to create a useful federal program inventory that aligns with GPRAMA’s requirements that a website present a coherent picture of all federal programs, as well as with federal website guidelines related to providing usable information (i.e., usability). This analysis included the following steps: qualitatively analyzing the information architecture literature and interviewing information architecture practitioners to identify overarching principles; reviewing federal requirements for the usability of websites and digital services as summarized at digitalgov.gov to identify those guidelines that relate directly to one or more characteristics of a useful federal program inventory (i.e., information that is accurate, complete, consistent, reliable, and valid for its intended use); and comparing the federal policy topics we identified to the overarching principles in information architecture, and aligning information architecture principles with digitalgov.gov guidelines. To gain an understanding of the intended purpose of and potential uses for a federal program inventory, we reviewed requirements in GPRAMA, as well as OMB’s guidance for the executive branch’s initial program inventory effort and our assessment of that effort. 
We also interviewed current and former federal officials who were knowledgeable about prior efforts to inventory or otherwise consolidate and make publicly available information about federal programs. We reviewed state websites describing state experiences in developing program inventories to understand practices for inventorying program information at the state level, and we interviewed budget and performance officials in three states that have or are developing program inventories to understand the information contained in these inventories and its potential uses. To understand how programs could be identified and how information within the inventory could be identified, compiled, and organized, we selected individual agencies and programs to examine as case studies. As part of our effort to apply relevant principles and practices of information architecture to program identification, we developed a set of observations on using budget-related resources to identify programs that could be included in a federal program inventory. See appendix II for a summary of these observations. Specifically, we examined budget-related information resources, including agency budget justification documents and program activity data. We used budget information because most agencies followed a similar approach for their initial inventories and because GPRAMA requires inventories to include budget information for programs included in the inventory. We selected three agencies to develop these observations: the Departments of Education (Education) and Homeland Security (DHS) and the U.S. Agency for International Development (USAID). 
These agencies were drawn from the 24 agencies included in the initial executive branch effort to develop a federal program inventory and were selected based on a number of factors, including differences in their overall organizational structure, approach to the prior effort to develop an inventory, and the extent of the connection between their programs and the Catalog of Federal Domestic Assistance (CFDA). This allowed us to compare and contrast the agencies and the usefulness of budget information in those agencies to identify programs. Our analysis included the following: the extent to which consistent and complete lists of agency programs could be identified using budget-related information and the impact of the agency’s organizational structures on these lists; a review of the types of activities that could be characterized as programs within each agency and how activities are grouped into programs or overarching program areas with underlying programs; and a general review of the alignment of possible programs identified by budget documents with other ways agencies organize their efforts, such as performance reports and CFDA programs. We also reviewed the relationship between the budget’s program activity data for the 24 agencies included in the initial executive branch effort and programs listed in the CFDA to obtain more insights into the different contexts in which agencies identify agency programs and present program information. We attempted to determine the extent to which CFDA programs were aligned with budget program activities by identifying as a possible match any specific CFDA program that was similar in title or funding amount to specific program activities that shared an appropriations account number. The CFDA is a key resource to identify domestic assistance programs. While not all agency programs would be included in the CFDA, agencies submit programs for inclusion, so they have in essence identified those as programs.
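The matching heuristic described above can be sketched in code. Everything in this sketch is an invented assumption for illustration: the account numbers, titles, and dollar amounts are made up, and the similarity and funding thresholds are arbitrary stand-ins for the judgment applied in our actual review.

```python
# Rough sketch: flag a CFDA program and a budget program activity as a
# possible match when they share an appropriations account and have a
# similar title or a similar funding amount.
from difflib import SequenceMatcher

def similar_title(a, b, threshold=0.8):
    """Treat titles as similar when their character-overlap ratio is high."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def possible_match(cfda_program, program_activity):
    """Shared account plus similar title or funding (within 5 percent)."""
    if cfda_program["account"] != program_activity["account"]:
        return False
    bigger = max(cfda_program["dollars"], program_activity["dollars"])
    close_funding = abs(cfda_program["dollars"]
                        - program_activity["dollars"]) <= 0.05 * bigger
    return (similar_title(cfda_program["title"], program_activity["title"])
            or close_funding)

cfda = {"account": "91-0900", "title": "Promise Neighborhoods",
        "dollars": 73_254_000}
activity = {"account": "91-0900", "title": "Promise neighborhoods",
            "dollars": 73_254_000}
print(possible_match(cfda, activity))  # True
```

A rule like this only surfaces candidates; as the scenarios described below show, one program activity can map to several CFDA programs (and vice versa), so candidate matches still require review.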
For this analysis, we identified the total number of CFDA programs for selected agencies and the number matched to a program activity listed in the federal budget. In a more in-depth analysis of Education’s program activities, we also identified scenarios where (1) program activities were a one-to-one match in name and dollars with information in the CFDA, (2) one program activity matched with a number of CFDA programs, (3) multiple program activities funded what the agency called a single program in the CFDA, and (4) there was no match. As part of our effort to apply relevant principles and practices from information architecture to identify, compile, and organize information about federal programs, we identified information that can be included in a useful federal program inventory, tested the collection and organization of that information by developing a hypothetical inventory with selected programs, and used the hypothetical inventory to illustrate aspects of how a federal program inventory could be validated using an information architecture approach. Finally, we looked to our prior work to identify relevant practices in information and data governance. More specifically, our analysis included the following: Identifying needed information: We identified the types of information about programs that could be included in an inventory to make it useful (e.g., budget, performance, and operations information) by examining OMB and GAO guidance for developing program lists, including OMB’s guidance for the first inventory effort; examining state efforts to develop and use inventories; interviewing potential users; and summarizing examples of the types of program information that have been identified in our past work as being useful. Developing definitions and a controlled vocabulary: In order to develop our hypothetical inventory, we identified needed information (i.e., facets) to include, as well as definitions for these facets (the “controlled vocabulary”). 
To identify and define terms that were not included in the initial executive branch’s inventory, we looked at other taxonomies or guidance, including controlled vocabularies used by the Congressional Research Service and by the Education Resources Information Center (ERIC), as well as CFDA guidance for agency officials, OMB guidance on the reporting of performance goals, our Glossary of Terms Used in the Federal Budget Process, and the Digital Accountability and Transparency Act and related OMB guidance. Selecting individual programs for our hypothetical inventory: We collected program information for a number of individual programs from a variety of sources, including: (1) lists of budget program activities for Education, DHS, and USAID; (2) programs or efforts identified as part of our recent efforts to look at programs with common activities, services, beneficiaries, or outcomes; and (3) our efforts to examine tax expenditure programs. To ensure our set of illustrative programs included a range of programs, we selected programs with certain characteristics (e.g., program size in terms of budget, agency, and availability of information). When multiple programs were available based on certain attributes, we used a simple random selection to choose specific programs to include in our set of illustrative programs. Collecting program information: For selected individual programs, we collected program information from budget and performance documents, as well as from agency websites and the CFDA. For programs selected from our existing work, we also leveraged reported information, and we collected information about any challenges related to identifying programs or collecting program information.
Developing a hypothetical inventory: We tested the development of a hypothetical inventory for six programs drawn from a recent report on early education and child care programs by including individual facets in an online taxonomy to demonstrate how information could be sorted by facets and presented in different ways. Validating the form and content of the hypothetical inventory: To illustrate aspects of the validation step in the information architecture approach to developing an inventory, we developed sample materials to illustrate what the content and structure of an inventory might include, and we presented these materials to congressional staffers from committees overseeing programs providing early education and child care services (i.e., the Senate Committee on Health, Education, Labor, and Pensions and the House Committee on Education and the Workforce). We solicited feedback on the form and content of our hypothetical inventory and collected information regarding potential uses and related needed information. We conducted this performance audit from July 2016 to September 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Recommendations for the Office of Management and Budget from GAO-15-83: Government Efficiency and Effectiveness: Inconsistent Definitions and Information Limit the Usefulness of Federal Program Inventories (Oct. 31, 2014)

We examined budget documentation to identify possible programs in three agencies: the Departments of Education (Education) and Homeland Security (DHS) and the U.S. Agency for International Development (USAID).
To identify programs, we reviewed budget documentation as an illustrative starting point for several reasons. First, most agencies used budget information to help structure their initial inventories. Second, the GPRA Modernization Act of 2010 requires specific budget information to be included in an inventory. Third, agencies have significant budget-related information, such as congressional budget justifications, as well as the federal budget’s appropriations and program activity accounts. We did not conduct a full evaluation of all of the ways agencies could identify programs—or all of the characteristics needed for program identification to be useful—as we did not build a full inventory and such an evaluation was outside of our scope. Education, DHS, and USAID budget documents all provided information to identify possible programs. Education’s budget documentation had a specific list of programs, organized by program area, with a separate cross-walk between office, program, and statutory authority, allowing for straightforward identification. Both DHS and USAID documentation identified broader areas of effort and named dozens of programs. However, the set of programs we generated from budget documentation illustrated the challenge of identifying programs for an inventory. We evaluated the set of possible programs for consistency and completeness because we have previously reported on the importance of those characteristics, among others, in making an inventory useful. We found differences across agencies in the extent to which their budget documents could generate consistent and complete lists of programs, which could result from different organizational structures. Specifically, budget documents did not always present agency programs consistently with other agency resources, such as performance documents or agencies’ congressional appropriations. 
Further, budget documentation did not always allow for the complete identification of programs for the inventory in terms of depth (at a level that would be sufficiently useful for decision making), although it did have complete breadth (as it contained each agency’s total funding). Overall, Education’s closer alignment between budget, program, and other organizational structures generally made the identification of programs with budget documents more consistent and presented fewer challenges. Education’s budget, appropriation, program, and performance structures were all similar, and information about programs was presented consistently across information resources such as appropriation, performance, and Catalog of Federal Domestic Assistance (CFDA) information. Further, Education presented programs with a set of activities at a level that could be useful for decision making in an inventory. For example, Education’s Promise Neighborhoods program supported awards to local partnerships to develop and implement comprehensive, neighborhood-based plans for meeting the cradle-to-career educational, health, and social service needs of children in high-poverty communities. Although there were a number of separate activities within the program, its neighborhood focus presented information that could help decision makers evaluate the activities as a group in light of that focus. Education’s hierarchical structure generally allowed for a clear identification of relationships between agency offices, program areas, and individual programs. Also, other Education documentation presented a specific program list by administrative office and provided a cross-walk between those programs and agency goals, which made it easier to understand how specific programs contribute to the achievement of those goals. 
By contrast, USAID budget documents presented greater challenges in identifying possible programs, as its documents were less aligned with a specific program structure and offered less consistent and precise identification. USAID did not have a specific, complete set of programs in budget documents. Rather, USAID identified specific funding accounts and included some highlights—but not systematic information—about more specific efforts. For example, USAID’s budget documentation included the broad objective Peacekeeping Operations, which had a number of highlighted efforts, including the following: Global Peace Operations Initiative ($71 million): supports U.S. contributions to international peacekeeping capacity building by providing training and equipment, as well as supporting deployment of troops and evaluations of effectiveness. South Sudan ($36 million): supports rebuilding the military and support for the Sudan People’s Liberation Army, including training and non-lethal equipment. Multinational Force and Observers ($28 million): supports efforts to supervise the implementation of security provisions of the Egyptian–Israeli Peace Treaty. As part of USAID’s Performance Report within its 2015 budget justification, the agency also presented information by program area. For example, its Peace and Security objective included six program areas: (1) Counter-Terrorism; (2) Combating Weapons of Mass Destruction; (3) Stabilization Operations and Security Sector Reform; (4) Counter Narcotics; (5) Transnational Crime; and (6) Conflict Mitigation and Reconciliation. USAID provided information about its work in these areas but had no specific list of its programs. USAID officials noted that the program areas listed in the justification broadly align with the program areas set forth in the Department of State’s Standardized Program Structure and Definitions. USAID creates its budget justification jointly with the Department of State. 
USAID had different structures across its congressional budget justification, performance structures, and program activities, though its budget justification presented a high-level funding crosswalk between its budget and performance structures. Further, USAID presented its efforts at an individual award level online, including at foreignassistance.gov, but that may be too narrow a level to be useful for decision making when included in an inventory. The range of structures and ways to present information on the activities of USAID provides transparency and accountability on how agency funds are being used. However, we observed that this flexibility and range of presentation methods made it challenging to use budget documents to identify specific programs that could be included in an inventory, if that inventory were intended to link specific programs and appropriation amounts. USAID officials noted that the agency had specific definitions of program area, program, and activity in documents other than the agency’s congressional budget justification. Like USAID, DHS did not present a comprehensive program list in its budget documents: DHS had disparate budget, program, and other agency structures, in its case born of the parts of different agencies that combined to create DHS, according to agency officials. However, DHS recently aligned its discretionary programs with a standardized budget structure and now has greater similarities across its budget justifications and appropriations structure. Specifically, DHS established four standard budget categories to be used by all of its mission components: (1) Research and Development; (2) Procurement, Construction, and Improvements; (3) Operations and Support; and (4) Federal Assistance. DHS defined each budget category and created guidance on what activities would typically be included. 
DHS then created six subcategories—at the level of the budget’s program, project, and activity account—for more specific funding areas where applicable. DHS also approved other individual categories that better reflect the components’ distinct missions. For instance, in the Federal Emergency Management Agency, DHS used the standard subcategory of Mission Support along with the individual subcategory of Preparedness and Protection. Moreover, DHS added more specific information to the budget program activity for its 2018 budget justification, below the program, project, and activity account level. These additions could provide more insight into what DHS considered programs and more consistently link the budget with program information, which could help provide better information at a level useful for decision making in an inventory. To obtain insights into the different contexts in which agencies identify programs and present program information, we compared 24 agencies’ programs listed in the Catalog of Federal Domestic Assistance (CFDA)—a key resource to identify domestic assistance programs—and their program activity information from the agencies’ budgets. Overall, we observed that the CFDA and budget program activities listings could be helpful in supporting the development of a federal program inventory. The extent of their usefulness will vary by agency, in part because agencies we spoke to did not view the CFDA as fully consistent with their programs. Based on our analysis, neither resource would be satisfactory for creating a definitive list of programs for any agency for purposes of an inventory. Not all agency programs would be included in the CFDA since the purpose of the catalog is to assist potential applicants in identifying and obtaining general information about domestic assistance programs. 
However, with over 2,000 programs included and information about each of those programs, the CFDA could serve as a valuable resource in efforts to develop a federal program inventory and collect program information. Using a text analytics methodology that compared the names and funding amounts between CFDA programs and the budget’s program activities, we attempted to determine the extent to which CFDA programs were clearly aligned with budget program activities. We observed that the relationships between CFDA programs and program activities within the same appropriation account varied significantly by agency but overall were unclear for all agencies (see table 6). We also observed several different types of relationships between CFDA programs and agency program activities. Figure 6 presents illustrative examples of the different relationships that CFDA programs might have with specific program activities. These complex and uncertain relationships could affect some of the matched program numbers in table 5 (above) because multiple budget program activities or multiple CFDA programs could be matched. In addition to the above contact, Brian James (Assistant Director), Molly Laster (Analyst in Charge), Andrew Nelson, and Michelle Serfass made key contributions to this report. Leia Dickerson, Steven Flint, Hedieh Fusfield, Ellen T. Grady, Benjamin T. Licht, Drew Long, Steven Putansu, Robert Robinson, A.J. Stephens, James Sweetman, and John Yee also contributed.
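A name-and-amount comparison of the kind described in this appendix could be sketched as follows. This is a minimal illustration only: the program names, dollar amounts, similarity threshold, and amount tolerance are invented assumptions, not the methodology or data GAO actually used.

```python
# Illustrative sketch of matching CFDA program entries to budget program
# activities by name similarity or funding amount. All names, amounts,
# and thresholds below are hypothetical.
from difflib import SequenceMatcher

cfda_programs = [
    ("Promise Neighborhoods", 73.3),
    ("Adult Education - Basic Grants to States", 581.9),
]
program_activities = [
    ("Promise neighborhoods", 73.3),
    ("Adult basic and literacy education State grants", 581.9),
    ("Program administration", 430.0),
]

def match(cfda, activities, name_threshold=0.8, amount_tolerance=0.05):
    """Pair each CFDA program with activities whose names are similar
    or whose funding amounts agree within a relative tolerance."""
    results = {}
    for cfda_name, cfda_amount in cfda:
        matches = []
        for act_name, act_amount in activities:
            name_score = SequenceMatcher(
                None, cfda_name.lower(), act_name.lower()).ratio()
            amount_close = abs(act_amount - cfda_amount) <= (
                amount_tolerance * cfda_amount)
            if name_score >= name_threshold or amount_close:
                matches.append(act_name)
        results[cfda_name] = matches
    return results

pairs = match(cfda_programs, program_activities)
```

A result such as `pairs` would then show which CFDA programs map to one, several, or no program activities, the kinds of one-to-one and one-to-many relationships illustrated in figure 6.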
Each year the federal government spends trillions of dollars through dozens of agencies and thousands of federal programs. Given its sheer size and scope, providing a clear and complete picture of what the federal government does and how much it costs has been a challenge in the absence of a comprehensive resource describing these programs. The GPRA Modernization Act of 2010 (GPRAMA) requires the Office of Management and Budget (OMB) to present a coherent picture of all federal programs by making information about each program available on a website to enhance the transparency of federal government programs. Congress included a provision in GPRAMA for GAO to review the implementation of the act. GAO has chosen to conduct this study now because OMB has not yet developed an inventory that meets GPRAMA requirements. For this report, GAO addresses how one potential approach for organizing and structuring information—the principles and practices of information architecture—can be applied to develop a useful federal program inventory. To present illustrative examples of what programs and program information could be included in an inventory, GAO examined budget, performance, and other resources that could be used to develop an inventory. These examples were also used to illustrate the potential content and structure of an inventory and to identify any challenges. GAO is not making recommendations in this report. We provided a draft of this report for review and comment to the Director of OMB, the Departments of Education and Homeland Security, the U.S. Agency for International Development (USAID), and the General Services Administration. USAID provided technical corrections, which GAO incorporated as appropriate. OMB agreed to consider this information architecture approach as it develops plans for the next iteration of the federal program inventory. 
A useful federal program inventory would consist of all programs identified, information about each program, and the organizational structure of the programs and information about them. The principles and practices of information architecture—a discipline focused on organizing and structuring information—offer an approach for developing such an inventory to support a variety of uses, including increased transparency for federal programs. GAO identified a series of iterative steps that can be used to develop an inventory and potential benefits of following this approach. GAO also identified potential challenges agencies may face in developing a full program inventory. To identify potential benefits and challenges to applying these steps, GAO developed a hypothetical inventory, focusing on three case study agencies—the Departments of Education (Education) and Homeland Security and the U.S. Agency for International Development. Potential benefits of using such an approach to develop a federal program inventory include the following: Stakeholders have the opportunity to provide input into decisions affecting the structure and content of the inventory. For example, congressional staff told GAO that an inventory with 5 years of budgetary trend data on programs would be more useful than 3 years of data. A range of information through program facets is available for cross-program comparisons, such as budget, performance, beneficiaries, and activities. An inventory creates the potential to aggregate, disaggregate, sort, and filter information across multiple program facets. For example, the figure below illustrates how program facets could be used to identify programs that provide similar services—in this case, early learning and child care services—and discover budget and other information for each of the programs identified. 
An iterative approach to development and governance of the federal program inventory can result in improvements and expansions of the inventory over time. GAO also identified potential challenges agencies may face when using this approach to develop an inventory, including the following: Challenges in determining how agencies should identify and structure their programs in an inventory will need to be addressed, including how to treat spending categories not clearly linked to specific programs, such as administrative support. This may occur because agencies vary in their missions and organizational and budget structures and in how they organize their activities. Challenges in collecting information for each program facet may occur for some agencies and programs. This may happen because a greater range of program information may be more readily available for some programs than others. GAO found that this was often dependent on the extent to which certain programs were included by name in budget documents, strategic plans, and agency websites. Challenges related to determining what should be identified as a program and the structure and content of the inventory will need to be balanced with usefulness and costs. Agencies may need to weigh the costs that they might face in collecting and reporting program facet information as they establish priorities.
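The kind of facet-based sorting and filtering described above could be sketched as follows. The programs, facet names, and budget figures in this sketch are illustrative assumptions, not data from an actual agency inventory.

```python
# Hypothetical sketch of facet-based filtering in a program inventory.
# Program records and dollar figures (in millions) are invented for
# illustration.
programs = [
    {"name": "Head Start", "agency": "HHS",
     "services": {"early learning", "child care"}, "budget_2017": 9_200},
    {"name": "Child Care and Development Fund", "agency": "HHS",
     "services": {"child care"}, "budget_2017": 5_700},
    {"name": "Promise Neighborhoods", "agency": "Education",
     "services": {"early learning", "community support"}, "budget_2017": 73},
]

def filter_by_facet(inventory, facet, value):
    """Return programs whose facet contains (or equals) the given value."""
    matches = []
    for program in inventory:
        field = program.get(facet)
        if isinstance(field, set) and value in field:
            matches.append(program)
        elif field == value:
            matches.append(program)
    return matches

# Identify programs providing early learning services, sorted by budget.
early_learning = sorted(
    filter_by_facet(programs, "services", "early learning"),
    key=lambda p: p["budget_2017"], reverse=True)
for p in early_learning:
    print(f'{p["name"]} ({p["agency"]}): ${p["budget_2017"]}M')
```

The same facet fields could support aggregation or disaggregation (for example, summing budgets across all programs sharing a service facet), which is the cross-program comparison capability the inventory is meant to provide.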
The paper-intensive process used by DOD to reimburse Army Guard soldiers for their travel expenses was not designed to handle the dramatic increase in travel vouchers since the terrorist attacks of September 11, 2001, and the subsequent military activity. The increased operational tempo resulted in backlogs in travel voucher processing as DFAS CTO struggled to keep up with both the increased volume and complexity of the travel vouchers submitted. For example, the monthly volume of travel vouchers submitted to DFAS CTO increased from fewer than 3,200 in October 2001 to over 50,000 in July 2003 and remained at levels over 30,000 through September 2004. To its credit, to address the large volume of vouchers received and the unprocessed backlog, DFAS increased its staffing by over 200 new personnel and reported an average processing time of 8 days for its part of the process in September 2004. While the inefficient, manual travel and reimbursement process may have offered some capability to process travel vouchers during periods of low activity when relatively few Army Guard members were mobilized, the current increased operational tempo has strained the process beyond its limits. As shown in figure 1, the monthly travel voucher volume has remained above 30,000 since the July 2003 peak. In addition to the rising volume, the increased complexity of the vouchers received further slowed down the process. As military activity increased for Operation Iraqi Freedom and Army Guard, Army Reserve, and active Army soldiers were preparing for duty, not all of the installations to which Army Guard soldiers were assigned had available government housing. As a result, the soldiers were housed off-post in commercial hotels or apartments. This created a number of novel situations that were not specifically addressed in regulations, as discussed later. During this time frame, DFAS CTO staffing levels were not keeping pace with the rising volume of vouchers. 
However, while DFAS CTO employed fewer than 50 personnel in October 2001, this number more than doubled by February 2003 and increased further to about 240 in June 2003, including 83 Army Guard and Army Reserve soldiers, as shown in figure 2. A DFAS CTO official told us that the office was not properly staffed to process travel vouchers at the beginning of 2003 when the volume started to increase. Inadequate staffing and the time necessary to train new staff created a backlog of travel vouchers at DFAS CTO, which ballooned to over 18,000 vouchers in March 2003. The majority of soldiers in our 10 case study units reported problems related to reimbursements for meal expenses, including late payments, underpayments, and overpayments that resulted in debts for some soldiers in excess of $10,000. For example, we estimated that about $324,000 was paid more than a year late to 120 soldiers for meal expenses based on the proportional meal rate for their locality. One individual responsible for submitting his unit’s vouchers to DFAS CTO told us that he called the process “the travel voucher lottery” because “you never knew whether, or how much, you might get paid.” These issues were caused by weaknesses in the process used to pay Army Guard travel reimbursements; the human capital practices in this area, including the lack of adequate training; and nonintegrated automated systems. Table 1 summarizes the experiences of Army Guard soldiers in 10 units. Further details on our case studies are included in our companion report. During our audit of selected travel vouchers, we identified some that were paid as much as 16 months after travel ended. Table 2 shows examples of the extent of delays experienced by soldiers in obtaining payment for travel expenses. In another instance, Army Guard soldiers called to federal duty to provide security at the Denver International Airport in early 2002 experienced significant delays in getting reimbursed for travel expenditures. 
The soldiers were provided lodging but not meals and were not authorized per diem for meals on their orders. More than a year elapsed during which the Army Guard Adjutant General with authority over the respective soldiers and Army National Guard Bureau officials worked to obtain and provide the proper authorization to reimburse all the soldiers’ travel expenses. In the interim, Army Guard soldiers experienced financial hardships. For example, one soldier’s family had to rely on the spouse’s salary to pay bills, and another’s child support payments were late or less than the minimum required payments. Deficiencies in three key areas—process, human capital, and systems— were at the core of the travel and reimbursement problems we identified. Policies and guidance, the foundation of the process for authorizing travel entitlements and reimbursements, were not always known by the mobilized soldiers nor were they well understood by local base personnel, and the authorizations were not documented on their mobilization orders or travel orders. Human capital weaknesses included a lack of leadership and oversight in addition to inadequate training. Further, the lack of systems integration and automation along with other systems deficiencies contributed significantly to the travel reimbursement problems we identified. The lack of clear procedural guidance contributed to the inaccurate, delayed, and denied travel reimbursements we identified and created problems not only for Army Guard soldiers but for numerous other personnel involved with authorizing travel entitlements. Prior to September 11, 2001, most travel guidance addressed relatively routine travel for brief periods and was not always clearly applicable to situations Army Guard soldiers encountered, particularly when they could not avail themselves of government-provided meals due to the nature of their duty assignments. 
Although the Army issued new guidance in October 2001 that was intended to address travel entitlements unique to Army and Army Guard soldiers mobilized for the war on terrorism, the guidance was not well understood. Furthermore, inappropriate policy and guidance on how to identify and pay soldiers entitled to late payment interest and fees because of late travel reimbursement meant that DOD continued to be noncompliant with TTRA. We found a number of cases in which soldiers should have been paid late payment interest and indications that thousands more may be entitled to late payment interest. We found that a key factor contributing to delays and denials of Army Guard reimbursements for out-of-pocket meal expenses was a lack of clearly defined guidance. We noted that the existing guidance (1) provided unclear eligibility criteria for reimbursement of out-of-pocket meal expenses, (2) lacked instructions for including meal entitlements on mobilization orders, and (3) contained inadequate instructions for preparing and issuing statements of nonavailability (SNA). Two primary sources of guidance used by both Army Guard soldiers and travel computation office personnel for information on travel entitlements were (1) the Army’s personnel policy guidance (PPG) for military personnel mobilized for Operations Iraqi Freedom, Enduring Freedom, and Noble Eagle; and (2) DOD’s Joint Federal Travel Regulations (JFTR). We found that both Army Guard soldiers and travel computation personnel had difficulty using these sources to find the necessary information about the rules regarding travel-related entitlements. Table 3 shows the sources of common problems related to meal expense reimbursements experienced by soldiers in our case studies. Unclear eligibility criteria. We found that guidance did not adequately address some significant conditions that entitled a soldier to reimbursement of authorized meal expenses. 
For example, although the JFTR entitled soldiers to reimbursement for meal expenses when transportation was not reasonably available between government meal facilities and place of lodging, the term “reasonably available” was not defined. The PPG directed the maximum use of installation facilities, and if not feasible, then “multi-passenger vehicles should be used” to transport soldiers to installation facilities. However, the PPG is silent regarding what constitutes adequate transportation, particularly when transportation to government meal facilities is necessary for Army Guard soldiers who cannot be housed in government facilities. As discussed in our companion report, we found disagreements between the soldiers and their command officials about the adequacy of transportation to government meal facilities and their entitlement to get reimbursed for eating at commercial facilities closer to their lodgings. Without clear guidance on these issues, Army decisions will continue to appear arbitrary and unfair to soldiers. Lack of specific entitlements on orders. Army and Army Guard policies and procedures do not provide for mobilization orders issued to Army Guard soldiers to clearly state that these soldiers should not be required to pay for meals provided to them at government dining facilities. As a result, we noted instances in which mobilized soldiers arrived at government mess halls carrying mobilization orders that did not specifically state that the soldiers could eat free of charge and were inappropriately required to pay for their meals. Consequently, many Guard soldiers were unable to obtain reimbursement for their out-of-pocket costs in a timely manner. The PPG states, “TCS soldiers who are on government installations with dining facilities are directed to use mess facilities. 
These soldiers are not required to pay for their meals.” In addition, the PPG states, “Basic Allowance for Subsistence will not be reduced when government mess is used for soldiers in a contingency operation.” As such, an Army Guard soldier called to active service is entitled to eat at a government mess hall without charge and concurrently entitled to receive BAS as part of his military pay. However, the PPG does not provide guidance addressing the content of mobilization orders for Army Guard soldiers with respect to meal entitlements. In response to questions we posed to officials representing the Mississippi Adjutant General’s office regarding why mobilization orders did not include adequate provisions about food entitlements, they explained that the individual mobilization orders that are prepared by the Adjutant General’s staff are very basic and include only the travel allowances and actions that are necessary to get the individual from the home station to the mobilization station. The Adjutant General’s office received no guidance on what should be stated in the orders with respect to soldiers eating free of charge at government installations or any other conditions that may entitle Army Guard soldiers to per diem to compensate them for their out-of-pocket meal costs. In addition, our companion report provides examples where Army officials were not always aware that Army Guard soldiers called to active duty were entitled to BAS in addition to meal entitlements while they were serving under mobilization orders or temporary change of station (TCS) orders. Confusing, nonstandard SNAs. Lack of standardization and changing guidance have resulted in SNAs of varying form and content, signed by officials at different levels of authority. Consequently, travel computation office reviewers were unable to consistently determine the validity of SNAs. 
Our work identified travel computation office reviewers who rejected soldiers’ requests for reimbursements even though they were supported by valid SNAs. The most recent PPG guidance authorizes the installation commander to determine whether to issue an SNA based on each unit’s situation and the availability of government housing. The guidance states that when government or government-contracted quarters are not available, soldiers will be provided certificates or SNAs for both lodging and meals to authorize per diem. However, the guidance does not specify the form and content of the SNAs. Consequently, we found that the form of the SNA and the content of the information on the form varied at the discretion of the issuing command. For example, one installation stamped the soldiers’ orders and handwrote an SNA identification number in a block provided by the stamp. Another location provided a written memo that stated that the meal component of per diem was authorized because there were no food facilities at the government installation. Another provided a single SNA with a roster attached that listed the names of the soldiers who were authorized per diem. The variety of SNA formats can cause confusion for the soldier, who does not know what documentation is needed for reimbursement and whether the travel computation office will accept it. The travel computation office personnel can also be confused about the criteria for a valid SNA. Our work found instances in which installation commands denied soldiers’ requests for SNAs. In response to our inquiries, we found that commands do not generally document their rationale for denying SNAs and there is no requirement for them to do so. This lack of documentation can leave soldiers even more confused and frustrated when seeking answers as to why their requests for per diem were denied. 
GAO’s Standards for Internal Control in the Federal Government require the maintenance of related records and appropriate documentation that provide evidence of execution of control activities. Inappropriate policy and guidance, issued by DFAS Indianapolis, combined with the lack of systems or processes designed to identify and pay late payment interest and fees, leave DOD in continued noncompliance with TTRA. As a result, through at least April 2004, DFAS Indianapolis had made no required payments of late payment interest and/or late payment fees to soldiers for travel reimbursements paid later than 30 days after the submission of a proper voucher. For example, of 139 individual vouchers we selected to determine why these took a long time to process, we identified 75 vouchers that were properly submitted by Army Guard soldiers that should have received late payment interest totaling about $1,400. In addition, DFAS data showed indications that thousands of other soldiers may be due late payment interest. For example, during the period October 1, 2001, through November 30, 2003, dates in the DFAS Operational Data Store showed that about 85,000 vouchers filed by mobilized Army Guard soldiers were paid more than 60 days after the date travel ended. If the dates on these vouchers were correct, the soldiers who submitted proper vouchers within 5 days of the date travel ended would be entitled to late payment interest if they were not paid within the 30-day limit. TTRA and federal travel regulations require the payment of a late payment fee consisting of (1) late payment interest, generally equivalent to the Prompt Payment Act Interest Rate; plus (2) a late payment fee equivalent to the late payment charge that could have been charged by the government travel card contractor. Late payment interest and fees are to be paid to soldiers if their reimbursements are not paid within 30 days of the submission of a proper voucher. 
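The 30-day rule described above can be sketched as a simple calculation. This is a hedged illustration only: the annual rate, voucher amount, and dates are invented assumptions, and the separate late payment fee component required by TTRA is omitted for brevity.

```python
# Minimal sketch of the 30-day late-payment interest rule. The rate and
# dates are illustrative; the actual Prompt Payment Act rate varies and
# TTRA also requires a separate late payment fee, not modeled here.
from datetime import date

PROMPT_PAYMENT_RATE = 0.04  # hypothetical annual rate for illustration

def late_payment_interest(voucher_amount, submitted, paid,
                          annual_rate=PROMPT_PAYMENT_RATE):
    """Return simple interest owed if payment came more than 30 days
    after submission of a proper voucher; zero otherwise."""
    days_elapsed = (paid - submitted).days
    if days_elapsed <= 30:
        return 0.0
    days_late = days_elapsed - 30
    return voucher_amount * annual_rate * days_late / 365

# A hypothetical $2,500 voucher submitted January 2 and paid May 12:
interest = late_payment_interest(2_500.00, date(2004, 1, 2), date(2004, 5, 12))
print(round(interest, 2))  # → 27.67
```

Even a check this simple illustrates why identifying late vouchers requires reliable submission and payment dates, which is why the accuracy of the dates in the DFAS Operational Data Store matters for estimating how many soldiers were owed interest.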
Although DFAS issued guidance related to TTRA in April 2003, interpretation of the guidance limited the payment of late payment interest and fees to only the final settlement travel voucher for all travel under a particular travel order. This practice contributed to continued noncompliance with the law because it effectively excluded large numbers of monthly or accrual vouchers from consideration for late payment interest and fees. As a result of our work, in May 2004 DFAS clarified that all travel voucher reimbursements are subject to late payment interest and fees. However, subsequent to DFAS’s dissemination of its May 2004 clarification guidance, we found late vouchers for which DFAS did not pay late payment interest and fees. For example, the final vouchers for 63 soldiers with the Georgia Army National Guard’s 190th Military Police Company were processed late in April 2004 without payment of late payment interest or fees, even though they were covered by DFAS guidance issued in 2003. The payments were made 81 days after the supervisory signatures, 51 days beyond the 30 days allowed for payment. We notified DFAS officials of the oversight, and they subsequently made the interest payments. With respect to human capital, we found weaknesses including (1) a lack of leadership and oversight and (2) a lack of adequate training provided to Army Guard soldiers and travel computation office examiners. GAO’s Standards for Internal Control in the Federal Government state that effective human capital practices are critical to establishing and maintaining a strong internal control environment. Specifically, management should take steps to ensure that its organization can promptly identify problems and respond to changing needs, and that appropriate human capital practices are in place and operating effectively. 
Without an overall leadership structure in place, neither the Army nor the Army Guard had developed and implemented processwide monitoring and performance metrics necessary to promptly identify and resolve problems causing late-paid travel vouchers. We also found that lack of adequate training for soldiers and newly hired DFAS CTO personnel was a contributing factor to some travel voucher processing deficiencies. No one office or individual was responsible for the end-to-end Army Guard travel reimbursement process. The lack of overall leadership and fragmented accountability precluded the development of strong overarching internal controls, particularly in the area of program monitoring. Neither the Army nor the Army Guard was systematically using performance metrics to gain agencywide insight into the nature and extent of the delays, to measure performance, and to identify and correct systemic problems. Our Standards for Internal Control in the Federal Government require agencies to have internal control procedures that include top-level reviews by management that compare actual performance to expected results and analyze significant differences. As shown in figure 3, internal reports prepared by DFAS CTO show that missing travel orders were the primary reason it did not accept vouchers for payment. DFAS CTO reported that it rejected about 104,000, or approximately 17 percent, of 609,000 vouchers during the period July 2003 through September 2004, with missing travel authorizations accounting for over half of the rejected vouchers. While this churning process appeared to be a primary factor in payment delays and soldier frustration, DFAS CTO, Army, and Army Guard offices had not performed additional research to determine the root cause of this and other voucher deficiencies.
Similarly, our analysis of a selection of individual travel vouchers also disclosed that some vouchers were returned to soldiers because of missing documentation or the lack of required signatures. However, neither DOD management officials nor we could determine the root cause of all instances of missing information. Some soldiers told us that DFAS CTO lost documentation that they had submitted. DFAS CTO also experienced problems with faxed vouchers, which caused vouchers and supporting documentation not to be printed and processed in some cases. According to a DFAS CTO official, DFAS was unaware that faxed vouchers were not printing until a soldier complained that DFAS was not receiving his faxes. DFAS did not monitor incoming faxes, even though it reported that faxed travel vouchers account for approximately 60 percent of the total mobilized Army Guard and Reserve travel vouchers it received. These problems obstructed the normal handling of a number of those vouchers. In an effort to resolve this problem, DFAS CTO, in March 2004, ceased relying on an automatic print function of the fax system software and began manually printing vouchers. As shown in figure 4, our audit of a nonrepresentative selection of 139 travel vouchers (69 computed by DFAS CTO and 70 by USPFOs) found significant delays occurred between the date of the reviewer’s signature and the date that the travel computation office accepted the voucher. Some of these delays were caused by the time needed to correct vouchers that were deficient and resubmit them to DFAS CTO or another USPFO travel computation office. We determined that the travel computation office rejected 32 of the 72 travel vouchers delayed for more than 3 days because of missing documentation or the lack of required signatures and sent them back to the soldiers for corrections. A lack of documentation or other information prevented us from determining the reason for delays of more than 3 days for the remaining travel vouchers. 
The Army’s lack of processwide oversight, including monitoring of the rejection and return of vouchers by DFAS CTO and other travel computation offices, resulted in undetected delays in reimbursement, leading to unnecessary frustration with the Army’s travel and reimbursement process and potential financial difficulty for the soldier. Further, without establishing and monitoring program metrics, management had no assurance that it had identified where the breakdowns were occurring and could not take the appropriate steps to resolve any identified problems. For example, although the Army relied on the individual unit reviewer for assurance that travel vouchers were properly reviewed and transmitted promptly to the travel computation offices, the Army did not establish and monitor performance metrics to hold these reviewers accountable for their critical role in the process. Further, although metrics were available on the average time DFAS CTO took to pay travel vouchers after receipt, the Army did not have statistical data on supplemental vouchers that could help provide additional insight into the extent and cause of processing errors or omissions by voucher examiners, unit reviewers, or Army Guard soldiers. Several of our case studies indicate that accuracy may be an important issue. For example, one method DFAS CTO uses to correct a voucher error or omission is to process a supplemental voucher. According to DFAS data, DFAS CTO processed about 251,000 vouchers related to Army Guard soldiers mobilized during the period October 1, 2001, through November 30, 2003, of which over 10,600 were supplemental vouchers. However, DFAS CTO officials could not tell us how many of these were due to errors or omissions by DFAS examiners or other factors. Our audit of 69 supplemental vouchers for the California 185th case study unit showed that 41 were due to DFAS CTO errors and the remaining 28 were due to errors or omissions on the part of the soldiers. 
Finally, we noted that although DFAS CTO established a toll-free number (1-888-332-7366) for questions related to Army Guard and Reserve contingency travel, DFAS did not have performance metrics to identify problem areas or gauge the effectiveness of this customer service effort. For example, DFAS did not systematically record the nature of the calls to the toll-free number. According to DFAS data, this number, staffed by 30 DFAS employees, received over 15,000 calls in June 2004. By monitoring the types of calls and the nature of the problems reported, important information could have been developed to help target areas where training or improved guidance may be warranted. Further, DFAS had not established performance metrics for its call takers in terms of the effectiveness of resolved cases or overall customer service. Although Army regulations specify the responsibilities of soldiers, they do not require that soldiers be trained on travel entitlements and their role in the travel reimbursement process. Some of the Army Guard soldiers that we spoke with told us that they had received either inadequate or no training on travel voucher preparation and review. In addition, a DFAS CTO official told us that the on-the-job training provided to its new personnel in early 2003 initially proved to be inadequate. Army Guard soldiers in our case studies told us that they asked DFAS representatives or used the Internet in attempts to find, interpret, and apply DFAS guidance, which by itself proved to be insufficient and required many trial-and-error attempts to properly prepare travel vouchers. As a result, many soldiers did not receive their travel payments on time. Army Guard soldiers. Army Guard soldiers in our case studies told us that they were confused about their responsibilities in the travel voucher reimbursement process because they had not been sufficiently trained in travel voucher processes related to mobilization.
For example, prior to September 11, 2001, most travel guidance addressed the criteria for single trips or sequential trips and was not always clearly applicable to situations in which Army Guard soldiers could be authorized short intervals of travel for temporary duty at different locations within their longer term mobilization. This “overlapping travel” proved to be problematic for Army Guard soldiers trying to understand their travel voucher filing requirements and for travel computation office examiners responsible for reviewing travel vouchers. In addition, we found indications that some soldiers were not aware of DOD’s requirement to complete a travel voucher within 5 days of the end of travel or the end of every 30-day period in cases of extended travel. For example, as shown in figure 5, in our selection of 139 vouchers, 99 (71 percent) of the Army Guard soldiers did not meet the 5-day requirement. Fifty-two Army Guard soldiers submitted their vouchers more than 1 year late. Of the 59 Army Guard soldiers whom we could locate and interview, 23 said that they lacked understanding about procedures or lacked knowledge or training about the filing requirements. Eight Army Guard soldiers said that they procrastinated or forgot to file their travel vouchers on time. The remaining 28 said that they could not remember anything about the specific voucher we asked about or did not respond to our inquiries. DFAS CTO personnel. DFAS CTO also had challenges training its examiner staff. The increase in mobilizations since September 11, 2001, and the resulting increase in travel voucher submissions put a strain on DFAS CTO’s ability to make prompt and accurate travel reimbursements to Army Guard soldiers. As discussed previously, DFAS CTO hired more than 200 staff from October 2001 through July 2003, which brought the total number of staff to approximately 240. These new employees were trained on the job, with training time depending on the individual and the type of work.
For example, according to a DFAS CTO official, it took from 1 to 3 months for a voucher examiner to reach established standards. The DFAS CTO official told us that, in some cases, on-the-job training proved to be inadequate and contributed to travel reimbursement errors during this period. Our work indicated that mistakes by DFAS CTO contributed to reimbursement problems. For example, our California case study indicated that 33 soldiers were initially underpaid a total of almost $25,000 for meals, lodging, and incidental expenses when personnel at DFAS CTO based travel cost calculations on an incorrect duty location and a corresponding incorrect per diem rate. Although these soldiers eventually received the amounts they were due, the corrections took months to resolve. The lack of integrated and automated systems results in the existing inefficient, paper-intensive, and error-prone travel reimbursement process. Specifically, the Army does not have automated systems for some critical Army Guard travel process functions, such as preparation of travel vouchers, SNAs, and TCS orders, which precludes the electronic sharing of data by the various travel computation offices. In addition, system design flaws impede management’s ability to comply with TTRA, analyze timeliness of travel reimbursements, and take corrective action as necessary. The DOD Task Force to Reengineer Travel stated in a January 1995 report that the travel process was inefficient because systems involved with travel authorizations were not integrated with systems involved with travel reimbursements. Similarly, as we have reported and testified, decades-old financial management problems related to the proliferation of systems, due in part to DOD components receiving and controlling their own information technology investment funding, result in the current fragmented, nonstandardized systems. 
Lacking either an integrated or effectively interfaced set of travel authorization, voucher preparation, and reimbursement systems, the Army Guard must rely on a time-consuming collection of source documents and error-prone manual entry of data into a travel voucher computation system, as shown in figure 6. For example, if the system that created the mobilization order, the Automated Fund Control Order System (AFCOS), interfaced with the travel voucher computation system, a paper copy of the mobilization order would not be necessary because it would be electronically available. In turn, a portion of Army Guard and Army Reserve vouchers returned by DFAS CTO to soldiers because of these missing orders—a significant problem as discussed previously—could have been eliminated. Further, the lack of an integrated travel system and consequent “workarounds” increase the risk of errors and create the current inefficient process. As noted previously, several separate WINIATS systems at DFAS and the USPFOs can process travel vouchers for mobilized Army Guard soldiers. These databases operate on separate local area networks that do not exchange or share data with other travel computation offices to ensure travel reimbursements have not already been paid. Instead, as shown in figure 6, multiple WINIATS systems transmit data to the DFAS Operational Data Store (ODS)—a separate database that stores disbursement transactions. As a result, when a soldier submits a voucher, voucher examiners must resort to extraction and manual review of data from ODS. Next, voucher examiners research and calculate previous payments—advances or interim payments—made by other Army WINIATS systems. This information is then manually entered into WINIATS for it to compute the correct travel reimbursement for the current claim. In addition to being time consuming, this manual workaround can also lead to mistakes. 
For example, a Michigan soldier was overpaid $1,384 when two travel computation offices paid him for travel expenses incurred during the same period in August and September 2002. This overpayment was detected by DFAS CTO when the soldier filed his final voucher in August 2003. DOD lacks an automated system for preparing travel vouchers, which hinders the travel reimbursement process. As shown in figure 6, soldiers manually prepare their paper travel vouchers, attach the many required paper travel authorizations and receipts, and submit them via mail, fax, or e-mail to one of the travel computation offices. The lack of an integrated automated system increases the risk of missing documents in voucher submissions, which results in an increased number of vouchers rejected and returned by DFAS CTO. In addition, the Army currently lacks an automated centralized system to issue uniquely numbered and standard formatted SNAs regarding housing and dining facilities for mobilized soldiers. The lack of automated centralized standard data precludes electronic linking with any voucher computation system and the reduction of paperwork for individual soldiers, as they must obtain and accumulate various paper authorizations to submit with their vouchers. Further, the Army lacks an automated system for producing TCS orders. As illustrated at the top of figure 6, the various mobilization stations use a word processing program to type and print each individual TCS order to move a soldier to such places as Afghanistan and Iraq. Similar to the process for SNAs, mobilization stations maintain separate document files for each TCS order issued. The absence of a standard automated system used by each of the mobilization stations prevents the Army from electronically sharing TCS data with other systems, such as a voucher computation system.
Consequently, the process will remain vulnerable to delays for returned voucher submissions as mobilized Army Guard soldiers continue to receive paper SNAs and TCS orders. Finally, even if the Army automates the TCS, SNA, and voucher preparation processes, as discussed previously, these new automated systems would need to be either integrated or interfaced with a voucher computation system to decrease the amount of time from initiation of travel to final settlement of travel expenses. We found that many Army Guard USPFOs did not populate key data fields in WINIATS as directed by DFAS Indianapolis. As a result, complete and accurate information was not available for a variety of management needs. For example, dates such as the voucher preparation date, supervisor review date, and the travel computation office receipt date are key in providing DOD management with the information necessary to comply with TTRA, which requires DOD to reimburse soldiers for interest and fees when travel vouchers are paid late. In addition, these dates are essential in providing management with performance information that can help DOD improve its travel reimbursement process. Our analysis of 622,821 Army Guard travel voucher transactions filed from October 1, 2001, through November 30, 2003, and processed by DFAS CTO and the USPFOs found that at least one of these key dates was not recorded in ODS for 453,351, or approximately 73 percent, of the transactions. Even in cases in which the key dates necessary to perform this evaluation were captured, incorrect entries were not detected. A WINIATS representative told us that the system was not designed with certain edit checks to detect data anomalies such as those caused by erroneous data entry.
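An edit check of the kind the WINIATS representative said the system lacked could be quite simple. The sketch below illustrates the idea; the key-date field names are hypothetical stand-ins, not actual WINIATS or ODS column names.

```python
def date_anomalies(voucher):
    """Flag missing or chronologically impossible key dates on a
    travel voucher record. Field names are illustrative only: the
    date travel ended, then voucher preparation, supervisory review,
    and travel computation office receipt, in that order."""
    required = ["travel_end", "prepared", "reviewed", "received"]
    problems = [f"missing key date: {f}" for f in required
                if voucher.get(f) is None]
    # Dates that are present must occur in processing order.
    present = [f for f in required if voucher.get(f) is not None]
    for earlier, later in zip(present, present[1:]):
        if voucher[earlier] > voucher[later]:
            problems.append(f"{later} predates {earlier}")
    return problems
```

A record in which the supervisory review date predates the date travel ended by nearly a year, like the entries we found in ODS, would be flagged at data entry rather than surfacing later in an audit.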
We found that 52 of the 191 travel vouchers filed by soldiers in our nonrepresentative selection had incorrect dates recorded in ODS (e.g., the date of supervisory review predated the date travel ended by nearly a year) and that these data entry errors were not detected. Without system edit checks to detect data anomalies, the accuracy and reliability of the data are questionable, and consequently, management cannot carry out its oversight duties. Although DOD recognized the need to improve the travel reimbursement process in the 1990s and has been developing and implementing DTS, this system is currently not able to process mobilized travel authorizations (e.g., mobilization orders, TCS orders, and SNAs) and vouchers and, therefore, does not provide an end-to-end solution for paying mobilized Army Guard soldiers for travel entitlements. Furthermore, DFAS auditors have reported additional problems with DTS. Given DOD’s past failed attempts at developing and implementing systems on time, within budget, and with the promised capability, and that the effort has already been under way for about 8 years, it is likely that the department will be relying on the existing paper-intensive, manual system for the foreseeable future. At the end of fiscal year 2003, DOD reported investing about $288 million in DTS. In 2003, the Program Management Office-Defense Travel System (PMO-DTS) estimated that an additional $251 million was needed for DTS to be fully operational at the end of fiscal year 2006, resulting in an estimated development and production effort spanning over 10 years at a total cost of $539 million. This cost estimate does not include deploying DTS to the majority of the Army Guard USPFOs. Although the Army Guard supplies most of the mobilized soldiers in support of the global war on terrorism, DTS deployment to the 54 USPFOs is not scheduled to begin until fiscal year 2006.
The Army is expected to fund the majority of the costs to field the program to the USPFOs, where mobilized Army Guard travel begins. The DTS total life cycle cost estimate, including the military service and Defense agencies, is $4.39 billion. While DTS purports to integrate the travel authorization, voucher preparation, and approval and payment process for temporary duty (TDY) travel, it does not integrate travel authorizations and reimbursements for mobilized Army Guard soldiers. DOD officials have stated that currently DTS cannot process mobilized Army Guard travel reimbursements involving various consecutive and/or overlapping travel authorizations. DOD officials acknowledged that DTS would not produce the various travel authorizations related to mobilization travel, because DOD is presently designing a pay and personnel system, the Defense Integrated Military Human Resources System (DIMHRS), which will accomplish this task. DOD’s current strategy is for DTS to electronically capture the travel authorization information from DIMHRS, after which a soldier would use DTS to prepare and submit a travel voucher. This would require that DIMHRS have the capability to electronically capture the various authorizations applicable to Army Guard travel, such as mobilization and temporary change of station orders, and that SNAs are generated from a standard, automated system that can effectively interface with DTS. DOD officials do not plan to implement DIMHRS at the Army Guard until March 2006. As a result, the timing and ability of the Army Guard to process mobilization travel vouchers through DTS appears to hinge on the successful development and implementation of DIMHRS and its interface with DTS. DTS is not being designed to identify and calculate travelers’ late payment interest and fees in accordance with TTRA. 
As discussed earlier in this statement, DOD’s current travel computation system does not automatically identify and calculate the TTRA late payment interest and fees. Furthermore, no controls are in place to ensure that the manual calculation is performed and that the interest and fee amounts are entered into the system for payment. According to DTS officials, DOD has not directed that DTS be designed to include such a feature. As a result, as currently designed, DTS provides no assurance that late payment interest and fees will be paid to travelers as required pursuant to TTRA. A DFAS Kansas City Statistical Operations and Review Branch report identified several significant problems with the current implementation of DTS. Specifically, for the first quarter of fiscal year 2004, DFAS reported a 14 percent inaccuracy rate in DTS travel payments of airfare, lodging, and meals and incidental expenses. This report cited causes similar to those we identified in the areas of traveler preparation of claims and official review of claims. In addition to these deficiencies, DFAS noted errors in DTS calculations for meals and incidental expenses. Another DFAS Internal Review report, dated June 15, 2004, indicated that improvements were needed in DTS access controls to prevent or detect unauthorized access to sensitive files. DFAS Internal Review reported that because PMO-DTS had not established standard user account review and maintenance procedures, DTS is vulnerable to unauthorized individuals gaining access to the system and confidential information, resulting in potential losses to DOD employees and the government. The report also noted that DTS was not adequately retaining an audit trail of administrative and security data, leaving management unable to investigate suspicious activities or research problem transactions.
DOD, the Army, the National Guard Bureau, and DFAS reported several positive actions during the course of our work that, if implemented as reported, should improve the accuracy and timeliness of travel reimbursements to Army Guard soldiers. Because these actions were relatively recent, we could not evaluate their effectiveness. For example, DFAS officials told us that they have taken several steps to reduce the number of vouchers being returned to soldiers due to missing signatures and missing mobilization orders. DFAS and the National Guard Financial Services Center—a field operating agency of the Chief, National Guard Bureau, that performs selected financial services—entered into a Memorandum of Agreement effective February 2004 whereby DFAS will obtain the assistance of the National Guard to address problems with certain vouchers that would otherwise be returned to soldiers. According to DFAS CTO data, since the implementation of the agreement through the end of fiscal year 2004, 13,523 travel vouchers were coordinated with the National Guard in this manner rather than initially being sent back to the soldiers for correction. In the human capital area, DFAS CTO enhanced its training program for voucher examiners. For example, DFAS CTO used computer-based training to provide new personnel with an initial overview of WINIATS and voucher computation procedures. In addition, a DFAS CTO official told us that a 40-hour course, which was designed specifically to address the types of vouchers received by DFAS CTO, has been established to train new employees. In addition, to help ensure that the Army Guard receives timely and accurate travel reimbursements, other immediate steps are needed to mitigate the most serious problems we identified.
Accordingly, in our related report (GAO-05-79), we made 19 short-term recommendations to the Secretary of Defense to address weaknesses we identified that included the need for (1) mobilization and related travel orders to clearly state meal entitlements, (2) standardization of the form and content of SNAs for contingency operations, and (3) appointment of an ombudsman with accountability for resolving problems Army Guard soldiers encounter at any point in the travel authorization and reimbursement process. We also made 4 recommendations as part of longer term initiatives to reform travel, pay, and personnel systems, including the need to integrate or interface automated travel vouchers, SNAs, TCS orders, mobilization orders, and other relevant systems. In its comments on a draft of our companion report, DOD agreed with 21 of our 23 recommendations and outlined its actions to address the deficiencies noted in our report. DOD partially concurred with 2 recommendations regarding the need for an automated, centralized system for SNA per diem authorizations and the need for DTS to include capabilities to identify, calculate, and pay late payment interest and fees required pursuant to TTRA. Due to the financial burdens on the affected soldiers documented in our report, we continue to believe that DOD should implement measures to resolve these matters both on an interim and long-term basis. As Army Guard soldiers heed the call to duty and serve our country in vital and dangerous missions both at home and abroad, they deserve nothing less than full, accurate, and timely reimbursements for their out-of-pocket travel expenses. However, just as we recently reported for Army Guard and Reserve pay, our soldiers are more often than not forced to contend with the costly and time-consuming “war on paper” to ensure that they are properly reimbursed. 
The process, human capital, and automated systems problems we identified related to Army Guard travel reimbursement are additional examples of the broader, long-standing financial management and business transformation challenges faced by DOD. Similar to our previously reported findings for numerous other DOD business operations, the travel reimbursement process has evolved over years into the stovepiped, paper-intensive process that exists today and was ill-prepared to respond to the current large and sustained mobilizations. Without systematic oversight of key program metrics, breakdowns in the process remain unidentified and effective controls cannot be established and monitored. Finally, DOD’s long-standing inability to develop and implement systems solutions on time, within budget, and with the promised capability appears to be a critical impediment in this area. The problems we identified with DOD’s longer term automated systems initiatives—DIMHRS and DTS—raise serious questions of whether and when mobilized soldiers’ travel reimbursement problems will be resolved. For further information about this testimony, please contact Gregory D. Kutz at (202) 512-9095 or kutzg@gao.gov. Staff making key contributions to this report include Paul S. Begnaud, Norman M. Burrell, Mary Ellen Chervenic, Francine M. DelVecchio, Lauren S. Fassler, Dennis B. Fauber, Wilfred B. Holloway, Patty P. Hsieh, Charles R. Hodge, Jason M. Kelly, Stephen Lipscomb, Julia C. Matta, Sheila D. Miller, John Ryan, Bennett E. Severson, Robert A. Sharpe, Patrick S. Tobo, and Jenniffer F. Wilson. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony outlines (1) the impact of the recent increased operational tempo on the process used to reimburse Army Guard soldiers for travel expenses and the effect that travel reimbursement problems have had on soldiers and their families; (2) the adequacy of the overall design of controls over the processes, human capital, and automated systems relied on for Army Guard travel reimbursements; (3) whether the Department of Defense's (DOD) current efforts to automate its travel reimbursement process will resolve the problems identified; and (4) other DOD actions to improve the accuracy and timeliness of Army Guard travel reimbursements. Mobilized Army Guard soldiers have experienced significant problems getting accurate, timely, and consistent reimbursements for out-of-pocket travel expenses. These weaknesses were more glaring in light of the sustained increase in mobilized Guard soldiers following the terrorist attacks of September 11, 2001. To its credit, the Defense Finance and Accounting Service (DFAS) hired over 200 new personnel to address travel voucher processing backlogs and recently upgraded its training. However, Guard soldiers in our case study units reported a number of problems they and their families endured due to delayed or unpaid travel reimbursements, including debts on their personal credit cards, trouble paying their monthly bills, and inability to make child support payments. The soldier bears primary responsibility for travel voucher preparation, including obtaining paper copies of various types of authorizations. DFAS data indicate that it rejected and asked soldiers to resubmit about 18 percent of vouchers during fiscal year 2004--a churning process that added to delays and frustration. Also, existing guidance did not clearly address the sometimes complex travel situations of mobilized Army Guard soldiers, who were often housed off-post due to overcrowding on military installations.
Further, DOD continued to be noncompliant with a law that requires payment of late payment interest and fees when soldiers' travel reimbursements are not timely. With respect to human capital, GAO found a lack of oversight and accountability and inadequate training. Automated systems problems, such as nonintegration of key systems involved in authorizing and paying travel expenses and failure to automate key processes, also contributed to the inefficient, error-prone process. DOD has been developing and implementing the Defense Travel System (DTS) to resolve travel-related deficiencies. However, DTS will not address some of the key systems flaws. For example, DTS is currently not able to process mobilized soldier travel authorizations and vouchers and identify and calculate late payment interest and fees.
The WOTC is intended to encourage employers to hire individuals from eight targeted groups that have consistently high unemployment rates. The targeted groups are individuals in families currently or previously receiving welfare benefits under the Temporary Assistance for Needy Families (TANF) program or its precursor, the Aid to Families With Dependent Children (AFDC) program; veterans in families currently or previously receiving assistance under a food stamp program; food stamp recipients—aged 18 through 24 years—in families currently or previously receiving assistance under a food stamp program; youth—aged 18 through 24 years—who live within an empowerment zone or enterprise community; youth—aged 16 and 17 years—who live within an empowerment zone or enterprise community and are hired for summer employment only; ex-felons in low-income families; individuals currently or previously receiving Supplemental Security Income; and individuals currently or previously receiving vocational rehabilitation services. Additional eligibility criteria apply to these groups. For example, welfare recipients must have received AFDC or TANF benefits for any 9 months during the 18-month period ending on the hiring date in order to be eligible for the program. The amount of tax credit that employers can claim under this program depends upon how long they retain credit-eligible employees and the amount of wages they pay to WOTC-certified employees. Employers who retain certified employees for at least 120 but less than 400 hours qualify for a credit of 25 percent of up to $6,000 in wages, for a maximum credit of $1,500. Employers who retain certified employees for 400 hours or more qualify for a credit equal to 40 percent of up to $6,000 in wages, for a maximum credit of $2,400. The credit is calculated using the actual first-year wages paid or incurred. Employers must reduce their tax deductions for wages and salaries by the amount of the credit.
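The credit tiers just described reduce to a short computation. This sketch simply restates the percentages and wage caps given in the text; it is illustrative, not tax advice, and omits the eligibility and certification rules.

```python
def wotc_credit(hours_worked, first_year_wages):
    """WOTC for one certified employee, per the rates in the text:
    no credit under 120 hours of retention; 25% of up to $6,000 in
    first-year wages for 120-399 hours (max $1,500); 40% of up to
    $6,000 for 400 hours or more (max $2,400)."""
    if hours_worked < 120:
        return 0.0
    rate = 0.40 if hours_worked >= 400 else 0.25
    return round(rate * min(first_year_wages, 6000.0), 2)
```

For example, an employer retaining a certified employee for 500 hours at $4,000 of first-year wages would qualify for 40 percent of $4,000, or $1,600; at $10,000 of wages, the $6,000 cap limits the credit to the $2,400 maximum quoted above.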
In addition, as part of the general business credit, the WOTC is subject to a yearly cap. However, excess WOTC can be used to offset tax liabilities in the preceding year or in any of the 20 succeeding years. The WOTC was first authorized in the Small Business Job Protection Act of 1996 to improve upon and replace a similar, expired program—the Targeted Jobs Tax Credit program. The WOTC was designed to mitigate some shortcomings that had been identified in the previous credit program—specifically, that it gave employers windfalls for hiring employees whom they would have hired anyway and that too many credit-eligible employees left their jobs before they received much work experience. Some target groups were reformulated with the intention of focusing narrowly on individuals whom firms would be unlikely to hire without the credit. In addition, the minimum employment period for receiving the higher rate of credit was lengthened. The WOTC became effective in October 1996 and has since been reauthorized. It is due to expire in December 2001. In fiscal year 1999, 335,707 individuals were certified as members of the targeted groups, making their employers eligible for the credit if the workers remained on the job for at least 120 hours. Individuals in the welfare target group made up 54 percent of the individuals certified. Youth in the food stamp target group made up another 20 percent of the individuals certified. The other six target groups each accounted for 1 to 8 percent of the remaining certifications. Federal and state agencies share responsibility for administering the WOTC program. The Department of the Treasury, through the Internal Revenue Service (IRS), is responsible for the tax provisions of the credit. The Department of Labor, through the Employment and Training Administration, is responsible for developing policy and program guidance and providing oversight of the WOTC program.
In addition, the Department of Labor awards grants to states for administering the eligibility determination and certification provisions of the program. State agencies verify and report to the Department of Labor on state certification activities. All 50 states, the District of Columbia, Puerto Rico, and the U.S. Virgin Islands participate in the program. Neither Department of the Treasury nor Department of Labor regulations require these agencies to take any action regarding displacement or churning. (Displacement occurs when an employer dismisses an existing worker in order to hire a credit-eligible one; churning occurs when an employer dismisses a WOTC-certified employee once the maximum credit has been earned and replaces that employee with a new credit-eligible hire.) The State of New York and the Department of Labor have undertaken studies that may have findings relevant to whether employers engage in displacement or churning practices. The New York study, which was issued in 1998, concluded, among other things, that employer windfalls from churning employees are minimal. This conclusion was based on analysis of state WOTC and Wage Reporting databases with records on 12,609 individuals in New York covering the fourth quarter of 1996 through the first quarter of 1998. The study did not address displacement. The Department of Labor study is ongoing, so its results are not yet available. The study is using in-depth interviews with 16 employers who hire a large number of employees under the WOTC program to examine the hiring, retention, and career advancement experiences of WOTC employers and employees. To obtain information on the characteristics of employers, we analyzed national tax data from the IRS' Statistics of Income Division for 1997, the most recent year that data were available, and state WOTC data from agencies in California and Texas for 1997 through 1999. To obtain information relating to the extent of displacement and churning, we surveyed a stratified probability sample of employers who have participated in the WOTC program in California and Texas.
The participating employers that we surveyed are those with repeated and recent experience in the program in that they hired at least one WOTC employee in 1999 and hired at least one WOTC employee in another year. Our sample is projectible to the entire population of 1,838 employers in California and Texas who met these hiring criteria. For information relating to churning, we also analyzed WOTC and unemployment insurance data for these states. With these data, we determined the total earnings and length of employment of WOTC-certified employees and examined this information for evidence concerning the extent and likelihood of churning. For additional information relating to displacement, we analyzed national employment data in the Commerce Department's Current Population Survey (CPS) for 1995 through 1999. We used the CPS data to estimate employment rates for members of groups targeted by the credit and members of groups not targeted by the credit but who may substitute in employment for target group members. The absence of a centralized database containing the necessary detailed information precluded a nationwide survey of employers and analysis of employment practices. We chose California and Texas because they are among the states that certified the largest number of employees to participate in the WOTC program in fiscal year 1999, have electronic databases of their WOTC program data, and provided a somewhat geographically diverse population. Together, California and Texas certified about 12 percent of WOTC-eligible individuals in fiscal year 1999, ranking them second and fifth, respectively, in WOTC certifications for that fiscal year. When reporting our estimates derived from the sample and our analysis of program and unemployment insurance data, we combined data from both states because the results in the two states were similar.
Furthermore, the confidence intervals for all point estimates in the letter of this report are no more than 10 percentage points on either side of the estimate. Our survey and state agency data pertain only to participating employers in California and Texas. However, to assure ourselves that our findings are likely to apply to WOTC employers in the rest of the nation, we examined the federal laws and regulations related to the credit, surveyed state administrators responsible for the credit, and analyzed the data on participating employers. The federal tax benefits offered by the WOTC are the same across all states. Therefore, we have no reason to believe that employers in California and Texas respond differently to these incentives than employers in other states. We spoke to the officials who were responsible for administering the WOTC program in all 50 states, and they all confirmed that their states made no effort to either encourage or discourage displacement or churning. From the participating employer data, we determined that employers who operate in multiple states account for most of the WOTC hires in California and Texas. Moreover, we found no differences relevant to churning and displacement between employers in California and Texas in the results of our survey and agency data analyses, suggesting that our conclusions would be generalizable to employers in other states as well. We did not evaluate how effective or efficient the WOTC has been in increasing the employment and earnings of target group members. To do this, we would have had to determine the extent to which (1) the credit caused employers to hire workers that they would not otherwise have hired, (2) employees' experience with WOTC employers increases their current and future employment and earnings, and (3) employers received "windfall" credits for employees whom they would have hired anyway. We did not address any of those issues in this report. We did not verify the state and federal databases we used.
However, agreements between the Department of Labor and state WOTC offices require the states to conduct audits of the accuracy of their WOTC records. A review of studies of the accuracy of unemployment insurance data, which was conducted for the National Research Council, concluded that the data appear to be accurate. The study notes that employers are required by law to report the data and that intentional inaccuracies are subject to penalties. This same review of studies found that the CPS data are a valuable source of information on the national low-income population, with broad and fairly accurate measures of income. However, the study noted that sample sizes may be small for some subpopulations (e.g., welfare recipients in particular states), and the percentage of some subpopulations covered by the survey appears to have declined modestly in recent years. The tax data from IRS’ Statistics of Income Division undergo numerous quality checks but do not include information from amended tax returns. We conducted our review from January 2000 through December 2000 in accordance with generally accepted government auditing standards. Our scope, methodology, and the sources of the data we used are discussed further in appendix I. We requested comments on a draft of this report from the Department of Labor and asked cognizant agencies in California and Texas to review the draft’s discussion of their WOTC efforts. The comments are discussed near the end of this letter. Employers who were large in terms of gross receipts earned most of the credit reported in 1997, the latest year for which data were available. Data from the agencies that certify WOTC employees in California and Texas showed that a relatively small number of employers did most of the hiring in the WOTC program from 1997 through 1999. 
Employers’ participation in the program was greatly influenced by such factors as the opportunity to obtain the credit, address a labor shortage, and be a good corporate citizen. In 1997, nationwide, an estimated 4,465 corporations earned an estimated total of $134.6 million in tax credits. Approximately 66 percent of the credit was earned by corporations with gross receipts of $1 billion or more. Table 1 shows the amount of credit that businesses earned by amount of gross receipts. Most of the credit was reported by businesses engaged in nonfinancial services, such as hotel, motel, and other personal services, and retail trade. These industries accounted for 81 percent of the credit reported. Table 2 shows the credit amounts earned by businesses in each industry in 1997. The aggregate amount of WOTC earned by taxpayers is likely to have grown significantly between 1997 and 1999 because the number of WOTC certifications grew significantly nationwide over that period—from 126,113 to 335,707. However, based on the certification data we have from California and Texas, we believe that the percentage distribution of the credit by size of employer and by industry has not changed dramatically. The size distribution of employers measured by number of WOTC hires did not change significantly in either California or Texas during that period. The distribution of certifications by industry also changed little in Texas; we do not have industry information for California. Our analysis of WOTC certification data in California and Texas for 1997 through 1999 showed that a few employers did most of the hiring in the WOTC program. Employers who hired more than 100 WOTC-certified employees represented about 3 percent of all employers in the program but accounted for about 83 percent of all hires. About 65 percent of employers in the program made only one WOTC hire. The larger WOTC employers spent more time in the program. 
Employers who hired more than 100 WOTC-certified employees were in the program for an average of 10 or more quarters, while those hiring 5 or fewer employees were in the program for an average of less than 3 quarters. The larger WOTC employers also hired more frequently. Employers who hired in every year accounted for about 83 percent of total hires while representing about 8 percent of all employers. Table 3 shows the distribution of the number of employers, the number of WOTC-certified employees, and time in the program, by size of employers (in terms of WOTC-certified hires) for 1997 through 1999. The employers that we surveyed in two states reported that the opportunity to obtain a tax credit was by far the factor that most influenced their decisions to participate in the WOTC program, followed by the opportunity to address labor shortages and be a good corporate citizen. According to our survey, the opportunity to obtain the credit was the largest influence, with an estimated 85 percent of participating employers in California and Texas saying they were greatly influenced by this opportunity. Figure 1 shows the extent to which employers in the states we reviewed said that specific factors greatly influenced their participation in the program. Participation in the program appears often to have had support from high levels within the companies. For example, for an estimated 57 percent of California and Texas employers, the possibility of participating in the program was raised by someone inside the company rather than by an outside organization. In those situations, high-level management was responsible for raising the idea of participating in the WOTC program about three-quarters of the time, according to our survey-based projections. Displacement and churning are likely to be limited, if they occur at all, because, as our survey of employers in California and Texas indicates, most employers view these practices as having little or no cost-effectiveness.
This view is consistent with the employers' estimate that the credit offsets less than half the costs of recruiting, hiring, and training credit-eligible employees. Our employer survey also indicates that most vacancies filled by credit-eligible employees occur for reasons unrelated to displacement and churning, such as voluntary separations. Furthermore, our survey indicates that most employers change at least one recruitment, hiring, or training practice, which, studies suggest, may make these employers more likely to retain new hires. Our analysis of program and employment data from state agencies supports what we learned from the survey regarding the low probability of churning. These data show that employment rarely ends near the earnings level that yields the maximum credit, and employees earning the maximum are no more likely to separate than are other WOTC-certified employees. The agency data do not allow us to perform similar tests for the occurrence of displacement. However, displacement is less likely to occur when employers are increasing their workforce—as has been the case since the introduction of the credit—because they have less need to dismiss non-WOTC workers in order to hire WOTC workers. Most employers do not consider displacement and churning to be cost-effective employment practices. Based on our survey, we estimate that 93 percent of participating employers in California and Texas would agree that displacement is cost-effective to little or no extent. An estimated 93 percent of employers also hold that view regarding churning. Displacement and churning are not cost-effective if the cost of recruiting, hiring, and training a new employee exceeds the amount of WOTC that an employer expects to earn from that employee. Under those circumstances, the WOTC provides no incentive for that employer to dismiss an existing employee to hire a WOTC-certified one.
According to our employer survey, on average, the tax credit offsets less than one-half (47 percent) of this cost. Furthermore, employers told us that it is important to reduce the turnover of WOTC-certified employees. Based on our survey, we estimate that for 71 percent of participating employers in those two states, retaining employees after the maximum tax credit has been secured is very important. An additional 20 percent would view retention of employees after the maximum tax credit is secured as somewhat important. For those employers who could tell us the reasons for the vacancies that were filled by WOTC-certified employees, an estimated average of 61 percent of such vacancies arose because the previous employees quit. On average, the next most frequent reasons for the vacancies were that the previous employees were terminated for cause and that the positions were newly created. Figure 2 shows the distribution of California and Texas employers' responses regarding the reasons for vacancies. None of the reasons given were related to displacement or churning. About 85 percent of employers in California and Texas have changed a recruiting, hiring, or training practice to secure the WOTC and better prepare credit-eligible new hires, according to estimates that are based on employer-reported information from our survey. Furthermore, an estimated 43 percent of employers in these two states have changed their practices in all three of these areas. A 1999 study conducted by Jobs for the Future found that employers who successfully employed welfare recipients—the largest targeted group in the WOTC program—developed strategies to improve access, retention, and advancement of those individuals.
The strategies used by employers in our survey included targeted recruitment; outreach and screening assistance from organizations that know and understand the targeted group; pre-employment training, such as training in communication skills; and mentors, among other strategies. These strategies are consistent with ones these researchers identified in other studies. Based on the results of our survey, we estimate that about two-thirds of participating employers in the two states changed at least one recruitment practice to secure the tax credit. The most frequent change in recruitment practice was that employers listed job openings with a public agency or partnership. An estimated 49 percent of participating employers in the two states took such an action. Figure 3 shows the extent to which participating employers changed recruitment practices to secure the credit. An estimated three-quarters of participating employers in the two states changed at least one hiring practice to secure the tax credit. Our survey indicated that the most frequent change in hiring practices was that employers began training their managers about the tax credit, with an estimated 66 percent of employers making that change. Figure 4 shows the extent to which participating employers changed hiring practices to secure the credit. Based on our survey, we estimate that about one-half of participating California and Texas employers changed at least one training practice to better prepare WOTC new hires. For example, an estimated 40 percent began providing mentors to their new hires. Figure 5 shows the extent to which participating employers changed training practices to secure the credit. Displacement is less likely to occur when employers are increasing their workforce because they have less need to dismiss non-WOTC workers in order to hire WOTC workers. Since the introduction of the credit in the last quarter of 1996, employment in the U.S.
economy has grown robustly, even for low-skilled workers. Using the CPS data, we found that employment rates grew over the period for certain target group members and closely related nontarget group members that may substitute in employment for the target groups. For example, we estimated employment rates for welfare recipients in the CPS (those on welfare for 9 or more months in the previous year) who would be members of the group targeted by the credit. We also estimated employment rates for welfare recipients who would not be target group members (those on welfare less than 9 months of the previous year). The employment rate of the target group welfare recipients grew by 47 percent, and that of nontarget welfare recipients by 12 percent, from 1995 through 1999. Figure 6 shows employment rates over the period for members of the targeted and nontargeted welfare groups. Our analysis of the WOTC and unemployment insurance data in California and Texas showed that most certified employees do not earn enough income while working for WOTC employers for churning to make sense for those employers. Sixty-seven percent of certified employees separated from their employers after earning less than $3,000. Furthermore, only a relatively small number of certified employees earned incomes in the range where churning may be most likely to occur. Employers wishing to maximize their credit would retain WOTC employees until they had earned a total of $6,000, the maximum earnings eligible for the credit. Only about 7 percent of certified employees separated after earning incomes between $5,000 and $7,000 (a range of earnings within $1,000 of the credit maximizing level). If employers did not churn when employees reached this level of earnings, it seems less likely that they would churn at other levels of earnings. Figure 7 shows the percentage of employees separating after earning a given amount of income.
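The earnings-range tabulation described above can be sketched with a small helper. The function name and the earnings list are illustrative assumptions, not the report's actual code or data; in the report, earnings at separation came from matched WOTC and unemployment insurance records.

```python
def share_separating_between(earnings_at_separation, low, high):
    """Fraction of certified employees whose cumulative earnings at
    separation fall between `low` and `high` dollars, inclusive."""
    hits = sum(1 for e in earnings_at_separation if low <= e <= high)
    return hits / len(earnings_at_separation)

# Illustrative earnings at separation (dollars), not the report's data.
sample = [800, 1200, 1500, 2500, 2900, 4100, 5600, 6800, 9000, 12000]
near_max_share = share_separating_between(sample, 5000, 7000)  # 2 of 10
```

Applied to the actual state records, this kind of tabulation produced the finding that only about 7 percent of separations fell within $1,000 of the $6,000 credit-maximizing earnings level.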
In addition to determining the percentage of WOTC-certified employees who separated near the maximum earnings level, we also analyzed the effect of reaching the maximum earnings level on the likelihood of separation. We used a statistical technique to measure the likelihood of separation of WOTC-certified employees who reach the maximum earnings level in a given quarter relative to the likelihood of separation of WOTC-certified employees who do not reach the maximum. The technique that we used allows us to measure the effect on the likelihood of separation, while controlling for the effects of other employee characteristics, such as membership in a particular target group. The measured effect is, therefore, the net effect on the likelihood of separation (i.e., net of the effects of the other characteristics). Using this technique, our analysis showed that WOTC-certified employees who reach the maximum earnings in a given quarter (i.e., those whose cumulative earnings are between $5,000 and $7,000) are no more likely to separate from their WOTC employers than those employees who do not reach the maximum. In addition, the analysis showed that reaching the maximum has no effect on the likelihood of separation across most target groups. For example, members of the welfare target group are no more likely to separate in the quarter in which they reach the maximum than are members of other target groups who reach the maximum. Besides differences in target group membership, this analysis also controlled for differences in the occupation of employees, size of employers in terms of total employment, and other factors. This analysis is described in more detail in appendix III. The fact that an overwhelming majority of WOTC employers whom we surveyed in California and Texas considered displacement and churning to have little or no cost-effectiveness leads us to conclude that few of them would engage in these practices. 
Our analyses of WOTC employment data compiled by the two states provide further support for this conclusion with respect to churning. Further, although our survey and state agency data pertain only to participating employers in California and Texas, we believe that our conclusions regarding the occurrence of displacement and churning are likely to hold true in the remainder of the nation. The federal tax benefits offered by the WOTC are the same across all states. Therefore, we have no reason to believe that employers in California and Texas would be less responsive to those incentives than employers in other states. Moreover, employers that operate in multiple states account for most of the WOTC hires in California and Texas. We spoke to the officials who were responsible for administering the WOTC program in all 50 states, and they all confirmed that their states made no efforts to either discourage or encourage displacement or churning. The fact that there were no differences relevant to displacement and churning between the results of our survey and agency data analyses for California and those for Texas also gives credence to the generalizability of our conclusions. The Department of Labor sent e-mail comments on a draft of this report to us on March 1, 2001. The Department of Labor made suggestions for clarifying information in the report. We modified the report where appropriate. The Department of Labor also stated that, given the wealth of evidence in our report indicating that displacement and churning are limited, our conclusions regarding the use of these practices could be stronger. We did not strengthen our characterization of the extent to which displacement and churning may be occurring because we believe that our conclusion appropriately reflects the strength of our methodology and resulting data.
Agencies in California and Texas responsible for the WOTC program also reviewed our draft report regarding our description of the credit program in their state and our analysis of state data. The agencies stated that they had no suggestions for changes in our report. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we are sending copies of this report to Representative William J. Coyne, Ranking Minority Member, Subcommittee on Oversight, House Committee on Ways and Means; the Honorable Elaine L. Chao, Secretary of Labor; the Honorable Charles O. Rossotti, Commissioner of Internal Revenue; Mark Heilman, Chief, Job Services Division, California Employment Development Department; and John Carlson, WOTC Coordinator, Texas Workforce Commission. Copies of this report will be made available to others upon request. If you have any questions regarding this report, please contact me or James Wozny at (202) 512-9110. Key contributors to this report are acknowledged in appendix IV. The objectives of this report were to determine (1) the characteristics of employers who have participated in the WOTC program and (2) the extent, if any, to which employers have practiced displacement and churning. To obtain information on the characteristics of employers, we analyzed national tax data from the Statistics of Income Division of the Internal Revenue Service for 1997, the most recent year that data were available, and state WOTC data from agencies in California and Texas for 1997 through 1999. To obtain information relating to the extent of displacement and churning, we surveyed a stratified probability sample of employers who have participated in the WOTC program in California and Texas. Our survey of employers is discussed in more detail below. 
For information relating to churning, we also analyzed WOTC and unemployment insurance data for California and Texas. With these data, we determined the total earnings and length of employment of WOTC-certified employees and analyzed this information for evidence concerning the extent and likelihood of churning. Our methodology for this analysis is discussed in detail in appendix III. For additional information relating to displacement, we analyzed national employment data in the Commerce Department's Current Population Survey (CPS) for 1995 through 1999. We used the CPS to estimate employment rates for members of groups targeted by the credit and members of groups not targeted by the credit but who may substitute in employment for target group members. To obtain information relating to the extent of displacement and churning, we identified participating employers from databases of employees who had applied for certification under the WOTC program. These databases are maintained by the state agencies in California and Texas that are responsible for determining the eligibility of employees as members of targeted groups and issuing certifications of eligibility to employers. Our desired survey population was initially managers who were hiring WOTC program employees nationwide. However, since this information is kept by each state office in various forms, it was not feasible to assemble a national sampling frame. Therefore, we used data from two of the five states with the largest numbers of WOTC employee participants in 1999. California and Texas were the two states of the five largest with manageable electronic databases of WOTC employees in 1999. We identified employers from these lists by their unique employer identification numbers (EIN), which are used by IRS.
In order to have a population of employers with repeated and recent experience with the program, we included only those who had hired at least one certified employee, hired at least once in 1999, and hired at least once in 1997 or 1998. To identify employers from the databases of WOTC-eligible employees, we aggregated the employees according to their employer’s EIN. For the purposes of our sample, we defined “employer” as a unique EIN and selected a stratified random sample of 157 employers from the 975 total employers in California and 148 employers from the 863 total employers in Texas. The strata were defined by how many WOTC employees the employer hired. Because employers who had more than 100 WOTC hires accounted for 80 percent of the total WOTC hires, those employers hiring more than 100 employees were a separate stratum from those hiring between 2 and 100 WOTC employees. In this way, we were able to sample more employers with larger numbers of WOTC hires. Table 4 shows the breakdown by state and stratum of the number of employers in the population, the number selected into the sample, and the number who responded to the survey. In total, we sampled 305 employers and received responses from 225, for an overall response rate of 74 percent. In addition to the EINs for the employers associated with WOTC-eligible employees, the databases included limited information for a contact person. To try to ensure that our surveys reached the correct person at the employer site, we contacted every sampled employer by phone first. In this initial phone call, we explained the purpose of the survey, the kinds of questions we would be asking, and the location for which we were interested in obtaining information, and we asked for the name of the most appropriate respondent. Most initial contacts indicated that they were the most appropriate respondent or that they would receive the survey and forward it as necessary. 
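The two-stratum selection described above can be sketched as follows. This is an illustrative sketch, not the survey's actual software: the function name, the toy population, and the per-stratum sample sizes are all assumptions; only the stratum cutoffs (more than 100 hires versus 2 to 100) come from the design described in the text.

```python
import random

def stratified_sample(hires_by_ein, n_large, n_small, seed=1):
    """Split employers (keyed by EIN) into two strata by number of WOTC
    hires -- more than 100 versus 2 to 100 -- and draw a simple random
    sample from each, so that employers with many hires can be sampled
    more heavily than their share of the population."""
    rng = random.Random(seed)
    large = sorted(ein for ein, hires in hires_by_ein.items() if hires > 100)
    small = sorted(ein for ein, hires in hires_by_ein.items()
                   if 2 <= hires <= 100)
    return (rng.sample(large, min(n_large, len(large))),
            rng.sample(small, min(n_small, len(small))))
```

Sorting each stratum before sampling makes the draw reproducible for a given seed regardless of dictionary ordering.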
Approximately 4 weeks after the initial mailout, we conducted a second mailout to those who had not yet responded. Approximately 4 weeks after that, we followed up with all remaining nonrespondents by telephone, reminding them that they had not responded and asking them to complete a shorter version of the questionnaire over the telephone. Because the survey results come from a sample, all results are estimates that are subject to sampling errors. These sampling errors measure the extent to which samples of these sizes and structure are likely to differ from the populations they represent. Each of the sample estimates is surrounded by a 95-percent confidence interval, indicating that we can be 95-percent confident that the interval contains the actual population value. Unless otherwise noted, the 95-percent confidence intervals for all percent estimates in the letter of the report do not exceed plus or minus 10 percentage points around the estimate. In addition to the reported sampling errors, the practical difficulties of conducting any survey may introduce other types of error, commonly referred to as nonsampling errors. For example, differences in how a particular question is interpreted may introduce variability into our survey results that is difficult to measure. We conducted pretests of the survey to evaluate the wording of the questions. One particular source of nonsampling error unique to this survey involves the location to which a respondent's answers refer. In some cases, the employer or EIN that we selected corresponded to a very large corporation, and our contact was in a hiring division located outside the state or local office of interest. In the initial phone calls, the location of interest was specified; however, the respondent may have responded with a different location in mind or may have been unable to take into account variation in hiring practices across several local offices.
Careful pretesting of the survey did not uncover such location-related misunderstandings, but the possibility may add variation to our survey results. Our survey and state agency data pertain only to participating employers in California and Texas. However, to assure ourselves that our findings are likely to apply to WOTC employers in the rest of the nation, we examined federal laws and regulations related to the credit, surveyed state administrators responsible for the credit program, and analyzed data on the participating employers. The federal tax benefits offered by the WOTC are the same across all states. Therefore, we have no reason to believe that employers in California and Texas respond differently to these incentives than do employers in other states. We spoke to the officials who were responsible for administering the WOTC program in all 50 states, and they all confirmed that their states made no effort to either encourage or discourage displacement or churning. Moreover, employers that operate in multiple states account for most of the WOTC hires in California and Texas. We found no significant differences between employers in California and Texas in the results of our survey and agency data analyses, suggesting that our conclusions will be generalizable to employers in other states as well. We did not verify the state and federal databases we used. However, agreements between the Department of Labor and state WOTC offices require the states to conduct audits of the accuracy of state WOTC records. A review of studies of the accuracy of unemployment insurance data conducted for the National Research Council concluded that the data appear to be accurate. The review noted that employers are required by law to report the data, and intentional inaccuracies are subject to penalties. This same review of studies found that the CPS data are a valuable source of information on the national low-income population, with broad and fairly accurate measures of income.
However, the study noted that sample sizes might be small for some subpopulations (e.g., welfare recipients in particular states) and the percentage of some subpopulations covered by the survey appears to have declined modestly in recent years. The sample size for the targeted and nontargeted groups in our analysis was sufficiently large that the confidence intervals for the estimated employment rates were no more than 6 percentage points on either side of the estimate. We concluded that the slight decline in coverage of welfare recipients is unlikely to affect our analysis of trends in employment rates over the period. As noted, we analyzed the tax data from IRS’ Statistics of Income Division. These data undergo numerous quality checks but do not include information from amended tax returns (i.e., revisions made by taxpayers themselves after their initial filings). To investigate whether reaching the maximum earnings in a given quarter affects the likelihood that employees will separate from their WOTC employers, we used state WOTC and unemployment insurance data on total earnings and duration of employment. We also used data from these sources on other employee characteristics, such as target group and occupation, and employer characteristics, such as total employment and the industry of the employer. The data were collected for 108,935 WOTC- certified employees and 5,347 employers in California and Texas for the years 1997 through 1999. We used the logistic regression model to quantify the effect of reaching the maximum earnings on the probability that the employee separates from the employer. We also used the model to estimate the effect of other employee characteristics, such as current wages (total earnings in a given quarter) and membership in a target group, on the probability of separation. The results of this analysis are presented as odds ratios in table 5. 
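As a simple illustration of the odds-ratio measure used in table 5, the unadjusted odds ratio for a single characteristic can be computed from a 2x2 cross-tabulation. The counts below are hypothetical; the logistic regression reported in the appendix additionally adjusts for the other employee and employer characteristics.

```python
# Hypothetical counts (not GAO data): employees cross-classified by whether
# they reached the earnings maximum in a quarter and whether they separated.
reached = {"separated": 120, "stayed": 880}
not_reached = {"separated": 130, "stayed": 870}

odds_reached = reached["separated"] / reached["stayed"]
odds_not_reached = not_reached["separated"] / not_reached["stayed"]
odds_ratio = odds_reached / odds_not_reached  # about 0.91

# A ratio near 1 means reaching the maximum has little association with
# separation. In a fitted logistic regression, exponentiating a coefficient
# yields the corresponding adjusted odds ratio.
```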
An odds ratio is a measure of the relative risk of the occurrence of an event, in this case, separation from an employer. The reported odds ratios indicate the effect of a particular characteristic (e.g., reaching the maximum earnings) on the probability of separation, controlling for the effects of other characteristics included in the analysis. The estimate of the effect, represented by the odds ratio, is the net effect of the characteristic (i.e., net of the effects of all other characteristics). If the characteristic increases the probability of separation, the odds ratio will be greater than 1, and if it decreases the probability of separation, the odds ratio will be less than 1. This interpretation is slightly different when the characteristics are different categories. An example of such a "categorical" characteristic is membership in a target group, where the categories are welfare recipients, veterans, food stamp youth, and so on. In such cases, the analysis omits one of the categories (called the "reference group") and tests whether the included categories have a greater or lesser chance of separation relative to the omitted category. An odds ratio of greater than 1 indicates a greater probability of separation, while an odds ratio of less than 1 indicates a lesser probability of separation. Table 5 shows that reaching the maximum earnings has no statistically significant effect on the odds that employees will separate from their employers. The variable called "maximum earnings" indicates the quarter in which an employee's cumulative earnings are between $5,000 and $7,000. This interval includes $6,000 as its midpoint and indicates that reaching the maximum occurs in the quarter when the employee is within $1,000, more or less, of the maximum earnings eligible for the credit.
The odds ratio for this variable is not significantly different from 1, meaning that employees whose cumulative earnings are within $1,000 of the maximum in a quarter are no more likely to separate than employees whose earnings are outside this range. Table 5 also shows that reaching the maximum has no effect on the likelihood of separation across most target groups. For example, members of the welfare target group are no more likely to separate in the quarter in which they reach the maximum than are members of other target groups who reach the maximum. We also used the logistic regression model to analyze the effect of reaching the maximum earnings separately for each state. The separate analysis permitted more characteristics of the employees and employers to be included because data on characteristics were not always available for both states. We analyzed the likelihood of separation in each state using only the characteristics in table 5 and then expanded the analysis to include the additional characteristics available in each state. This analysis shows that the conclusion about the effect of reaching the maximum on separation does not change when additional characteristics are added to the model. When variables indicating the occupation of the employee are added to the analysis in California, reaching the maximum earnings continues to have no effect on separation. When variables indicating the employer's industry and size in terms of total employment are added to the analysis in Texas, reaching the maximum earnings is statistically significant, but employees reaching the maximum are still slightly less likely to separate. Specifically, they are 9 percent less likely to separate than are employees who do not reach the maximum earnings. In addition to those named above, Kerry Dunn, Tre Forlano, Wendy Ahmed, Sam Scrutchins, Stuart Kaufman, Barry Seltser, and Cheryl Peterson made key contributions to this report.
In 1997, 4,369 corporations earned a total of $135 million in Work Opportunity Tax Credits (WOTC). The employers who earned most of the credit were large companies with gross receipts exceeding $1 billion and engaged in nonfinancial services and retail trade. GAO's analysis of state agency data for California and Texas from 1997 through 1999 showed that 3 percent of participating employers accounted for 82 percent of all hires of WOTC-certified workers. Many employers who participated in the tax credit program in those two states in 1999 said that, besides the opportunity to obtain the credit, their participation in the program was also greatly influenced by such factors as the need to address a labor shortage and the opportunity to be a good corporate citizen. The results of GAO's two-state analysis indicate a low probability that employers replaced workers who were not eligible for the tax credit with WOTC-eligible hires.
The United States is in the process of implementing the largest drawdown of its military forces since the end of the Vietnam conflict. Both the Congress and the Department of Defense (DOD) have established various targets and objectives to guide that drawdown in a balanced manner, in keeping with the nature of the military personnel system. The military personnel system is highly structured, with specific legal and regulatory requirements governing career advancement and service continuation. Within the parameters of those regulatory requirements, there exist a number of decision points where force shaping, design, and reduction actions can be taken by the services. Given the magnitude of downsizing underway, the Congress has provided expanded force shaping authorities and tools to DOD to facilitate downsizing, minimize involuntary separations, and preserve a balanced remaining force. The National Defense Authorization Act for Fiscal Year 1991 (P.L. 101-510) authorized end-strength levels totaling 1.613 million active duty military personnel as of September 30, 1995. This end strength represents an overall reduction of 561,217 positions, or nearly 26 percent, from the post-Vietnam peak strength of 2.174 million positions at the end of fiscal year 1987. Likewise, the administration, in annual budget submissions during recent years, has submitted out-year reduction targets. The January 1993 budget submission by the previous administration projected continuing force reductions through fiscal year 1999, at which time the services' end strength would be 1.568 million positions. The current administration has accelerated some previously planned personnel reductions but for now has projected future reductions only through fiscal year 1994; its goal for the end of fiscal year 1994 is 1.621 million positions, within 8,000 positions of the levels that the Congress had specified for the end of the following year, fiscal year 1995.
Even so, DOD officials have indicated that further reductions will be made as the result of a recently completed "Bottom-up Review" of DOD needs and programs for fiscal years 1995-99; however, new reduction targets by fiscal year were not available as of early September 1993. The conferees to the fiscal year 1991 defense authorization act stated that "the conferees expect the Secretary of Defense to exercise prudent judgment in approving accession levels and force profiles by grade and years of service to guide the personnel strength reduction process in the military services as forces draw down over the next five years." The conferees also stated their expectation that the military services would maintain the same relationship between officer and enlisted strengths as existed at the end of fiscal year 1990 in making active duty end-strength reductions in the future. DOD has also articulated policy objectives to reduce the military forces, maintain a high state of readiness, treat people fairly—both those who stay and those who leave—and ensure that careful consideration is given to how today's decisions will affect tomorrow's force. Some objectives reaffirmed congressional guidance, while others were added, such as
• protecting careerists near retirement, that is, protecting all qualified service members with 15 years of service or more until they are eligible for retirement;
• establishing officer and enlisted accessions at levels necessary to sustain the future force;
• using the drawdown as an opportunity, wherever possible, to balance officer and enlisted skills; and
• establishing policies and procedures that are consistent with legislative guidelines for officer promotion opportunity and timing and, at the same time, controlling senior enlisted grade (top 5 grades) growth and maintaining promotion rates.
The services are unique in terms of how they recruit and retain personnel, as well as in how they manage military careers.
Within each service, the military personnel system is a centrally managed, "closed" system, meaning that persons recruited with no prior military service are generally brought in at entry-level positions and progress through the ranks, in contrast to an "open" system such as the private sector, where new hires can be brought into an organization at various levels depending on the persons' qualifications and experiences. The military essentially "grows its personnel from within." Further, the military personnel system, which is predicated on maintaining a relatively young and vigorous work force, operates under an "up-or-out" policy in which members who fail to receive promotions within specific time frames are limited in how long they can remain in the service. Many persons join the military for the education and training benefits, particularly in the enlisted ranks, expecting to remain in the service for only a few years. Others decide to make the military a career; for those individuals, service continuation involves a periodic renewal or extension of their contracted service time. Accordingly, the military services lose significant numbers of personnel through a variety of loss programs each year and therefore must recruit enough new members to replace losses and ensure that the services will have enough well-trained personnel to meet and sustain future years' seniority, grade, and experience requirements. DOD data indicate that it has not been unusual for the services to replace more than 15 percent of their active duty personnel each year, even when the authorized end-strength levels are at a relatively "steady state," neither increasing nor decreasing significantly. The services essentially have two personnel systems for active duty military personnel, one for officers and one for enlisted personnel.
The officer community is basically governed by the Defense Officer Personnel Management Act (DOPMA), while enlisted personnel management is governed by specific DOD regulations. DOPMA was enacted in 1980, establishing key parameters for managing the officer force with the intent of maintaining a continuous flow of officers through the military personnel system over a 20- to 30-year career path, based on normal attrition from voluntary resignations and retirements. The DOPMA legislation outlines standards and procedures relating to the appointment, promotion, separation, and retirement of officers in the armed forces. For example, it stipulates how long officers of various ranks may remain on active duty beyond normal retirement eligibility at 20 years. It also prescribes the number of officers each service is authorized in each of the senior ranks from the O-4 to the O-6 pay grades, a number that will vary depending on officer end strengths authorized by the Congress. Central to DOPMA is the up-or-out promotion system in which officers generally advance in groups or cohorts originally determined by the year of their commissioning and compete for promotion against other members of the group at set years or zones of consideration for each pay grade. For example, an officer commissioned in 1983 would normally be considered for promotion to pay grade O-4 in 1993, at the year 10 mark, along with other officers in that year group or cohort. Under the DOPMA system, a select group of the O-3 officers from a particular cohort can be considered for promotion "below the zone," that is, at year 9 (or earlier in selected instances) along with members from that year's cohort of officers. However, most of the 1983 commissioned officers would have their greatest potential for promotion "in the zone" at year 10.
Failing to be selected for promotion at year 10, the officers in this cohort could have an additional opportunity to be considered for promotion in the following year (or later in selected instances) “above the zone,” along with that year’s cohort. Failure to be selected for promotion then would mean these officers could be involuntarily separated. Thus, it is essential that the force at large be managed in such a manner that personnel are able to compete for, attain, and complete key assignments at the right points in their career progression. DOD Directive 1304.20, dated December 19, 1984, and DOD Instruction 1300.14, dated January 29, 1985, provide guidance and outline the basic parameters to enlisted force management. The directive sets constraints on the pay grade mix and career content (personnel with more than 4 to 6 years of service) of the enlisted force and establishes broad goals for elements such as recruitment, career progression and timing of promotions, service continuation, and military occupation specialty balance. The instruction requests the services to undertake specific enlisted force planning to incorporate long-range personnel goals. These plans provide the Office of the Secretary of Defense (OSD) with the means to monitor the services’ progress toward meeting the objectives of the enlisted personnel management system. Promotions for enlisted personnel at various pay grades are affected by time in grade or service requirements that vary by service, and may involve the use of selection boards. Promotions may also be affected by such things as tests, schools attended, evaluations (ratings), and awards. Theoretically, the services could simply increase or decrease recruiting, modifying it to take into consideration normal attrition and retirement levels, in order to increase or decrease authorized end-strength levels. 
However, given the structure and the nature of military career management programs, such actions, according to DOD officials, could create imbalances in the force in terms of age, skill levels, and right numbers of personnel by specialty area. Thus, constant management of the force at various career points is required to avoid these problems and stay within force management regulations. As previously discussed, the services must replace many personnel each year, even where no changes in authorized end strength are planned. Many personnel losses are voluntary and attributable to completion of initial periods of obligation, and decisions by service members over whether to continue and make the military a career choice. At the other end of the personnel pipeline, many voluntary losses each year are due to retirement decisions. Involuntary reductions can occur anywhere along this personnel pipeline due to misconduct. (App. II shows the services’ loss rates by specific categories for fiscal year 1992.) Unique to the military personnel system is a variety of military personnel requirements and actions, in addition to regular accessions and losses, that may be used to help shape the force, ensure balanced manning by rank and specialties, and preserve needed career advancement opportunities. 
These actions include
• using early release programs to permit individuals to separate in advance of their scheduled end of enlistment period;
• tightening quality controls, such as physical weight standards, governing those who will be permitted to reenlist;
• limiting the maximum number of years that members at a given rank may continue in the service before being denied reenlistment opportunities;
• selecting certain nonretirement-eligible personnel to be involuntarily separated through use of formal reduction-in-force (RIF) boards; and
• selecting certain retirement-eligible personnel to retire before the normal mandatory time frame through use of formal Selected Early Retirement Boards (SERB).
In completing action on the National Defense Authorization acts for fiscal years 1991, 1992, and 1993, the Congress authorized certain additional measures to induce downsizing and minimize the adverse effects on individuals as they transition to civilian life. These actions included
• expanding RIF authority to include certain officers previously exempt from RIF action related to the source of their commissions;
• expanding authority for use of SERBs for officers;
• reducing certain time-in-grade requirements for voluntary retirements at current grades among officers having already completed the 20 years of total service needed to retire;
• extending lump-sum separation pay and transition assistance to enlisted personnel who are involuntarily separated after completing 6 or more years of service (only officers were previously eligible to receive this benefit);
• authorizing two special categories of separation pay, the lump-sum Special Separation Benefit (SSB) and the Voluntary Separation Incentive (VSI), to induce voluntary separations among those having completed 6 or more years of service at the time the legislation was enacted; and
• providing, effective with fiscal year 1993, DOD with the authority to offer a
15-year retirement option for selected members of the military. Persons separating under SSB and VSI provisions are also entitled to the same transition assistance programs available to persons receiving pay under condition of involuntary separation. The Chairman of the former Subcommittee on Manpower and Personnel of the Senate Committee on Armed Services asked us to examine DOD's implementation of military downsizing in accordance with legislative guidance and authorizations. We determined (1) what progress DOD has made toward meeting reduction targets, (2) how downsizing actions are affecting new recruiting or accessions, (3) what range of voluntary and involuntary reduction actions are being taken to meet downsizing objectives, (4) how downsizing is being accomplished across various groupings of officer and enlisted personnel by years of service and how this is affecting force profiles, and (5) what issues might be important to future reduction decisions. We reviewed congressional legislation, budget documents, manpower statistical data, and individual service personnel plans. We also interviewed appropriate officials at the following organizations:
• Office of the Deputy Assistant Secretary of Defense, Personnel and Readiness, Military and Manpower Personnel Policy Directorate, and Personnel Support Policy and Service Directorate;
• Office of the DOD Comptroller;
• Department of the Air Force, Deputy Chief of Staff for Personnel, Directorate of Personnel Programs;
• Department of the Army, Deputy Chief of Staff for Personnel, Directorate;
• Department of the Navy, Bureau of Naval Personnel, Officer Plans and Career Management Division, and Enlisted Plans and Career Management Division; and
• Department of the Navy, Office of the Deputy Chief of Staff for Manpower and Reserve Affairs (Marine Corps), Manpower Plans and Policies Division.
In completing this review, we made use of manpower and personnel data from budget documents and other data sets from the Office of the Secretary of Defense (OSD), the individual services, and the Defense Manpower Data Center (DMDC). In portraying historical data, fiscal year 1980 was chosen because it reflected the beginning point for the defense build-up of the 1980s; fiscal year 1987 was chosen because it reflected the post-Vietnam peak year of active duty end-strength levels just prior to the onset of recent downsizing activities. Our comparison of data showed inconsistencies in common data sets among the services, OSD, and DMDC. We found inconsistencies within individual documents, such as budget justification documents submitted to the Congress, where the same information was presented in more than one section of the document. These inconsistencies were more prevalent when the data were related to future personnel actions, but also affected DOD reports summarizing prior year actions to some extent. Time did not permit a detailed examination of the services' and DOD's data systems to fully document the validity of the data and the bases for inconsistencies. Where discrepancies were noted, we conferred with DOD officials for their judgments as to the most appropriate data source. We attempted to use the best available information in all cases, recognizing that numbers used in this report may vary slightly from other reported sources. Thus, data presented in this report dealing with accessions and losses should be viewed as approximations, not final and absolute numbers. We have added special notes to tables used in this report to further highlight data limitations as warranted. We conducted our review from April 1992 to July 1993 in accordance with generally accepted government auditing standards.
By the end of fiscal year 1993, DOD expects to have reduced its active duty force levels more than 446,000 positions below end-strength levels that existed at the end of fiscal year 1987, when the military was at its post-Vietnam peak of 2.174 million positions. At the same time, and possibly contrary to public perception, DOD is, and expects to continue, recruiting a large number of personnel each year in order to maintain and sustain a balanced force to meet future operational requirements. As a consequence, a much greater degree of personnel turnover is occurring than is generally recognized. Given the extent of personnel turbulence involved in downsizing and shaping the force, and uncertainties over future force levels, service officials indicate that flexibility and periodic reassessment are required in managing accession levels. At the end of fiscal year 1993, DOD end-strength levels are expected to be down to 1.728 million positions, a 21-percent reduction from fiscal year 1987's end-strength levels; by the end of fiscal year 1994, DOD expects end-strength levels to be at 1.621 million, a reduction of 25 percent. Table 2.1 shows the reductions accomplished and expected by the services through fiscal year 1994. The largest of the reductions are occurring in the Army and the Air Force, which, by the end of fiscal year 1994, are slated to reduce their levels by 31 and 30 percent, respectively, from fiscal year 1987 end-strength levels; lesser reductions are occurring in the Navy and the Marine Corps, which are to reduce their end strengths by 18 and 13 percent, respectively, by the end of fiscal year 1994. Selected Reserve force levels will be reduced from 1.15 million in fiscal year 1987 to 1.02 million at the end of fiscal year 1994—an 11-percent reduction. (App. I more completely summarizes current planned changes in DOD force levels by service and active and reserve components from fiscal years 1980 through 1994.)
However, as previously discussed, reductions are planned beyond fiscal year 1994, although complete out-year reduction targets have not yet been finalized by the administration. DOD's recruiting plans for fiscal years 1993 and 1994 call for recruiting about 225,000 and 206,000 persons, respectively. Table 2.2 contrasts the levels of accessions in fiscal years 1987 and 1988 with actual accessions in fiscal year 1992 and those projected for fiscal years 1993 and 1994. DOD officials have reported greater difficulties in recruiting during this period of downsizing; some service officials have attributed these difficulties to a perception of some young people that the military is not recruiting during downsizing or a view that the military no longer affords as viable a career option as it once did. DOD officials also cite some early signs of reduced quality in recent recruits. In spring 1993, DOD officials reported that the percentage of new personnel with high school diplomas dropped DOD-wide during the first half of fiscal year 1993 to 94 percent, from a high of 99 percent in 1992; for the Army, the reduction was greater, down to 89 percent. However, an Army official expressed optimism that the rate could rise closer to the Army's target of 95 percent during the remainder of the year, the prime time for recruiting new high school graduates. DOD officials subsequently reported in September 1993 that the percentage of new military personnel DOD-wide with diplomas during the first 9 months of fiscal year 1993 was 95 percent. While this level is still lower than the 99-percent level achieved in fiscal year 1992, it compares favorably with the 89-percent average achieved for the period 1980 through 1992. The Army's current accession levels shown for fiscal year 1993 reflect a reduction of 7,000 from planned levels at the beginning of the fiscal year.
However, Army officials told us that the decreased accession levels were related primarily to faster-paced reductions than originally planned, which resulted in fewer accessions being needed, and were not related to any decrease in quality of recruits. Projected accession levels for fiscal years 1993 and 1994 might be considered to be relatively high for a time of such significant downsizing; however, total accessions planned for fiscal year 1993 are down 34 percent in relation to fiscal year 1987's level. Unless the services' actual loss rates each year are closely examined, one might conclude that the number of persons leaving the services during these years of force downsizing is simply the net change in service end strength. Our March 25, 1992, testimony before a Subcommittee of the Senate Armed Services Committee noted that shaping the force requires a larger number of accessions and attritions than would be the case if the focus were primarily on limiting the recruitment of personnel to achieve end-strength reduction goals. Total numbers of persons leaving the services during fiscal years 1992 and 1993 were projected to be about 3 times the amount of net reductions in end strength. For fiscal year 1994, DOD expects to recruit about 206,000 new personnel and to lose about 313,000, for a net end-strength reduction of about 107,000 positions. Using beginning fiscal year 1994 end-strength levels as a base, DOD expects, during that fiscal year, to separate 19 percent of its people, recruit 13 percent new personnel, and achieve a 6-percent net end-strength reduction by the end of that year. The numbers of personnel entering and leaving the services during this period of downsizing, compounded by uncertainties and heightened anxiety levels over future careers of others who remain, add to what many military officials have characterized as a high degree of turbulence affecting their forces.
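The fiscal year 1994 turnover arithmetic can be checked directly from the figures cited above. The base used here is the 1.728 million end-of-fiscal-year-1993 strength, so the computed rates differ slightly from the report's rounded 19-, 13-, and 6-percent figures, which depend on the exact base DOD used.

```python
base = 1_728_000        # approximate strength entering fiscal year 1994
separations = 313_000   # expected departures during the year
accessions = 206_000    # planned recruits during the year

net_change = accessions - separations            # -107,000 positions
separation_rate = 100 * separations / base       # roughly 18 percent
accession_rate = 100 * accessions / base         # roughly 12 percent
net_rate = 100 * net_change / base               # roughly -6 percent
```

The point the surrounding text makes is visible in the numbers: gross flows in each direction are about three times the net reduction.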
While downsizing actions have of necessity resulted in significant personnel turnover levels and turbulence, they aggravate, to some extent, already significant turnover levels associated with the services' systems of rotating personnel from one assignment, unit, and location to another, including between the United States and overseas assignments. Our previous reports on Army and Marine Corps training have noted that senior leaders have described this personnel turbulence as one of the most significant problems affecting the Army's ability to maintain a trained force. The Congress and DOD have provided guidance to the services in establishing accession levels. Some DOD guidance suggests a greater degree of precision is available in establishing fixed accession levels than the services have found practical during downsizing. However, with congressional authorizations for use of financial separation incentives to induce voluntary attritions, DOD encouraged the services to retain relatively high levels of accessions. It did so in order to build and sustain a more balanced force for the future, minimize the potential for skill imbalances and promotion stagnation, and protect career options for remaining personnel. DOD's action also reflects budgetary decisions to increase the ratio of entry-level to more senior personnel in the career force to reduce the cost of the force. The conferees to the fiscal year 1991 defense authorization act expressed an expectation that the Secretary of Defense would exercise prudent judgment in approving accession levels. OSD established the objective of programming accessions to a level not greater than that required to sustain out-year force levels. It also stated that accessions should not be programmed to less than 85 percent of the level required to sustain the out-year force levels.
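OSD's programming guidance above implies a simple sustainment calculation: accessions at most equal to, and at least 85 percent of, the level needed to replace steady-state losses. The force size and loss rate below are hypothetical (the report notes only that annual replacement has often exceeded 15 percent).

```python
target_force = 1_600_000   # hypothetical out-year end strength
annual_loss_rate = 0.15    # hypothetical steady-state loss rate

# Accessions needed per year just to replace losses at steady state.
sustainment_accessions = int(target_force * annual_loss_rate)   # 240,000

# OSD's stated floor: no less than 85 percent of the sustainment level.
accession_floor = int(sustainment_accessions * 0.85)            # 204,000
```

In practice, as the next paragraph explains, the loss rate itself is a forecast that must be revisited as drawdown conditions change, so these figures would be recomputed frequently.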
In establishing accession rates to sustain a future force, the services must examine historical trends in attrition and replacement rates, factor in the probabilities of how long individuals are likely to remain on active duty, and apply a mathematical formula to calculate replacement or accession rates per year. Service officials point out that such forecasting can be very imprecise without stable long-term force levels and retention patterns, two factors that have not been present during the current downsizing. Accordingly, service officials indicate that accession levels need to be reassessed frequently, based on changing conditions, and adjusted to replace losses and meet end-strength requirements. One such adjustment has already been alluded to regarding the change in fiscal years 1993 and 1994 accession levels due to the more rapid drawdown than previously planned. The services have used a number of authorities or "tools" to reduce and shape their forces by various year groupings of officer and enlisted personnel. These tools have ranged from voluntary early release programs and financial separation incentives that induce separations to involuntary RIFs. The services vary in the extent to which they have experienced losses under each of these categories. (See app. II for a summary of losses occurring during fiscal year 1992 by service.) To achieve reductions in a balanced manner across pay grades and skill areas, the reductions have of necessity fallen along a spectrum ranging from voluntary, to induced, to involuntary reductions—the most adverse being RIFs. DOD has initiated a number of actions to facilitate voluntary separations for persons wanting to leave. The more significant actions have involved the use of financial separation incentives and early release programs; additionally, beginning with this fiscal year, DOD has authority to offer early retirement to personnel with at least 15 years of service.
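The steady-state calculation described above can be sketched in simplified form: expected career length is the sum of cumulative year-to-year retention probabilities, and the accessions needed to sustain a fixed end strength are that end strength divided by expected career length. The retention rates below are illustrative placeholders, not DOD data, and this textbook approximation is not the services' actual model.

```python
# Hedged sketch of a steady-state accession calculation. Continuation
# (year-to-year retention) rates are illustrative, not DOD figures.
def sustaining_accessions(end_strength, continuation_rates):
    """Annual accessions needed to sustain a fixed end strength.

    Expected career length = sum of cumulative survival probabilities,
    so accessions = end_strength / expected_career_length.
    """
    survival = 1.0
    expected_years = 0.0
    for rate in continuation_rates:
        expected_years += survival  # members still serving at start of year
        survival *= rate            # fraction continuing into the next year
    return end_strength / expected_years

# Example: a notional 1.6-million force with made-up retention by year
# (heavy first-term attrition, a reenlistment decision point at year 4).
rates = [0.85, 0.90, 0.92, 0.60] + [0.95] * 16  # hypothetical
print(round(sustaining_accessions(1_600_000, rates)))  # on the order of 200,000
```

As the report notes, the result is highly sensitive to the retention pattern, which is why the services must reassess accession levels frequently when retention behavior is unstable.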
The most publicly visible tools the services have used to achieve downsizing objectives have centered on the use of financial separation incentives—SSB, a lump-sum separation incentive, and VSI, a variable annuity payment. Both have been used to induce separations for persons whose eligibility is based on having completed between 6 and 20 years of service at the time the legislation became effective in December 1991. However, DOD data on fiscal year 1992 separations show that the median length of service for those persons separating under SSB and VSI programs was 11 years. Table 3.1 summarizes the number of persons by service separating under SSB and VSI during fiscal years 1992 and 1993 and summarizes projected SSB and VSI separations in fiscal year 1994. Our March 1992 testimony on military force downsizing pointed out the much greater long-term value of VSI to the recipient but also indicated that initial trends showed an overwhelming majority of persons opting for SSB, whose total value is smaller but which is paid as a single lump sum; our analysis shows that this trend continues. Of those opting for the financial incentive separations in fiscal year 1992, 87 percent chose SSB. DOD officials expect this trend to continue. (App. III contrasts the differing values of the separation incentive pay options with involuntary separation pay and indicates the costs to DOD for fiscal year 1992.) Of all the services, the Army has made the greatest use of the separation incentives, reflecting the fact that it has the largest reductions of any service, both in numbers and in percentages. However, each of the services has used the incentives in varying degrees to achieve downsizing objectives as well as to shape its officer and enlisted forces. Army officials explained that the Army has offered the incentives to all eligible officers, except those in the medical skills area.
In some instances, the Army has offered the incentives to officers in certain year groups; this was predicated on the knowledge that if sufficient numbers of persons did not elect to accept the incentives, a RIF would be required. During fiscal year 1993, the Army targeted officers in pay grade O-3 who were in year groups 1983 and 1984; in fiscal year 1994, it expects to target O-3s in year group 1985. The Army will also offer the incentives to officers who fail their first board consideration for promotion to O-4 during the respective fiscal years. The Navy did not begin using the incentives for its officers until fiscal year 1993, when it offered them to various officer communities within pay grades O-3 and O-4. In fiscal year 1994, the Navy plans to offer the incentives to various officer communities in pay grades O-3 through O-5. The Marine Corps originally targeted officers in overstrength specialties in pay grade O-4 to improve promotion flows. In fiscal years 1993 and 1994, it has offered and expects to continue offering incentives to persons in the O-3 pay grade as well. The Air Force has made offers to most of its O-3s and O-4s, except pilots. In fiscal year 1994, the Air Force is planning to target its O-4s, including pilots. In terms of enlisted personnel, all of the services are using the incentives to help reduce personnel in overstrength skill areas and, in some cases, in other areas. The Army has offered the incentives to enlisted members in overstrength skills and to members who are subject to separation under changing policies limiting how long personnel can remain on active duty without being promoted. The Navy originally used the incentives to target mid-grade (pay grades E-5 and E-6) members in overstrength specialties, and in fiscal year 1994, it is expanding the eligibility to target overstrength specialties in pay grades E-4 through E-9.
The Marine Corps has used the incentives to target enlisted members who were in skill areas no longer needed and to reduce overstrength areas. The Air Force initially offered the incentives primarily to mid-grade (pay grades E-4 and E-5) personnel who had more than 9 years of service and were in less critical skill areas, but for fiscal year 1994, it plans to target personnel in pay grade E-5. It also plans to open up the eligibility criteria incrementally, as necessary. The services' use of these incentives accounted for 13 percent of overall service separations in fiscal year 1992, and their use is expected to account for 9 percent and 6 percent, respectively, in fiscal years 1993 and 1994. According to OSD, the decreasing numbers in fiscal years 1993 and 1994 reflect the fact that most persons interested in the incentives have already taken them and that the base of eligible personnel is shrinking: because a service member must have had between 6 and 20 years of service as of December 5, 1991, when the legislation became law, the population of potentially eligible personnel decreases each year. Additionally, this legislation only authorizes the use of these incentives through fiscal year 1995. As of now, the Navy is projecting that its O-3 officer population in the 1987 year group is in excess of requirements. Regardless of any additional reductions in authorized end-strength levels resulting from the recent Bottom-up Review, the 1987 year group contains approximately 2,000 more officers than DOPMA O-4 limitations will allow to be promoted without causing an imbalance. This means that around fiscal year 1996 the Navy will be faced with either implementing RIF actions or seeking temporary legislative waivers to DOPMA grade tables to permit the promotion of a greater number of personnel than would otherwise occur.
An extension of the authority for the incentives and of the window of eligibility would offer another option for dealing with this situation, an option already used by the Army in fiscal years 1992 and 1993. Under early release programs, the services permit individuals to separate in advance of their scheduled end of enlistment period; the early releases generally occur in the same year that personnel are scheduled to separate, but there have been instances where they have occurred earlier. They are used most often to separate persons during their first term of enlistment, generally those who fall within 2 to 6 years of service. These programs have been used in recent years by all of the services, except for the Marine Corps (see table 3.2). Service officials report that they have had to make less use of the programs in fiscal years 1992 and 1993 than in earlier years, before the availability of financial separation incentives. However, service officials indicate that this is still an important tool to use as needed to help reduce and shape that portion of the force having less than 6 years of service; it can be particularly attractive from a cost-saving standpoint, since it can help reduce salary costs in the year in which it is used and does not involve any severance pay. In the National Defense Authorization Act for Fiscal Year 1993, the Congress authorized DOD to temporarily approve the retirement of military personnel with at least 15, but less than 20, years of service to further facilitate downsizing and avoid involuntary separations. DOD's implementing guidance gives each of the services authority to prescribe criteria for eligibility for early retirement, including such factors as grade, years of service, and skill area.
The guidance stipulates that the authority should be used to retire members who are excess to the services' short- and long-term needs and that, if possible, the services should manage their programs so those members nearest 20 years of service are offered early retirement first. Authorizing legislation stipulates that retirement pay for those personnel leaving under this program will reflect a reduction of 1 percent for each year short of 20 years. Table 3.3 summarizes the planned use of this authority by each of the services. Each of the services indicates that it is still in the process of determining how extensively it will use this authority. However, this authority is generally being used to target overstrength skill areas and year groups, starting with those close to 20 years of service. The Air Force and the Army have each tentatively targeted the majority use of the early retirement authority toward their enlisted ranks, while the Navy is targeting primarily its officer force. The Marine Corps plans to use this authority only during fiscal year 1993 to separate 100 officers. While the various force management tools have been helpful in minimizing the need for involuntary separations, the services have still had to separate some personnel having less than 20 years of service under less than voluntary means, such as through formal RIF actions, to meet downsizing and force shaping objectives. Additional reductions have occurred through the use of SERBs to reduce the ranks of retirement-eligible personnel. Other controls over service continuation have also been used to further reduce the force. In fiscal year 1992, the Army was the only service to separate personnel, those at the O-4 level, through formal RIF actions; this included 244 officers. Early in fiscal year 1993, the Air Force separated 1,595 officers by RIF actions, principally affecting O-3 level officers.
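The 1-percent-per-year early-retirement reduction described above can be sketched in a few lines. The standard 2.5-percent-per-year retired-pay multiplier, and the multiplicative way we combine it with the reduction, are our assumptions for illustration rather than details given in the report.

```python
# Sketch of the early-retirement reduction described above: retired pay is
# reduced by 1 percent for each year of service short of 20. The standard
# 2.5-percent-per-year retired-pay multiplier is an assumption on our part,
# not a figure stated in the report.
def early_retired_pay(base_pay, years_of_service):
    multiplier = 0.025 * years_of_service             # assumed standard formula
    reduction = 0.01 * max(0, 20 - years_of_service)  # stated 1%/year penalty
    return base_pay * multiplier * (1 - reduction)

# A member retiring at 15 years takes a 5-percent cut on top of the
# smaller multiplier.
print(early_retired_pay(3000, 15))  # about 1068.75 under these assumptions
```

Under this reading, the penalty is largest for members farthest from 20 years, which is consistent with the guidance that members nearest 20 years be offered early retirement first.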
The Army also anticipated using RIF actions in fiscal year 1993 but obtained enough voluntary attritions from repeated VSI and SSB offers among O-3 officers in year groups 1983 and 1984 to make such actions unnecessary. None of the services has used RIF procedures to date to separate enlisted personnel. However, the services can deny reenlistment to enlisted personnel without the need for RIF action. Each of the services has used available SERB authority to reduce the population of officers and enlisted personnel already eligible to retire. Although a SERB is not technically considered an involuntary separation, it is regarded as essentially that by the persons selected to retire under this formal board selection procedure. In fiscal year 1992, a total of 3,429 officer and enlisted personnel were formally selected for retirement under SERB procedures (see table 3.4), and the services project similar numbers in fiscal years 1993 and 1994 (see tables 3.5 and 3.6). The services have used other force shaping tools to facilitate downsizing, including tightening quality control standards for those persons who are allowed to continue in military service and limiting the maximum number of years that a member may serve at a given pay grade before being denied reenlistment rights. Each of the services has also tightened quality standards affecting the ability of military personnel to continue their careers at the conclusion of their initial periods of obligated service. Actions taken have included shifting from decentralized to centralized approval authority over reenlistments. In addition, greater attention is being given to physical fitness and weight standards and substance abuse. Service officials told us that tightened standards are more apt to affect enlisted personnel in their first enlistments than career personnel. They indicated that the number of persons in their first enlistment permitted to reenlist is also affected by a reduced need for certain skills.
Numbers of persons affected by these controls are included within the tabulations of all persons leaving the services each year at the end of their term of service or contract and are not broken out separately (see app. II). Each of the services has reduced the time frames or tenure standards governing how long career enlisted members may stay at a given pay grade before being denied reenlistment rights. • The Navy reduced the time that persons in enlisted grades E-6, E-7, and E-8 could serve without further promotion from 23 to 20, 26 to 24, and 28 to 26 years, respectively. • The Marine Corps changed its tenure rule for only one pay grade, E-7, reducing service time from 25 to 22 years. • The Air Force made changes in tenure rules for pay grades E-4 and E-6 through E-8; the most significant change was to reduce the authorized tenure for an E-4 from 20 to 10 years. • The Army has changed tenure rules for those in pay grades E-4 through E-8. Service officials indicate that these tightened standards have been responsible for facilitating the early departure of a number of personnel; however, precise numbers are not available. Some of these persons retired, whereas others not eligible for retirement separated under financial separation incentive programs. DOD's use of its various force shaping tools has helped to control, though not entirely contain, personnel growth in force profiles such as years of service and pay grades. These changing force profiles are part of a longer term trend dating to before the onset of downsizing efforts in the late 1980s. Table 4.1 aggregates these changing profiles at the DOD level for fiscal years 1980, 1987, and 1992. There are positive and negative aspects of such changes in force profiles affecting each of the services. In recent years, the military services have been widely recognized and cited by senior military leaders as being more capable and better trained than ever.
One of the factors contributing to this, according to various military officials, has been the growth in experience levels of military personnel, gauged by length of service. This situation exists across various groupings of personnel by years of service, ranging from those in their initial tours of duty to those eligible for retirement. Since fiscal year 1980, each of the services, to varying degrees, has experienced continual growth in experience levels, particularly for enlisted personnel. This trend is most pronounced in terms of the decline in percentages of pre-career personnel. Table 4.2 shows the decline in enlisted personnel having less than 4 years of service for each of the services at the end of fiscal years 1980, 1987, and 1992. Similar though less dramatic changes have also occurred within the officer ranks of each of the services. Growth in service experience levels is seen in proportional increases in numbers of service members having more than 15, but less than 20, years of service. Table 4.3 shows the percentage of enlisted personnel having between 15 and 20 years of service for each of the services at the end of fiscal years 1980, 1987, and 1992. Increases have also been noted in the percentage of officer personnel with more than 15 years of service. Table 4.4 shows the percentage of officer personnel having between 15 and 20 years of service for each of the services at the end of fiscal years 1980, 1987, and 1992. As indicated in tables 4.3 and 4.4, the percentage of members near retirement has risen in all four services over the past 12 years. One reason for part of the increase is related to an overall trend toward a more experienced force since the all-volunteer force began in 1973. Until 1988, according to DOD officials, service members with more than 15 years of service had entered the military prior to the all-volunteer force, under the draft system that supplied a significant portion of the required accessions.
Another reason is that, in recent years, the Congress and DOD have emphasized protecting members near retirement from involuntary separations. The percentages could stabilize or decline somewhat within the next 2 years as the services make use of new temporary authority to offer early retirement to personnel having completed between 15 and 20 years of service. All of the services, however, indicate that they are still in the process of determining how extensively they will use this new authority. While this move toward a more senior force has occurred in each of the services, the "youth to experience" mix has always differed significantly among the services. As high as the percentages of less experienced personnel remain in fiscal year 1992, they are much lower than they were at the end of fiscal year 1980. The high number of personnel with relatively few years of experience points to relatively high personnel turnover rates within the services. This is an important addition to the more frequently recognized turbulence involving the rotation of personnel from one unit or assignment to another, including to and from overseas assignments. Tradeoffs exist between a force that is more senior and a force that is more junior and relatively less experienced. Generally, service officials believe that the trend indicated by table 4.1 is one indicator of a better trained force than the one that existed prior to the all-volunteer force or during the mid- to late-1970s, a time often referred to as the period of the "hollow force." OSD officials also share this view. However, they are also aware that, along with this growth in experience, comes an increase in average personnel costs, and OSD has, during this drawdown, pressed the services to maintain relatively high accession levels, thereby slowing these recent trends.
For example, while the relative size of the lower ranking enlisted population dropped steadily during the 1980s, that level of decline has slowed in recent years with the DOD average changing from 43.1 percent at the end of fiscal year 1991 to 42.7 percent at the end of fiscal year 1992. Since fiscal year 1980, the percentage of members with more than 20 years of service has increased moderately, more so for officer than enlisted personnel. Tables 4.5 and 4.6 show the percentages of enlisted personnel and officers having more than 20 years of service, for each of the services, at the end of fiscal years 1980, 1987, and 1992. While the percentage of enlisted personnel with more than 20 years of service increased slightly DOD-wide, the average increase for officers was more noticeable from fiscal year 1980 through fiscal year 1992, with the greatest increase occurring between fiscal years 1985 and 1988. However, to a certain extent, each of the services minimized the potential for greater growth during fiscal year 1992 by the use of SERBs, and each service expects to do so further in coming fiscal years, with the greatest emphasis on officer personnel. Congressional conferees, in providing force reduction guidance, indicated a desire to see the services maintain the same relationship between officer and enlisted ranks as existed at the end of fiscal year 1990. However, as shown in table 4.7, some moderate growth in the ratio of officers to enlisted personnel has occurred. The changes occurring since fiscal year 1990 are reflected in a longer term trend as indicated by the change between fiscal years 1980 and 1992. Some service officials believe that some shift in officer to enlisted ratios is inherent in downsizing. That is, officer to enlisted ratios typically rise during downsizing as reductions are made to the proportionally larger enlisted ranks and fall during a build-up as proportionately more enlisted personnel are added. 
Army officials also point to congressionally mandated increases in medical personnel manning levels that have required the retention of more officer personnel. Changes have also been noted in the average pay grades of officer and enlisted personnel during downsizing, as part of a longer term trend evidenced by changes since fiscal year 1980. Table 4.8 shows changes in average enlisted pay grades for fiscal years 1980, 1987, and 1992. Table 4.8 and additional data we examined for each of the intervening years show an upward trend in average enlisted pay grades, with a 9.98-percent increase DOD-wide since fiscal year 1980 and little difference between the annual changes during the current downsizing period and those of earlier years. In comparison with enlisted personnel pay grades for these three fiscal years, a less significant change occurred in officer pay grades (see table 4.9). However, as with enlisted personnel, officer pay grades show little difference in the degree of annual change during the current downsizing period from earlier years. DOD's guidance to the services in planning for downsizing stipulated that they should control growth in the top five enlisted pay grades. Table 4.10 shows the percentages of enlisted personnel in the top five enlisted pay grades (E-5 through E-9) for fiscal years 1980, 1987, and 1992. The table shows that these top five pay grades have increased by several percentage points relative to the overall enlisted force during downsizing; however, this growth is also part of a longer term growth trend. Our examination indicates that the greatest degree of growth is concentrated in pay grades E-6 and E-7. OSD and service officials offered a number of observations about the growth in pay grades. They stated that changing technology and the complexity of today's weapon systems have increased the requirement for more senior positions.
These officials also stated that a smaller force is a more senior force; however, we do not necessarily agree that this has to be the case, since our examination of the top officer pay grades shows a much smaller increase in senior pay grades. These officials further expressed the view that the current growth in senior enlisted ranks is temporary and that they expect some leveling off in the future with the changes in tenure rules and the use of SERBs and early retirements; they also stated that OSD and service controllers are watching the increase because of budgetary concerns. It is not clear to us that this is necessarily a temporary situation, given the long-term trend shown in table 4.10. We do agree, however, that this is a situation that should be watched from a budgetary standpoint. Although DOD has already achieved much of its previously planned force reductions, additional reductions are planned. However, as indicated in chapter 2, effective downsizing requires more than the curtailment of recruiting; it requires maintaining significant levels of recruiting and preserving a continuous personnel stream. This situation adds to the importance of using the various force shaping and downsizing tools discussed in chapter 3 to achieve downsizing goals in a manner that retains a balanced force and viable career opportunities for the future. Use of these tools has helped to shape and minimize distortions in the force, but, as indicated in chapter 4, there are some continuing changes in selected force profiles that are part of much longer term trends. Collectively, the matters discussed in this report help to focus attention on some issues we believe are key to future force reduction decision-making. These issues include continuing changes in overall force profiles, the pace of future reductions, correlating such reductions to changes in force structure, and determining what accession levels should be, factoring in multiple trade-offs.
However, the recent drawdown experience does more to offer perspectives on these issues than to suggest simple, fixed answers for the future. Optimum force profiles in terms of experience levels, average pay grades, and officer/enlisted ratios are not clearly established and would likely be difficult to develop with any degree of uniformity among the services. Nevertheless, because of their impact on readiness and costs, we believe that general trends in force profiles should be considered by the administration and the Congress in deciding future force levels. At the same time, we believe that areas such as officer/enlisted ratios and increasing average pay grades could benefit from more in-depth analyses to determine to what extent the growth represents validated requirements and how future downsizing decisions are apt to affect these requirements. We expect to complete more in-depth reviews of personnel requirements in the future. In considering how quickly further reductions should occur, it will be important to recognize that DOD's military personnel system experiences significant turbulence, with high levels of personnel turnover, during normal times and that this turbulence can be compounded by required reductions to end strength. The turbulence stems not simply from the exodus of personnel but also from the intake of new personnel who must be trained and added to units. Turbulence is also associated with the normal rotation of personnel from one assignment or unit to another. Another consideration is the impact of personnel reductions on unit manning levels. Army officials acknowledge that, during the current drawdown, personnel reductions have occurred more quickly than have changes in unit force structure, creating some undesired undermanning of units.
While they expect this problem to correct itself as future unit structure reductions occur, they recognize and have some concern that the problem could be exacerbated by likely increases in personnel reduction targets unless the targets are well correlated with additional structural reductions. In determining accession levels, it will be important to consider the short- and long-term potential impact of any significant reduction in accession levels on the force. In the short term, significant reductions in accession levels can be attained, but they result in a relatively more costly, though more experienced, force as the average pay grades of the remaining force increase. From a long-term perspective, significant curtailment of accessions today poses greater potential for force imbalances in the future: imbalances associated with a higher graded force and reduced promotion and career opportunities for younger members of the force. One additional factor that could affect accession levels is the quality of new recruits. DOD has recently noted a drop in the quality of new recruits during the first half of fiscal year 1993. This trend, should it continue, could suggest the need for further trade-offs between high accession levels and the loss of quality, experienced personnel. DOD has a number of tools available to help shape the force as reductions occur, tools that can be used in combination to produce a balanced force across pay grades, year groups, and skill areas. DOD has accomplished a majority of its previously planned active duty force reductions; however, senior DOD officials have indicated that further reductions are planned as the result of a recently completed review of future DOD needs. DOD and the services have reduced accession levels over previous years but are still recruiting large numbers of personnel each year as part of their efforts to preserve and sustain a balanced force for the future.
DOD has used various force shaping authorities or tools to help draw down the military forces in a balanced manner, with an emphasis on voluntary reductions. DOD's ability to meet force reduction targets while seeking to retain a balanced remaining force has been enhanced by the various authorities, including incentives and transition assistance, that the Congress has provided. Many of these special authorities expire at the end of fiscal year 1995. Extension of these authorities is warranted if the Congress desires to continue its emphasis on minimizing involuntary reductions as end-strength reductions continue to occur. DOD's approach to force reductions, along with its continued emphasis on accessions, has helped to control, although not entirely contain, some proportional personnel growth in years of service and rank. A negative consequence of such growth is increased cost; a positive aspect, however, is increased experience levels. These two issues will require trade-offs in force shaping. Given that DOD's downsizing is apt to continue for several more years, the Congress, to the extent it desires a continuing emphasis on minimizing involuntary separations, may want to consider extending the use of special financial separation incentive programs beyond the current deadline of fiscal year 1995. Also, the original authority for use of the separation incentives was limited to those who had attained 6 years of service at the time the legislation was enacted in December 1991. Therefore, the Congress may also want to amend the legislation to include all who attain 6 years of service within the time frame for which the incentives are authorized, as a means of broadening the pool of persons eligible for the incentives and minimizing the potential for greater involuntary separations in the future. DOD fully concurred with the report's findings and matters for congressional consideration. (See app. IV.)
Pursuant to a congressional request, GAO reviewed the Department of Defense's (DOD) compliance with congressional guidance and authorizations in military downsizing, focusing on: (1) DOD progress towards meeting reduction targets; (2) the effects of downsizing on new recruiting or accessions; (3) voluntary and involuntary reduction actions to meet downsizing objectives; (4) the accomplishment of downsizing across various groups of officers and enlisted personnel by years of service and its effect on force profiles; and (5) issues important to future reduction decisions. GAO found that: (1) DOD has accomplished the majority of its planned active duty force reductions, and its fiscal year (FY) 1993 personnel level will be about 21 percent below its FY 1987 personnel level; (2) as a result of the DOD bottom-up review, further reductions below planned FY 1995 levels are probable; (3) DOD and the services have reduced planned personnel accession levels by 34 percent, but they are continuing to recruit large numbers of personnel in order to sustain a balanced force across various pay grades and skill areas and preserve future career opportunities and military capabilities; (4) personnel turnover rates are higher than force reduction rates due to uncertainties, career anxiety, and force-shaping decisions; (5) DOD has given priority to voluntary separations through early release, retirement, and financial incentives, but the pool of likely candidates for voluntary separation is declining; (6) involuntary reduction actions involve higher retention standards, mandatory retirement of selected personnel, and reduction-in-force for nonretirement-eligible personnel; (7) the ratio of officers to enlisted personnel and average pay grades have increased slightly during downsizing; and (8) issues that could impact future force reduction decisions include future force profiles, how quickly the force can be reduced, what accession levels should be, and what cost trade-offs are most 
desirable between a younger or a more experienced force.
Medicare, authorized in 1965 under Title XVIII of the Social Security Act, is a federal health insurance program providing coverage to individuals 65 years of age and older and to many of the nation’s disabled. HCFA uses about 70 claims-processing contractors, called intermediaries and carriers, to administer the Medicare program. Intermediaries primarily handle part A claims (those submitted by hospitals, skilled nursing facilities, hospices, and home health agencies), while carriers handle part B claims (those submitted by providers, such as physicians, laboratories, equipment suppliers, outpatient clinics, and other practitioners). The use of incorrect billing codes is a problem faced by both public and private health insurers. Medicare pays part B providers a fee for each covered medical service identified by the American Medical Association’s uniformly accepted coding system, called the physicians’ Current Procedural Terminology (CPT). The coding system is complicated, voluminous, and undergoes annual changes; as a result, physicians and other providers often have difficulty identifying the codes that most accurately describe the services provided. Not only can such complexities lead providers to inadvertently submit improperly coded claims; in some cases they also make it easier to deliberately abuse the billing system, resulting in inappropriate payments. The examples in table 1 illustrate several coding categories commonly used in inappropriate ways. Commercial claims-auditing systems for detecting inappropriate billing have been available for a number of years; as early as 1991, commercial firms marketed specialized auditing systems that identify inappropriately coded claims. The potential value of such a system to Medicare has been noted both by the HHS Inspector General (in 1991) and by us (in 1995). In fact, both the Inspector General and we noted that such a tool could save the Medicare program hundreds of millions of dollars annually.
Recognizing its need to address the inappropriate billing problem, HCFA directed its carriers to begin developing claims auditing edits in February 1991. In August 1994, it awarded a contract to further develop these claims auditing edits, called CCI, which it now owns and operates. According to HCFA, the CCI edits helped Medicare save about $217 million in 1996 by successfully identifying inappropriate claims. Nevertheless, inappropriate coding and resulting payments continue to plague Medicare. Last summer HHS’ Office of Inspector General reported that about $23 billion of Medicare’s fee for service payments in fiscal year 1996 were improper, and that about $1 billion of this amount was attributable to incorrect coding by physicians. On September 30, 1996, HCFA initiated action to improve its capability to detect inappropriate claims and payment. It awarded a contract to HBO & Company (HBOC), a vendor marketing a claims-auditing system, to test the vendor’s system in Iowa and evaluate whether it could be effectively used throughout the Medicare program. Our objective was to determine if HCFA was using an adequate methodology for testing the commercial claims auditing system in Iowa for potential implementation with its Medicare claims processing systems. To do this, we analyzed documents related to HCFA’s test, including the test contract, test plans and methodologies, test results and status reports, and task orders. This analysis included assessing the limitations of the test contract, size of the test claims processing sample, representation of users involved with the test, and information provided to management in its oversight role. We also met with HCFA staff responsible for conducting the test to obtain further insight into HCFA’s test methodology. While we reviewed the reports of HCFA’s estimated savings, we did not independently validate the reported savings by validating the sample of paid claims used as the basis for projecting them. 
However, the magnitude of HCFA’s estimated savings is in line with our earlier estimate of potential annual savings from such systems. We observed operations at the test site in Des Moines, Iowa, and assessed the carrier officials’ role in the test. We visited HBOC offices in Malvern, Pennsylvania, and the Plano, Texas, headquarters of Electronic Data Systems (EDS), the part B system maintainer, into whose system the claims-auditing system was integrated. During these visits, we documented these companies’ roles and responsibilities in testing the system. Also, in August 1997 at a 3-day conference at HCFA headquarters, we observed the test team’s effectiveness and objectivity in discussing the progress made to date and in developing solutions to issues still needing resolution. We compared the adequacy of HCFA’s test methodology with the methodologies used by other public health care insurers to test and integrate a commercial claims-auditing system. We visited offices of these insurers and analyzed documents describing their test and integration approach. Finally, we compared the approach used by these insurers with HCFA’s. The insurers whose methodologies we analyzed consisted of the Department of Defense’s TRICARE support office (formerly called the Civilian Health and Medical Program of the Uniformed Services (CHAMPUS)) in Aurora, Colorado; the Civilian Health and Medical Program of the Department of Veterans Affairs (CHAMPVA) in Denver, Colorado; and the Kansas and Mississippi state Medicaid agencies in Topeka, Kansas, and Jackson, Mississippi, respectively. To evaluate HCFA’s decisions regarding national implementation of a commercial claims-auditing system, we reviewed the contract and other documents related to the test and evaluated their impact on HCFA’s ability to implement a claims-auditing system nationally. We also discussed HCFA’s rationale for these decisions with senior HCFA officials.
Finally, to assess HCFA’s experience in acquiring and using the HCFA-owned CCI claims auditing edits, we reviewed the CCI contract (and related documents). We discussed this project and its results with cognizant HCFA officials. We performed our work from July 1997 through March 1998, in accordance with generally accepted government auditing standards. HCFA provided written comments on a draft of this report. These comments are presented and evaluated in the “Agency Comments and Our Evaluation” section of this report, and are included in appendix I. HCFA used a test methodology that was comparable with processes followed by other public insurers who have successfully tested and implemented such commercial systems. HCFA’s test showed that commercial claims auditing edits could achieve significant savings. Other public insurers—CHAMPVA, TRICARE, and the Kansas and Mississippi Medicaid offices—each used four key steps to test their claims-auditing systems prior to implementation. Specifically, they (1) performed a detailed comparison of their payment policies with the system’s edits to determine where conflicts existed, (2) modified the commercial system’s edits to comply with their payment policies, (3) integrated the system into their claims payment systems, and (4) conducted operational tests to ensure that the integrated systems properly processed claims. These insurers’ activities were comprehensive and required significant time to complete. CHAMPVA took about 18 months to integrate the commercial system at one claims processing site. TRICARE took about 18 months to integrate the system at two sites. It allowed about 2 years to implement the modified system at its nine remaining sites. HCFA’s methodological approach was similar. 
From the contract award on September 30, 1996, through its conclusion on December 29, 1997, HCFA and contractor staff made significant progress in integrating the test commercial system at the Iowa site and evaluating its potential for Medicare use nationwide. HCFA used two teams to concentrate separately on the policy evaluation and technical aspects of the test. The policy evaluation team consisted of HCFA headquarters individuals and Kansas City (Missouri) and Dallas regional office staff knowledgeable of HCFA policies and the CPT billing codes, as well as individuals representing the Iowa carrier and HBOC. This team conducted a detailed comparison of the commercial system’s payment policy manuals with Medicare policy manuals to identify conflicting edits. The reviews identified inconsistencies that both increased and decreased the amount of Medicare payments. For example, the commercial system pays for the higher cost procedure of those deemed mutually exclusive, while Medicare policy dictates paying for the lower cost procedure. Conversely, the commercial claims-auditing system denies certain payments for assistant surgeons, whereas Medicare policy allows these payments. These and all other conflicts identified were provided to the vendor, who modified the system’s edits to be consistent with HCFA policy. The technical team consisted of staff from HCFA’s headquarters and its Kansas City (Missouri) and Dallas regional offices; HBOC; EDS; and the Iowa carrier. This team prepared and carried out three critical tasks. First, it developed the design specifications and related computer code necessary for integrating the commercial system into the Medicare claims-processing software. Second, it integrated the claims auditing system into the Medicare part B claims-processing system. Finally, the team conducted numerous tests of the integrated system to determine its effect on processing times and its ability to properly process claims. 
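The mutually exclusive edit conflict described above can be illustrated with a minimal sketch. This is not HCFA’s or the vendor’s actual implementation; the procedure codes, fees, and edit pair below are hypothetical, and the only point is the policy difference between paying the higher-cost and the lower-cost procedure of a conflicting pair.

```python
# Illustrative sketch of a mutually exclusive procedure edit.
# The codes, fees, and edit pair are hypothetical examples, not real CPT data.
MUTUALLY_EXCLUSIVE = {frozenset({"11111", "22222"})}  # hypothetical edit pair

def apply_mutually_exclusive_edit(claim_lines, fees, pay_higher):
    """Return the procedure codes left payable after the edit.

    claim_lines: procedure codes billed together on one claim.
    fees: mapping of procedure code to allowed fee.
    pay_higher: True mimics the commercial system's default (pay the
    higher-cost procedure); False mimics Medicare policy (pay the
    lower-cost procedure).
    """
    payable = set(claim_lines)
    for pair in MUTUALLY_EXCLUSIVE:
        if pair <= payable:  # both conflicting procedures were billed
            keep = max(pair, key=fees.get) if pay_higher else min(pair, key=fees.get)
            payable -= pair - {keep}
    return sorted(payable)
```

Under the Medicare-style rule (pay_higher=False), a claim billing both hypothetical codes is paid only for the cheaper one, while the commercial default keeps the more expensive one; identifying and reversing exactly this kind of default was the policy evaluation team’s task.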
HCFA management was kept apprised of the status of the test through biweekly progress reports and frequent contact with the project management team. HCFA reported that the edits in this commercial system could save Medicare up to $465 million annually by identifying inappropriate claims. Specifically, the analysis showed that the system’s mutually exclusive and incidental procedure edits could save about $205 million, and the diagnosis-to-procedure edits would save about $260 million. HCFA’s analysis was based on a national sample of paid claims that had already been processed by the Medicare part B systems and audited for inappropriate coding with the HCFA-owned CCI edits. While we reviewed the reports of HCFA’s estimated savings, we did not independently verify the national sample from which these savings were derived. However, the magnitude of savings when added to the savings from CCI, which HCFA reported to be about $217 million in 1996, is in line with our earlier estimate that about $600 million in annual savings are possible. Test officials also concluded that the claims-processing portion of the test system’s software provides little, if any, added value since the existing part B claims processing system already handles this function. Further, the test showed that integrating the commercial system’s claims-processing function with the existing claims processing system could significantly increase processing time and delay payment. On November 25, 1997, HCFA officials notified the administrator about the success of the commercial system test. They reported that the test showed that the system’s claims auditing edits could save Medicare up to $465 million annually, which is in addition to the savings provided by the CCI edits. Despite the success of the test, two key management decisions, if left unchanged, could have significantly delayed national implementation. 
One decision was to limit the test contract to the test, and not include a provision for nationwide implementation, thus delaying implementation of commercial claims auditing edits into the Medicare program. The second—HCFA’s initial plan following the test to award a contract to develop its own edits rather than acquiring commercial edits such as those used in the test—would potentially not only have required additional time before implementation, but could well have resulted in a system that is not as comprehensive as commercially available edits. In March 1998, the Administrator of HCFA told us that HCFA’s plans have changed. She said HCFA (1) is evaluating legal options for expediting the contracting process, and (2) now plans to begin immediately to acquire commercial claims auditing edits. HCFA limited the use of the test system to its Iowa testing site—just one of its 23 Medicare part B claims-processing sites—and did not include a provision for implementation throughout the Medicare program. As a result, additional time will be needed to award another contract to implement either the test system’s claims auditing edits or any other approach throughout the Medicare program. A contracting official estimated that it could take as much as a year to award another contract using “full and open” competition—the contracting method normally used for such implementation. This would involve preparing for and issuing a request for proposals, evaluating the resulting bids, and awarding the contract. HCFA’s estimated savings of up to $465 million per year demonstrate the costs associated with delays in implementing such payment controls nationwide. Awarding a new contract could result in additional expense either to develop new edits or to perform substantial rework to adapt the new system’s edits to HCFA’s payment policy if a contractor other than the one performing the original test wins the competition.
If another contractor became involved, this would mean that much of the work HCFA performed during the 15-month test would have to be redone. Specifically, this would involve evaluating the new claims auditing edits for conflict with agency payment policy. Instead of limiting the test contract to the test site, HCFA could have followed the approach used by TRICARE, which awarded a contract that provided for a phased, 3-year implementation at its 11 processing sites following successful testing. In March 1998, HCFA’s administrator told us that HCFA is doing what it can to avoid any delay resulting from this limited test contract. She said HCFA is evaluating legal options to determine if other contracting avenues are available, which would allow HCFA to expedite national implementation of commercial claims auditing edits. In reporting the test results, HCFA representatives recommended that the HCFA administrator award a contract to develop HCFA-owned claims-auditing edits, which would supplement CCI, rather than to acquire these edits commercially. They provided the following key reasons for this position. First, they said this approach could cost substantially less than commercial edits because (1) HCFA would not always be required to use the same contractor to keep the edits updated, (2) it would not be required to pay annual licensing fees, and (3) the developmental cost would be much less than using commercial edits. Second, they said this approach would result in HCFA-owned claims-auditing edits, which are in the public domain, allowing HCFA to continue to disclose all policies and coding combinations to providers—as is currently done with the CCI edits. They also explained that if a vendor of a commercial claims auditing system chooses to bid, wins this contract, and agrees to allow its claims auditing edits to be in the public domain as they are with CCI, HCFA will allow the vendor to start with its existing edits, which should shorten the development time. 
We do not agree that this approach is the most cost-effective. First, upgrading the edits by moving from the contractor who develops the original edits to one unfamiliar with them would not be easy and could be costly because this is a major task, which is facilitated by a thorough clinical knowledge of the existing edits. For example, the Iowa test system contains millions of edits, which would have to be compared against annual changes in the CPT codes. Second, the annual licensing fees that HCFA would avoid with HCFA-owned edits would be offset somewhat by the need to pay a contractor with the clinical expertise offered by commercial vendors to keep the edits current. Third, while the commercial edits could cost more than HCFA-owned ones, this increased cost has been justified by HCFA’s test results, which demonstrated that commercial edits provide significantly more Medicare savings than HCFA-developed edits. Regarding HCFA’s initial plan to fully disclose the HCFA-owned edits as they are with CCI, this policy is not mandated by federal law or explicit Medicare policies, nor is it followed by other public insurers, and it could result in potential contractors declining to bid. In a May 1995 memorandum from HHS to HCFA, the HHS Office of General Counsel concluded that federal law and regulations do not preclude HCFA from protecting the proprietary edits and related computer logic used in commercial claims auditing systems. Further, according to HCFA’s deputy director, Provider Purchasing and Administration Group, HCFA has no explicit Medicare policies that require it to disclose the specific edits used to audit providers’ claims. Likewise, other public health care insurers, including CHAMPVA, TRICARE, and the two state Medicaid agencies we visited, do not have such a policy, and are indeed using commercial claims-auditing systems without disclosing the details of the edits. 
Rather than disclose the edits, these insurers notified providers that they were implementing the system and provided examples of the categories of edits that would be used to check for such disparities as mutually exclusive claims. This approach protects the proprietary nature of the commercial claims auditing edits. Finally, the development time would likely be shortened if a commercial claims auditing vendor is awarded this contract and uses its existing edits as a starting point. However, if the request for proposals requires that these edits be in the public domain, it is doubtful that such vendors would bid on this contract using their already developed edits. An executive of a vendor that has already developed a claims auditing system told us that his company would not enter into such a contractual agreement if HCFA insists on making the edits public, because this would result in the loss of the proprietary rights to his company’s claims auditing edits. Although HCFA’s then director of the Center for Health Plans and Providers recommended that HCFA develop its own edits, he also acknowledged that this approach could result in a less effective system than use of a commercial one. In a November 25, 1997, memorandum to the administrator assessing the results of the commercial test, the director stated that there were several “cons” to developing HCFA-owned edits. He concluded that “the magnitude of edits approved for national implementation could potentially be less, depending on the number of edits developed and reviewed for acceptance prior to the implementation date.” He also stated that “there could be a perception that HCFA is unwilling to take full advantage of the technology and clinical expertise offered by vendors.” Furthermore, HCFA’s initial plan to develop its own claims-auditing edits was inconsistent with Office of Management and Budget (OMB) policy on acquiring information resources.
OMB Circular A-130, 8b(5)(b) states that in acquiring information resources, agencies shall “acquire off-the-shelf software from commercial sources, unless the cost-effectiveness of developing custom software to meet mission needs is clear and has been documented.” HCFA has not demonstrated that its plan to develop HCFA-owned claims auditing edits is cost-effective. A key factor showing otherwise is HCFA’s estimate that for every year it delays implementing claims auditing edits of the caliber of those used in the commercial test system in Iowa, about $465 million in savings could be lost. Developing comprehensive HCFA-owned claims auditing edits could take years, during which time hundreds of millions of dollars could be lost annually due to incorrectly coded claims. To illustrate: HCFA began developing its CCI database of edits in 1991 and has continued to improve it over the past 6 years. While HCFA reported that CCI identified about $217 million in savings (in the mutually exclusive and incidental procedure categories) in 1996, CCI did not identify an additional $205 million in those categories identified by the test edits, nor does it address the diagnosis-to-procedure category, where the test edits identified an additional $260 million in possible savings. Furthermore, HCFA has no assurance that the HCFA-owned edits would be as effective as available commercial edits. In March 1998, after considering our findings and other factors, the HCFA Administrator told us that she now plans to take an approach consistent with the test results. She said she plans to acquire and implement commercial claims auditing edits. HCFA followed an approach in testing and evaluating the commercial claims auditing system that was consistent with the approach used by other public health care insurers. This test showed that using this system’s edits in the Medicare program can save up to $465 million annually.
However, the Medicare program is losing millions each month that HCFA delays implementing such comprehensive claims auditing edits. Two critical HCFA decisions could have unnecessarily delayed implementation for several years and prevented HCFA from taking full advantage of the substantial savings offered by this technology. These decisions—to limit the test contract to the test and not include a provision for national implementation, and to develop HCFA’s own edits rather than acquiring commercial ones—would have resulted in costly delays and could have resulted in an inferior system. However, we believe these decisions were appropriately changed by the administrator in March 1998. The administrator’s current plans for expediting national implementation and acquiring commercial claims auditing edits should, if successfully implemented, help HCFA take full advantage of the potential savings demonstrated by the commercial test. To implement HCFA’s current plans to expeditiously realize dollar savings in the Medicare program through the use of claims auditing edits, we recommend that the Administrator, Health Care Financing Administration, (1) proceed immediately to purchase or lease existing comprehensive commercial claims auditing edits and begin a phased national implementation, and (2) require, in any competition, that vendors have comprehensive claims auditing edits that, at a minimum, address the mutually exclusive, incidental procedure, and diagnosis-to-procedure categories of inappropriate billing codes. HCFA agreed with our recommendations in this report and stated that it is proceeding immediately with a two-phased approach for procuring and implementing commercially developed edits for the Medicare program. During the first phase, HCFA plans to immediately implement procedure-to-procedure edits, such as those described in the mutually exclusive and incidental procedure categories in table 1.
According to HCFA, the second phase will be used to complete its determination of the consistency of diagnosis-to-procedure edits with Medicare coverage policy—which was begun during the test—and then implement the edits as quickly as possible. HCFA added that, as part of this process, it will also consider modifying national coverage policy, where appropriate, to meet program goals. It cautioned that the amount of the projected savings from the commercial test may decrease once its full analysis is complete. We are encouraged that HCFA concurs with our recommendations and is proceeding immediately to take advantage of this commercial claims auditing tool, which can save Medicare hundreds of millions of dollars annually. HCFA’s comments and our detailed evaluation of them are in appendix I. As agreed with your offices, unless you publicly announce its contents earlier, we will not distribute this report until 30 days from the date of this letter. At that time, we will send copies to the Secretary of Health and Human Services; the Administrator, Health Care Financing Administration; the Director, Office of Management and Budget; the Ranking Minority Members of the House Committee on Commerce and the Senate Special Committee on Aging; and other interested congressional committees. We will also make copies available to others upon request. If you have any questions, please call me at (202) 512-6253, or Mark Heatwole, Assistant Director, at (202) 512-6203. We can also be reached by e-mail at willemssenj.aimd@gao.gov and heatwolem.aimd@gao.gov, respectively. Major contributors to this report are listed in appendix II. The following are GAO’s comments on the Health Care Financing Administration’s letter responding to a draft of this report. 1. We are encouraged that HCFA concurs with our recommendations and is proceeding immediately to take advantage of this commercial claims auditing tool.
If effectively implemented, according to test results, commercial claims auditing edits should save Medicare hundreds of millions of dollars annually. Further, we are pleased that, in addition to determining that the commercial edits are consistent with HCFA policy, HCFA also plans to evaluate its national coverage policy to determine if it also needs modification. This dual assessment should improve the overall effectiveness of the final implemented edits. Finally, although the amount of HCFA’s projected savings may decrease once its full analysis is complete, its projected annual savings of $465 million is so large that, most likely, even a reduced figure will still be significant. 2. As stated, the HHS Office of the Inspector General identified its findings through a manual review. The Inspector General’s report findings included examples of improper billing for incidental procedures. Thus, commercial systems could have detected some of the errors identified in the Inspector General’s report. While HCFA is correct in asserting that other identified problems would not typically be identified by the type of commercial claims editing system discussed in this report, other types of automated analytical claims analyses systems are available to examine profiles of provider submitted claims for targeting investigations of potential fraud. See our reports titled Medicare: Antifraud Technology Offers Significant Opportunity to Reduce Health Care Fraud (GAO/AIMD-95-77, Aug. 11, 1995) and Medicare Claims: Commercial Technology Could Save Billions Lost to Billing Abuse (GAO/AIMD-95-135, May 5, 1995). 3. We considered HCFA’s suggested wording changes and have incorporated them as appropriate. John B. Mollet, Senior Evaluator; John G. Snavely, Staff Evaluator
Pursuant to a congressional request, GAO reviewed whether the Health Care Financing Administration (HCFA) used an adequate methodology for testing the commercial claims auditing system for potential nationwide implementation with its Medicare claims processing system. GAO noted that: (1) the test methodology HCFA used in Iowa was consistent with the approach used by other public health care insurers who have already implemented a commercial claims auditing system; (2) HCFA's test covered 15 months and included extensive work, such as modifying the system's software to comply with Medicare payment policies; (3) the test showed that the commercial claims auditing system could save Medicare up to $465 million annually with claims auditing edits that detect inappropriately coded claims; (4) these savings are in addition to any results from the correct coding initiative which, according to HCFA, saved Medicare about $217 million in 1996; (5) while HCFA used an adequate methodology to test the system and demonstrated that commercial claims auditing edits could result in significant savings, two critical management decisions would have unnecessarily delayed implementation for several years, resulting in potentially hundreds of millions of dollars in lost savings annually; (6) first, HCFA limited its 1996 test contract to the test, and did not include a provision for implementing the commercial system throughout the Medicare program; (7) thus, to acquire a commercial system for nationwide implementation, up to an additional year may be required to complete all activities necessary to plan for and award another contract; (8) this could also result in substantial rework to adapt the system if a different contractor were to win the new contract; (9) HCFA's administrator told GAO that HCFA is evaluating legal options for expediting the contracting process; (10) second, in addition to the potential delay from the test contract limitation, following the test HCFA initially 
planned to develop its own claims auditing edits rather than acquire commercial edits, such as those used in the test; (11) under this plan, HCFA would have obtained a development contractor that may, or may not, have existing claims auditing edits; (12) if the winning contractor did not have existing edits on which to build, it could take years to complete the HCFA-owned edits; (13) near the conclusion of GAO's review HCFA representatives told GAO this approach would have allowed them to make the edits available to the public and avoid being obligated to one vendor's commercial edits and related fees; and (14) public health care insurers for the Department of Defense and the Department of Veterans Affairs and several state Medicaid agencies did not take this approach, opting to lease commercial systems instead of owning the claims auditing edits.
In implementing ERRP, CCIIO is responsible for, among other things, determining which plan sponsors are eligible to participate in the program and providing reimbursements to the participating sponsors. Eligibility for participation is determined by a plan sponsor meeting a number of requirements, including being able to document claims, implement programs and procedures that have the potential to generate cost savings for plan participants with chronic and high-cost conditions, and having policies and procedures in place to detect and reduce fraud, waste, and abuse. When requesting reimbursement, sponsors must provide documentation of the cost of medical claims, which can include costs paid by early retirees in the form of deductibles, copayments, or coinsurance. For eligible claims paid by a plan on behalf of each early retiree, CCIIO will reimburse 80 percent of the amount that exceeded $15,000 (the cost threshold) but was not greater than $90,000 (the cost limit) in a given year. ERRP reimbursement requests are paid in the order in which they are received, and CCIIO may stop taking ERRP applications or, if an application is approved, deny all or part of a reimbursement request, based on the availability of funding. Plan sponsors are not required to use ERRP reimbursements by the end of the plan year in which they are provided, but are expected to use reimbursements as soon as possible and no later than December 31, 2014. Under PPACA, plan sponsors can use the reimbursements to reduce their own premium contributions or other health benefit costs; reduce plan participants’ premium contributions, copayments, deductibles, coinsurance, or other out-of-pocket costs; or reduce any combination of these costs. However, sponsors are not permitted to use the funds received as general revenue, and thus must maintain the same level of contribution toward the plan as they did prior to applying to enroll in ERRP. 
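The ERRP reimbursement rule above reduces to a simple per-retiree calculation: 80 percent of the portion of annual eligible claims that falls between the $15,000 cost threshold and the $90,000 cost limit. The following is an illustrative sketch only (the function name and defaults are ours, not CCIIO's):

```python
def errp_reimbursement(annual_claims, threshold=15_000.0, limit=90_000.0, rate=0.80):
    """Illustrative ERRP calculation for one early retiree in one year:
    reimburse 80% of the eligible claims amount above the $15,000 cost
    threshold, counting costs only up to the $90,000 cost limit."""
    # Claims below the threshold earn nothing; claims above the limit are capped.
    eligible = max(0.0, min(annual_claims, limit) - threshold)
    return rate * eligible
```

For example, a retiree with $40,000 in eligible claims would generate a reimbursement of 0.80 x ($40,000 - $15,000) = $20,000, and claims beyond $90,000 add nothing further.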
CCIIO may conduct audits of plan sponsors to verify their compliance with this maintenance-of-contribution requirement and other program requirements. To be eligible for PCIP, individuals must have a preexisting condition and must have been without creditable coverage for at least 6 months prior to application. This requirement effectively prevents enrollment by those who were already insured, limiting the program to individuals who likely have been unable to access insurance because of their preexisting condition. PCIP programs must not impose waiting periods for coverage based on the enrollee's preexisting condition, and plan benefits must cover at least 65 percent of the total cost of coverage until enrollees reach a statutory out-of-pocket spending limit, at which point PCIP covers 100 percent of the cost. PPACA requires that HHS develop procedures to transition PCIP enrollees from the program to the Exchanges when they begin in 2014 and that such procedures ensure these enrollees do not experience a lapse in coverage. At the same time, if HHS estimates that, for any fiscal year, PCIP funding will be insufficient to cover the payment of claims, PPACA authorizes it to make such adjustments as are necessary to eliminate this deficit and to stop PCIP enrollment. Initial PCIP allocations ranged from $8 million for North Dakota, Vermont, and Wyoming to $761 million for California. CCIIO reserves the right to reallocate unused funds to states in future years from initial allocations and to make adjustments as necessary to eliminate any potential deficit due to projected expenses exceeding a state's allocation. States had the option to operate the PCIP in their state: 27 states elected to operate a PCIP for their residents, while 23 states and the District of Columbia opted to allow HHS to operate their PCIPs. 
For the 27 states that chose to operate their own PCIPs, HHS contracted directly with the states or their designated nonprofit entities. The contracts established that HHS would reimburse states or their designated entities for claims and administrative costs incurred in excess of the premiums they collected. To implement the federally run PCIP for the 23 states and the District of Columbia that opted not to operate their own PCIPs, HHS coordinated with other federal agencies and selected the Government Employees Health Association, Inc. (GEHA) to help operate the program. GEHA was awarded a cost-plus-award-fee contract, which established that HHS would reimburse GEHA for claims and administrative costs in addition to granting fixed and performance-based award fees. For both the federally and state-run PCIPs, HHS established a limit on administrative costs of no more than 10 percent of total spending over the lifetime of the program. CCIIO stopped accepting applications for ERRP enrollment in May 2011, anticipating that the $5 billion appropriation would be exhausted. As we previously reported, at that time the total number of approved plan sponsors was more than 6,000—most of which enrolled within the first 6 months of the program—and CCIIO had already spent $2.4 billion reimbursing plan sponsors for claims incurred. Officials told us that in September 2012, CCIIO suspended making reimbursements to plan sponsors, as reimbursements had reached the $4.7 billion cap established for paying claims under the original appropriation. In anticipation of exceeding the cap, CCIIO had issued guidance nearly a year earlier, on December 13, 2011, stating that it would not accept reimbursement requests for ERRP claims incurred after December 31, 2011. 
However, the program continued to accept requests for claims incurred on or before this date, and officials explained that a number of factors led to it taking until September 2012 for all $4.7 billion to be spent, including that reimbursements must go through a clearance process to make sure funds are paid appropriately. When the $4.7 billion was reached, significant demand for the program remained, with 5,699 ERRP reimbursement requests left outstanding that accounted for about $2.5 billion in unreimbursed claims. CCIIO is recovering portions of the $4.7 billion from plan sponsors that were overpaid and using those funds to pay outstanding reimbursement requests in the order in which they were received. Overpayments are identified through the claims adjudication process and can occur when, for example, a plan receives a rebate from a provider that lowers the total cost of a claim after the claim was initially submitted to ERRP. In addition, because early reimbursement requests were based on summary claims data, CCIIO required plan sponsors to submit a more detailed accounting of the actual costs of these requests by April 27, 2012. Officials told us that if an overpayment was identified for a reimbursement request, or if sponsors failed to meet the April deadline, costs associated with that request were to be recovered by CCIIO. As of January 2013, CCIIO had identified $60.2 million in overpayments and recovered $54 million of this amount. CCIIO was pursuing collection of the remaining $6.2 million and estimated that as much as an additional $15 million in overpayments might be identified and collected in fiscal year 2013. In addition to recovering overpayments, officials told us that any money recovered from program audits would also be used to pay outstanding reimbursement requests. CCIIO hired a contractor with a goal of auditing 30 ERRP plan sponsors that officials said account for 30 percent of program reimbursements. 
As of January 2013, CCIIO had initiated 17 audits, but had not yet received any audit reports from the contractor. Consequently, officials told us that they were not yet able to estimate how much in ERRP funds would be recovered through this process. PCIP enrollment has grown substantially. By the end of December 2012, cumulative enrollment had reached 103,160, up more than 50,000 from a year earlier, when enrollment was 48,862. (See fig. 1.) Enrollment in state-run PCIPs represents a larger percentage of total enrollment than the federally run PCIP; however, the federally run PCIP has accounted for an increasing percentage of the total over time. When we last reported on the PCIP program, as of April 2011, enrollment in the federally run PCIP represented about 26 percent of total enrollment; by the end of December 2012, it represented about 43 percent. CCIIO officials told us that their decision to accept, starting in July 2011, a letter from a health care provider as proof of a preexisting condition in the federally run PCIP likely contributed to this shift—although CCIIO later reversed this decision in May 2012. Similar to prior months, PCIP enrollment continued to vary widely across states, ranging from 1 in Vermont to 15,101 in California. By the end of January 2013, cumulative PCIP spending reached about $2.6 billion, representing over half of the $5 billion appropriated for the program. This represents a substantial increase from a year earlier, when about $782 million had been spent, representing about 16 percent of the total appropriation. (See fig. 2.) Similar to the trend in enrollment, the federally run PCIP has accounted for an increasing percentage of total program spending. The percentage of PCIP spending used for administrative costs has declined over time, and by the end of December 2012 it had fallen to about 7 percent. 
PCIP spending has varied on a monthly basis, but overall, monthly spending also has increased over the life of the program. Most recently, monthly spending reached its highest point since the program's inception, increasing about 35 percent from the end of December 2012 to the end of January 2013. (See fig. 3.) PCIP spending also varied across states, with some states spending more than they were originally allocated and some states spending less. For example, CCIIO officials said that as of January 2013, three states—New Hampshire, South Dakota, and Utah—have spent more money than they were originally allocated because of higher than expected enrollment or per member costs. Additionally, CCIIO has obligated to five states—Alaska, Colorado, Montana, Oregon, and New Mexico—more funding in 2012 than was in their original allocation, but these states have yet to spend the additional funds. Officials also said that other states have had lower than expected expenditures—for example, North Carolina was originally allocated $145 million but, through December 2012, had spent only about $26 million, due primarily to lower than expected enrollment. Thus, officials said that CCIIO reallocated money originally intended for these states to the states that were exceeding their expenditure projections. According to CMS, PCIP spending is likely to approach the $5 billion appropriation in 2013. In June 2012, CMS's OACT released a projection of PCIP spending, and reported the entire $5 billion in funding would be used "through 2013." When asked for more specifics, OACT officials told us that this projection was not intended to produce a point-in-time estimate for when the program would run out of money, but rather represents their expectation that the entire $5 billion appropriation would be needed to pay for care provided through 2013. 
Officials also said that their projection was informed by historical enrollment, cost, and utilization data, as well as discussions with CCIIO staff about program experience. Officials noted that the historical data they used were through February 2012—the most recent program data available at that time. CCIIO officials similarly told us that they anticipate total PCIP spending will be close to $5 billion, and that they are taking program management steps—many of which are not yet reflected in the spending data—intended to ensure that the appropriated funding lasts until the end of 2013. For example, on behalf of the federally run PCIP, GEHA contracted in August 2012 with United Healthcare to access lower provider reimbursement rates than those the federally run program had previously obtained. While the extent of this rate reduction varies by state, officials said that there has been about a 20 percent reduction on average. To further reduce rates, GEHA also worked with United Healthcare to approach approximately the top 100 hospitals in terms of PCIP utilization to attempt to renegotiate federally run PCIP hospital facility fees down to the Medicare rate. According to officials, about one quarter of the hospitals approached agreed to the renegotiation. Officials told us that some states have similarly approached hospitals to lower reimbursement rates, negotiated other discounts with providers, or implemented other cost control strategies, such as disease management programs. More recently, CCIIO instituted benefit changes for the federally run PCIP that shifted more costs onto enrollees starting in January 2013. For example, it increased enrollees' out-of-pocket maximum for in-network services from $4,000 to $6,250 and for out-of-network services from $7,000 to $10,000. It also increased enrollee coinsurance from 20 percent to 30 percent. 
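The effect of these benefit changes on enrollees can be illustrated with a minimal cost-sharing sketch. The helper function is hypothetical and ignores deductibles and copayments; only the coinsurance rates and out-of-pocket maximums come from the figures cited above.

```python
def enrollee_in_network_cost(allowed_charges: float,
                             coinsurance: float = 0.30,
                             oop_max: float = 6_250) -> float:
    """Enrollee share of in-network allowed charges, capped at the
    out-of-pocket maximum. Defaults reflect the 2013 federally run PCIP
    design described above; deductibles and copayments are omitted."""
    return min(coinsurance * allowed_charges, oop_max)

# $30,000 in allowed charges under the 2013 design vs. the prior design:
print(enrollee_in_network_cost(30_000))                                   # 6250
print(enrollee_in_network_cost(30_000, coinsurance=0.20, oop_max=4_000))  # 4000
```

For high-cost enrollees the out-of-pocket maximum binds in both designs, so the $2,250 increase in that maximum, rather than the coinsurance change, drives most of the added cost.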
As another step, CCIIO officials said that whereas they previously had annual contracts with states under the PCIP program, in 2013 they moved to quarterly contracts that will allow them to allocate funding based on each state's near-term expenditures and thus prevent over-obligation of funds. Finally, due to growing concerns about the rate of PCIP spending, in February 2013, CCIIO suspended PCIP enrollment to ensure the appropriated funding would be sufficient to cover claims for current enrollees through the end of the program. In addition, CCIIO requested that state-run PCIPs institute for all of their enrollees, by April 1, 2013, or the earliest possible date thereafter, the same benefit changes that were instituted in the federally run program in January 2013. Officials told us that if spending trends begin to indicate that funding will not be used as quickly as they are projecting, they could reinstate PCIP enrollment to use remaining funds. We provided a draft of this report to HHS for comment. HHS provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or dickenj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix I. In addition to the contact named above, Randy DiRosa, Assistant Director; George Bogart; Laura Brogan; Richard Krashevski; Yesook Merrill; Laurie Pachter; and Rachel Svoboda made key contributions to this report. 
Pre-Existing Condition Insurance Plan: Comparison of Implementation and Early Enrollment with the Children’s Health Insurance Program. GAO-12-62R. Washington, D.C.: November 10, 2011. Private Health Insurance: Implementation of the Early Retiree Reinsurance Program. GAO-11-875R. Washington, D.C.: September 30, 2011. Pre-Existing Condition Insurance Plans: Program Features, Early Enrollment and Spending Trends, and Federal Oversight Activities. GAO-11-662. Washington, D.C.: July 27, 2011.
In March 2010, the Patient Protection and Affordable Care Act (PPACA) appropriated $5 billion each to establish and carry out two temporary programs--ERRP and PCIP. ERRP reimburses sponsors of employment-based health plans to help cover the cost of providing health benefits to early retirees--individuals age 55 and older not eligible for Medicare. The PCIP program is a high-risk pool that provides access to health insurance for individuals unable to acquire affordable coverage due to a preexisting condition. Both programs are operated by CCIIO within CMS (an agency within the Department of Health and Human Services) and are intended to operate through 2013, after which PPACA will provide new insurance coverage options. GAO was asked to provide updated information on ERRP and PCIP spending. This report describes the current status of ERRP and PCIP enrollment and spending as well as projected PCIP spending and how CCIIO is ensuring that program funding is sufficient through 2013. GAO obtained the most recent data available on ERRP and PCIP enrollment and spending and on overpayments recovered from ERRP plan sponsors during the claims adjudication process. GAO also obtained other supporting documentation where available. GAO interviewed CMS officials about ERRP and PCIP enrollment and spending as well as their predictions of future PCIP spending and steps they are taking to ensure the sufficiency of PCIP funding. The Department of Health and Human Services provided technical comments on a draft of this report, which GAO incorporated as appropriate. The Center for Consumer Information and Insurance Oversight (CCIIO) discontinued enrollment in the Early Retiree Reinsurance Program (ERRP) in early 2011 and stopped most program reimbursements the following year to keep spending within the $5 billion ERRP appropriation. Specifically, anticipating exhaustion of funds, CCIIO stopped ERRP enrollment in May 2011. 
According to CCIIO officials, CCIIO suspended making reimbursements to plan sponsors in September 2012, as reimbursements had reached the $4.7 billion cap established for paying claims under the original appropriation, and the remainder was reserved for administrative expenses. When the cap was reached, significant demand for the program remained, with 5,699 ERRP reimbursement requests left outstanding that accounted for about $2.5 billion in unpaid claims. CCIIO officials told GAO that they planned to pay some of the outstanding reimbursement requests by redistributing any overpayments recovered from plan sponsors--when, for example, a plan receives a rebate that lowers the total cost of a prior claim--as well as money recovered from program audits. As of January 2013, officials told GAO that CCIIO had recovered a total of $54 million and redistributed $20.7 million of this amount. Enrollment and spending for the Pre-existing Condition Insurance Plan (PCIP) program have grown substantially. Cumulative PCIP enrollment had reached 103,160 by the end of December 2012, more than doubling from a year earlier. By the end of January 2013, total PCIP spending reached about $2.6 billion, representing over half of the $5 billion PCIP appropriation, compared to a year earlier when only about 16 percent of the total appropriation had been spent. PCIP spending has varied on a monthly basis, but overall, monthly spending also has increased over the life of the program. Most recently, monthly spending reached its highest point since the program's inception, increasing about 35 percent from December 2012 to January 2013. According to CMS, PCIP spending is likely to approach the $5 billion appropriation by the end of 2013, and CCIIO is taking steps intended to ensure it does not exceed this amount. 
In June 2012, the Office of the Actuary (OACT) within the Centers for Medicare & Medicaid Services (CMS) released a projection that the entire $5 billion in PCIP funding would be used "through 2013." Similarly, CCIIO officials told GAO they anticipate total PCIP spending to closely approach $5 billion, and that they are taking program management steps--many of which are not yet reflected in spending data--to ensure appropriated funding lasts through 2013. For example, in the second half of 2012, CCIIO was able to obtain lower provider reimbursement rates for the PCIP program. Also, in January 2013, CCIIO instituted benefit changes that shifted more costs onto PCIP enrollees, including by increasing enrollee coinsurance from 20 percent to 30 percent in many states. Due to growing concerns about the rate of PCIP spending, in February 2013, CCIIO suspended PCIP enrollment to ensure the appropriated funding would be sufficient to cover claims for current enrollees through the end of the program. Officials told GAO that if spending trends begin to indicate that funding will not be used as quickly as they are projecting, they could reinstate PCIP enrollment to use remaining funds.
FAA's planning for the midterm includes improvements based on existing technologies that respond to recommendations made in 2009 by the RTCA task force. The agency seeks to demonstrate tangible NextGen benefits to build industry support and encourage future needed investments from airlines and others to complete the transformation of the air-traffic control system. Industry investments can be significant; for example, FAA estimates that it would cost $260,000 in 2011 dollars to equip—or $525,000 to retrofit—a commercial aircraft with a Required Navigation Performance (RNP) package, which allows precision curved flight paths. In 2012, 50 percent of the domestic commercial aircraft fleet was RNP equipped. In 2011, RTCA reported that 80 percent of the airline fleet at high-density airports might need to be RNP equipped to accrue significant benefits for operators. In total, FAA estimates that airlines will need to invest $6.6 billion—of the estimated $18.1-billion overall implementation cost shared between airlines and FAA—on avionics through 2018 to realize the full potential benefits from NextGen capabilities. The RTCA task force and the NAC work group identified priority operational improvements that could provide substantive benefits and are viewed as feasible to implement between now and the end of 2018, and we grouped these into three improvement areas:

Performance Based Navigation (PBN), which uses satellite-based guidance to route aircraft and improve approaches at airports. The two main types of PBN procedures, Area Navigation (RNAV) and RNP, vary in the level of precision guidance they can provide.

Enhanced airborne and surface traffic management, which includes tools that help air traffic controllers merge and sequence planes in the air and on the ground.

Additional or revised aviation safety standards, such as those that establish the minimum required distances between aircraft in the air or minimum visibility distances to the ground. 
These changes are made possible by leveraging advances in technology and are anticipated to maintain or enhance safety. FAA and the aviation industry have emphasized the interrelated nature of NextGen's many components (see fig. 1). Although NextGen improvements in each of these three areas offer some benefits when implemented individually, they achieve the greatest benefits when integrated, according to FAA officials, air traffic controllers, and other industry stakeholders, including airline representatives. Through 2018, FAA's implementation of key NextGen operational improvement areas is focused on 30 core airports and key air-traffic control facilities. These air-traffic control facilities include terminal radar approach control (TRACON) facilities and the 20 traffic control centers that manage en route traffic throughout the NAS. In an effort to help FAA prioritize the implementation of NextGen, in 2012 the NAC work group identified seven priority multi-airport metroplexes based on an assessment of operational need (see fig. 2). Because of the integrated nature of the NAS, improvements or changes to a portion of the airspace or at one airport can affect other parts of the system. A number of offices within FAA, including ATO, the Office of Aviation Safety, and the Aeronautical Navigation Product Group (AeroNav Products), are involved in the management and implementation of NextGen, as well as the NextGen Office, which oversees implementation and reports directly to the Deputy Administrator. The NextGen Office is tasked with linking NextGen's strategic objectives with operational requirements in an effort to ensure integration and implementation across FAA program offices. The NextGen Office includes a Performance and Outreach Office that is tasked with providing information on implementation progress, enabling successful collaboration and decision making with internal and external stakeholders, and reporting on performance measurements. 
At present, the position heading the NextGen Office is vacant, which is further discussed later in this report. FAA's Office of Environment and Energy develops and coordinates policy relating to NextGen's environmental impact, including noise and emissions. In 2011, this office developed a new NextGen National Environmental Policy Act (NEPA) Plan to help ensure timely, effective, and efficient environmental review of proposed NextGen improvements. To address the majority of current flight delays throughout the NAS, the RTCA task force identified the implementation of new PBN procedures as a high-priority initiative. Requests for new air-traffic control procedures, including PBN procedures, can come from a variety of sources, including airlines, airports, Congress, or individual air-traffic control facilities. According to FAA, there are core steps and processes that are common to the development of most procedures and involve a number of offices within the agency. ATO designs and develops procedures and conducts environmental reviews. According to FAA officials, environmental reviews typically take from 30 days to 2 years, depending on project factors such as the presence of sensitive environmental resources (e.g., national parks) and the potential for significant impacts such as noise or emissions. ATO also helps implement new procedures once they have been published by providing needed documentation or training to air traffic controllers. The Office of Aviation Safety establishes design criteria for procedures and conducts safety testing, such as flight simulation testing that includes controllers and pilots. It also grants operations approval and certification for aircraft equipment used to fly air-traffic control procedures. AeroNav Products tests new procedures against design criteria and includes new procedures on published charts for pilots. 
AeroNav Products also maintains existing procedures, which must be reassessed every 2 years to ensure that they still meet design criteria and that conditions have not changed in ways that would make them unsafe. For example, new construction may result in taller structures around an airport or other changes that affect minimum altitude requirements. The other priority operational improvements, including those related to airborne and surface traffic management and enhanced standards, are largely managed by offices within ATO and the Office of Aviation Safety. Under NEPA, if a proposed action qualifies for a categorical exclusion, then the agency generally need not prepare an environmental assessment or environmental impact statement. For the development of new flight procedures, FAA assesses the potential environmental impacts of proposed changes, including changes to carbon dioxide emissions and noise levels for communities below the new or changed routes. New or revised routes above 3,000 feet above ground level (AGL) typically qualify for categorical exclusion in the absence of extraordinary circumstances. Additionally, the FAA Modernization and Reform Act introduced two new categorical exclusions, one of which categorically excluded RNAV and RNP procedures below 3,000 feet AGL at core airports and certain other airports, absent extraordinary circumstances. Extraordinary circumstances would include significant increases in noise over noise-sensitive areas (e.g., homes, schools, hospitals) under the new or changed flight path. Noise screening and carbon dioxide emissions analysis are required for procedures from 3,000 feet AGL to 7,000 feet AGL for arrivals and up to 10,000 feet AGL for departures. Noise screening may be required up to 18,000 feet AGL for special resources, such as national parks or wilderness areas. 
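The altitude-based screening thresholds just described can be sketched as a simple decision function. This is a rough illustration only: the function name, phase labels, and return strings are invented, and actual NEPA determinations weigh many additional factors beyond altitude.

```python
def nepa_review_level(altitude_ft_agl: float, phase: str,
                      special_resource: bool = False) -> str:
    """Hypothetical sketch of the AGL-based screening thresholds described
    in the text; not an actual FAA decision tool."""
    if altitude_ft_agl < 3_000:
        # Closer to the ground, significant noise/emissions impacts are possible.
        return "more review may be required (potential for significant impacts)"
    if special_resource and altitude_ft_agl <= 18_000:
        # Special resources such as national parks or wilderness areas.
        return "noise screening may be required (special resources)"
    # Screening band: up to 7,000 ft AGL for arrivals, 10,000 ft for departures.
    limit = 7_000 if phase == "arrival" else 10_000
    if altitude_ft_agl <= limit:
        return "noise screening and CO2 emissions analysis required"
    return "typically categorically excluded (absent extraordinary circumstances)"

print(nepa_review_level(5_000, "arrival"))
print(nepa_review_level(8_000, "arrival"))
```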
For changes closer to the ground, below 3,000 feet AGL, more environmental review may be required because of the potential for significant noise or emissions increases. Figure 3 illustrates the appropriate level of NEPA review needed for actions at various heights AGL. FAA is concentrating its operational improvement efforts at key airports and metropolitan areas and focusing primarily on PBN procedures, including in its Optimization of Airspace and Procedures in the Metroplex (OAPM) initiative and another effort in the Seattle metropolitan area called "Greener Skies over Seattle" (Greener Skies). Increasing the number and use of PBN procedures is viewed as a way to accelerate the delivery of benefits, such as fuel savings, to airlines in some of the most congested metroplex areas. To deliver benefits more quickly and avoid some obstacles that have hampered prior NextGen efforts, FAA has made trade-offs in selecting sites and the scope of proposed improvements, concentrating on those projects that can demonstrate some benefits in the midterm and leaving more time-consuming but potentially higher benefit-yielding projects for the longer term. The agency has also made some progress in the other key operational improvement areas, such as upgrading airborne traffic management to enhance the flow of aircraft in congested airspace and revising standards to enhance airport capacity. By contrast, FAA has made more limited progress enhancing surface traffic management at airports, which will likely limit overall benefits in the midterm. Finally, there has been little integration of key operational improvements, which limits the potential benefits offered by any single improvement as well as the potential impacts on the NAS. FAA's primary effort to implement new PBN procedures in the midterm is the OAPM initiative, which focuses on priority metroplexes with airport operations that have a large effect on the overall efficiency of the NAS. 
This initiative is also designed to provide benefits to airlines and airports in those metroplexes. If OAPM proceeds as planned, FAA expects to begin to demonstrate benefits at the eight currently active sites by the end of 2015. (See fig. 4.) Projects at five additional sites are expected to be fully operational before the end of 2017, according to current FAA plans. With the exception of the Houston project, each OAPM project has about a 3-year implementation time frame, which includes 12 to 18 months for the environmental assessment process. OAPM focuses primarily on implementing PBN procedures—long viewed as a cornerstone of NextGen—and any necessary airspace redesign for deployment of the new procedures. PBN provides a foundation for flight paths, airspace design, route separation, and obstacle clearance. (See fig. 5 for an illustration of these procedures.) Potential PBN benefits include shorter, more direct flight paths, reduced aircraft fuel burn—and resulting reductions in carbon dioxide emissions—and reduced noise in surrounding communities. The following are the key types of PBN procedures. RNAV procedures, which are enabled by technology available on nearly all commercial aircraft in the United States, provide aircraft with routing flexibility and more efficient flight paths than conventional procedures, and can allow improved access to airports in congested airspace or in bad weather. In 2011, 96 percent of the domestic commercial fleet was equipped for RNAV. As of January 10, 2013, there were over 12,500 public RNAV procedures available for all aircraft capable of flying them. RNP procedures (a subset of RNAV) add onboard aircraft performance monitoring and alerting and require additional equipment as well as specialized crew training. 
In some cases, RNP can increase aircraft access to airports in adverse weather and terrain and help air traffic controllers keep aircraft operations at one airport from interfering with aircraft operations at adjacent airports by using curved flight paths. As mentioned above, in 2012, approximately 50 percent of the domestic commercial fleet was equipped for RNP. As of January 10, 2013, 352 public RNP procedures were included on charts for pilots. Optimized Profile Descent (OPD) procedures allow aircraft to descend from cruise altitude to final approach more efficiently, eliminating or reducing the level-offs or step-downs of a traditional descent. Low or idle engine power settings save fuel and reduce emissions. OPD procedures also require less dialogue between air traffic controllers and pilots, which may improve safety by reducing the potential for miscommunication. According to reports by teams planning OAPM implementation, OAPM's benefits for airspace users will stem mostly from implementing OPD procedures enabled through the use of RNAV and from the resulting reductions in fuel use and associated fuel costs. (See fig. 6.) For example, FAA projected that shorter routes and OPDs at its eight active sites could save at least 29 million gallons of fuel and reduce carbon dioxide emissions by 299,000 metric tons annually when fully implemented. In turn, improved efficiency and predictability at key metroplexes is expected to improve the efficiency of the NAS. FAA estimated total annual benefits resulting from new OAPM procedures and associated airspace changes using aircraft simulators. As previously mentioned, all eight active sites are predicted to begin demonstrating benefits in the 2013 through 2015 time frame. 
To achieve the time frames of its OAPM initiative, FAA has made trade-offs, summarized below, between procedures that yield some benefits and can be implemented relatively quickly and those that could result in greater benefits but would take much longer to implement. Excluded new procedures that would require route changes below 3,000 feet AGL or very close in to the airport. Although all new flight procedures require NEPA review, those deemed to have extraordinary circumstances such as significant environmental impacts—including a significant increase in noise or emissions around an airport—would require a full environmental impact statement, which can take several years to complete. By excluding changes closer to the airport, FAA is seeking to avoid the lengthy environmental reviews that have delayed some other FAA efforts. For example, we previously reported that FAA's airspace redesign effort in the New York, New Jersey, and Philadelphia area has provoked significant community opposition, including legal challenges to the environmental review process used by FAA. That effort, which began in 1998, is currently scheduled for completion in 2016. Representatives from airlines, equipment manufacturers, and industry associations that we spoke with acknowledged that there could be additional efficiency benefits from new PBN procedures closer to the airport. For example, new procedures that allow tight turns on arrival into the airport can reduce flight times and associated fuel use and costs, facilitate the flow of air traffic flying into or out of different airports in a metroplex, and increase the predictability of flight schedules. Nonetheless, most of these stakeholders did not believe that these potential additional benefits warranted the longer project time frames that would be necessary to complete more detailed environmental reviews. Excluded new procedures that would require new design criteria. 
FAA officials explained that having to wait for new design criteria for procedures could jeopardize the OAPM time frames. Outside of the OAPM initiative, FAA acknowledged that its staff had at times initiated work on a requested PBN procedure only to discover that the design criteria—which ensure the safety of procedures—do not yet exist for the desired procedure. FAA officials stated that new design criteria would be needed to more widely deploy PBN procedures, but that effort is being undertaken independently of the OAPM initiative. Excluded sites with ongoing airspace redesign projects. Concerns about potential implementation delays also factored into FAA’s decision about which metroplexes to address in the midterm. Some industry stakeholders have voiced concerns that FAA did not include in its current OAPM plans the New York/Philadelphia metroplex, which is the nation’s most congested airspace and contributes to over half of domestic flight delays. However, FAA decided to exclude that metroplex in light of the Record of Decision for the existing environmental reviews for FAA’s ongoing airspace redesign work for that area, because the agency did not want to initiate a new environmental review process. In addition to the OAPM initiative, FAA has other PBN initiatives that aim to deliver midterm benefits in less congested areas. For example, FAA’s Greener Skies project aims to deliver benefits to the Seattle metroplex beginning in 2013, with a new set of PBN procedures planned for implementation in March 2013, and was shaped by local considerations. Greener Skies was initiated by Alaska Airlines in collaboration with Boeing, other airlines, and the Port of Seattle, which operates Seattle-Tacoma International Airport. Greener Skies became an FAA-sponsored NextGen initiative in 2010. The procedures are designed to shorten flight tracks and route aircraft over water.
FAA estimates that the new Greener Skies procedures would reduce fuel consumption by 112,420 barrels annually, resulting in potential annual savings of $13.5 million. To facilitate implementation of the project, a number of potentially beneficial procedures were scoped out of the Greener Skies effort based on local concerns. For example, FAA officials and other Greener Skies participants stated that new procedures to the east of the airport were not included because of known community concerns about new aviation noise in those areas and to avoid any changes that could violate noise commitments made in a recent Record of Decision. In addition to Greener Skies, FAA also has non-OAPM PBN efforts in place in Denver and Minneapolis. According to analysis done by Alaska Airlines, of the 27 RNP charts that are carried by the airline’s flight crews, 5 of the routes in Alaska were flown more than 40 percent of the time, while at least 11 of the routes in the lower 48 states were flown less than 1 percent of the time, in part because requests to fly the routes were denied a number of times by air traffic controllers. Southwest Airlines has expressed similar concerns about not being able to obtain projected benefits of new PBN procedures. For example, in 2011 the airline reported that its usage of RNP procedures had dropped, in part, because approval to use existing procedures was often not granted by air traffic controllers. Some controllers told us that using new PBN procedures can be difficult for a number of reasons, including a lack of guidance and tools, which will be further discussed later in this report. Finally, in some cases, pilots prefer to fly traditional routes—particularly if the PBN route is longer or less efficient than a shortcut that may be approved by an air traffic controller on a traditional procedure when conditions allow it. According to FAA officials, when conditions do not allow for such shortcuts, pilots can use the PBN procedures.
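FAA's Greener Skies estimate (112,420 barrels saved, $13.5 million in annual savings) implies a particular jet-fuel price. A back-of-envelope sketch, not from the report, assuming the standard 42-gallon U.S. barrel:

```python
# Illustrative back-of-envelope on FAA's Greener Skies estimate.
barrels_saved = 112_420       # projected annual fuel reduction
annual_savings = 13_500_000   # projected annual dollar savings
GALLONS_PER_BARREL = 42       # standard U.S. barrel (assumption)

price_per_barrel = annual_savings / barrels_saved
price_per_gallon = price_per_barrel / GALLONS_PER_BARREL
print(f"Implied jet-fuel price: ${price_per_barrel:.0f}/barrel "
      f"(${price_per_gallon:.2f}/gal)")
```

The implied price of roughly $120 per barrel (about $2.86 per gallon) is in line with jet-fuel prices of the early 2010s, suggesting the two figures are mutually consistent.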
FAA does not currently have a system to track the use of PBN procedures, and is unable to provide information on the extent to which existing procedures are either unused or are used on a limited basis. There are currently no automatic data collection systems to track the use of procedures, either on the aircraft flying the routes or at the air-traffic control facilities managing those aircraft. FAA officials stated that current efforts to track the use of procedures through pilot reporting have been hindered by insufficient and unreliable data. Without a way to systematically measure the use of particular procedures, the agency may not recognize routes that need to be revised to ensure that airlines are able to get expected benefits such as reduced fuel use or improved access in bad weather. As we have previously reported, critical success factors for goal setting and performance management by leading organizations include systematically measuring their performance to guide goal-setting, managerial decision-making, resource allocations, and day-to-day operations. Through 2018, FAA is focusing on updating its Traffic Management Advisor (TMA), which is an airborne arrival-sequencing program that assigns times when aircraft destined for the same airport should cross certain points in order to reach the destination airport at a specific time and in an efficient order. TMA can enhance the effectiveness of new PBN procedures, particularly when controllers are mixing traffic using different types of procedures and aircraft with different levels of equipment (e.g., RNP equipped mixed with non-RNP equipped aircraft). For example, controllers can better merge aircraft on conventional, straight flight paths with those on PBN curved approaches and obtain a clearer picture of traffic on the ground and in terminal airspace when TMA is used with surface management tools (see fig. 7). 
Currently, TMA is primarily used for arrival sequencing by certain air-traffic control facilities at times when the demand for arrivals exceeds the capacity of a specific airspace or airport. The upgrades could allow for TMA to be used more often, for more purposes (e.g., sequencing aircraft further away from the airport), and at additional facilities. Many of the active OAPM teams recommended upgrades to TMA’s capabilities for their respective air-traffic control facilities; such upgrades could provide significant benefits to priority metroplexes, as well as at core airports faced with similar congestion issues. For example, the North Texas OAPM study team recommended separating the traffic to Dallas/Fort Worth International Airport from Dallas Love Field’s airport traffic more efficiently, largely through TMA upgrades. North Texas OAPM team members told us that implementation of the procedures they are developing would depend principally on planned TMA upgrades. Likewise, the Houston OAPM study team recommended that arrival procedures it identified for the Houston metroplex to increase efficiency be managed using an enhanced TMA system. FAA has worked to align its plans for upgrading the TMA system with issues and concerns raised by OAPM teams. For example, FAA plans to launch a new time-based metering capability for PBN, which could help facilitate the corresponding launch of new PBN procedures in four priority metroplexes. As part of its upgrades to airborne traffic management, FAA is also deploying a system to improve air traffic management in metroplexes by sharing information between adjoining air-traffic control facilities. The new system allows these facilities to share information and workload. It has been deployed at air-traffic control facilities for key metroplexes—including Atlanta, Los Angeles, Newark, and San Francisco—and FAA plans to complete the installation at at least three other sites by the end of 2014.
Finally, FAA is implementing a system that will enable the sequencing of aircraft further from the destination airport—current sequencing typically occurs at the border between the enroute and terminal airspace. This system is to be implemented at one site in 2014, and the agency plans to subsequently install it at all others, but locations and time frames have not been specified. FAA-led surface-traffic management enhancements are not expected to begin to be implemented until 2015 at the earliest, mostly through the greater use of automated departure-queue management programs that are already in place at a number of metroplex airports. These existing programs include queue-management programs currently in use at John F. Kennedy International Airport (JFK) in New York and in Memphis, for example, which allow pilots to put their aircraft into a virtual departure queue before leaving the gate or ramp area instead of taxiing out and waiting on the runway for takeoff. Fuel savings, reduced taxiway congestion, and enhanced safety are among the benefits. FAA is working to determine which surface-traffic management capabilities to implement through testing with air traffic controllers, airlines, general aviation, and airport operators. While the airports have not yet been identified and the capabilities are still being determined, according to FAA officials, the agency tentatively plans to complete the rollout of enhanced surface-traffic management improvements by the end of the midterm. FAA is also developing a new surface-management capability system, the Terminal Flight Data Manager, but does not plan to implement it until at least 2017, which will likely limit potential midterm benefits. According to FAA, reductions in surface traffic congestion largely depend on the new system’s implementation.
Further, the Terminal Flight Data Manager is also key to producing desired efficiency benefits—that is, increasing arrivals and departures at busy metroplex airports where demand for runway capacity is high or where there are multiple runways with conflicting traffic. As a result, the agency will not be able to manage traffic throughout all phases of a flight—referred to as “end-to-end metering”—until these improvements are completed, including the integration of TMA enhancements with the Terminal Flight Data Manager. Originally envisioned for the midterm, end-to-end metering is now scheduled to be implemented in the long term (by 2025). FAA has already implemented systems to increase the safety of surface traffic. To improve safety for taxi and surface movement at airports, for example, FAA installed Airport Surface Detection Equipment–Model X (ASDE-X)—a ground monitoring system—for 35 major airports from 2003 to 2011. FAA is also installing Airport Surface Surveillance Capability (ASSC) at another 9 busy and complex airports. These operational improvements were prioritized by the RTCA task force and enhance safety and traffic flow on runways, taxiways, and some ramps and allow for collaborative decision making among air traffic controllers and pilots. FAA has recently approved a few revisions to existing standards, which should benefit a handful of airports in the midterm, but further revisions are required before the envisioned efficiency and capacity benefits of midterm NextGen improvements can be fully realized. A key component of FAA’s NextGen plans involves updating separation and other flight safety standards to better accommodate modern aircraft and advances in technology. Separation standards—required minimum distances used for safely spacing aircraft from other aircraft, terrain, and objects—have a large effect on airport capacity and the overall capacity of the NAS.
Consequently, according to FAA and industry stakeholders, updating separation and other standards could increase the efficiency, capacity, and predictability of the NAS. By contrast, if standards are not updated to facilitate the use of new technologies and procedures, projected NextGen benefits might not be achievable to the same degree. Such revisions to standards can be time intensive because safety assessments are required to ensure the safety of the changes. Figure 8 provides examples of key additional or revised standards that FAA is pursuing through 2018. Recent work completed by FAA’s Closely Spaced Parallel Operations working group could soon provide benefits to one large metroplex airport and several smaller airports. In 2008, the working group initiated a series of research studies to investigate the potential for reducing runway separation standards—the required distance between runway centerlines for simultaneous use—to provide increased arrival and departure capacity in all weather conditions. After a lengthy safety assessment, the working group determined in 2011 that this standard could be reduced from 4,300 feet to 3,600 feet. FAA is proceeding with the implementation of the new standard. According to FAA officials, once this new standard is implemented, it will benefit four airports immediately. FAA’s 2012 NextGen Implementation Plan indicates that such reductions in runway separation standards should improve overall capability on runways, especially during poor weather conditions, but does not provide any quantitative estimates of benefits from this new standard. FAA has also recently revised standards in two key metroplex areas in an effort to increase capacity and efficiency. In October 2011, FAA implemented a new standard that decreases the required angle of divergence between aircraft using RNAV departure procedures on the same or parallel runways at Hartsfield-Jackson Atlanta International Airport—the busiest airport in the NAS.
According to FAA, this reduction has given controllers the ability to allow between 8 and 12 additional aircraft to depart the airport every hour and is expected to save airlines $10 million annually from reduced fuel burn on taxiways. Throughout 2011 and 2012, FAA implemented several revised standards at San Francisco International Airport (SFO) that FAA officials said could improve airport efficiency. An FAA official and industry representatives who participated in this initiative noted that these revisions should help address capacity issues at the airport created by regional wind and fog patterns. Revised standards include a lower visibility minimum for certain types of approaches, as well as departures. FAA also increased the use of the airport’s optimal runway configurations during various wind conditions. FAA has had varying success in integrating its NextGen implementation efforts, and stakeholders see opportunities for additional integration to better deliver benefits in the midterm. In 2010, the NAC approved of FAA’s plans to focus its early OAPM efforts on new PBN procedures and airspace changes to expedite the delivery of benefits for operators, but suggested that FAA incorporate additional operational improvements— such as revised standards—into future OAPM efforts. In 2012, the NAC recommended that FAA incorporate into future OAPM efforts additional midterm operational improvements, such as enhanced airborne and surface-traffic management tools and other capabilities to enhance the capacity of metroplex areas. FAA has coordinated the development of PBN procedures with the implementation of airborne-traffic management tools in some OAPM projects when study teams identified improvements that would facilitate the implementation or usage of new PBN routes, but this integration has not been systematic for current OAPM efforts. 
For example, in response to a request from the Northern California OAPM team, FAA has added the San Francisco metroplex to the list of metroplexes that will receive an upgraded TMA system, which would allow the enroute center to manage traffic in concert with those air-traffic control facilities that manage surrounding airspace. The consideration of other non-PBN improvements, however, has been done at the discretion of OAPM teams—rather than being included as a goal of the overall OAPM effort—and has been largely limited to enhancements to TMA. More broadly, FAA’s obstacles study pointed to the lack of airborne management tools as a key obstacle to the use of existing PBN procedures, including tools that help air traffic controllers sequence aircraft and better predict and visualize the flight trajectories of aircraft on PBN procedures. These tools are needed to fully use RNP curved approaches in congested metroplex areas, according to the study. One such tool will not be operationally available until 2016, according to FAA’s 2012 NextGen Implementation Plan, and the plan did not clearly indicate how or where this capability would be rolled out. Stakeholders have raised concerns that the lack of some key tools will slow the potential benefits of PBN efforts, including those associated with the OAPM initiative. Likewise, as mentioned above, the rollout of surface-traffic management improvements is scheduled to begin in 2015 at a few airports, which may hinder FAA’s ability to deliver the full benefits of its other improvement efforts, including PBN. As noted above, FAA’s current operational improvement efforts have involved certain trade-offs to achieve some near- and midterm benefits, in large part, because of the context within which these improvements are being made. FAA has long-established processes and requirements in place that have made the U.S. airspace among the safest in the world.
A number of those processes are, however, complex, lengthy, and at times, unclear as they relate to new technologies, procedures, and capabilities. FAA has a number of efforts under way to help overcome previously identified, overarching obstacles to NextGen implementation, such as streamlining processes and updating the air traffic controller handbook and procedure design criteria. Many of these efforts are scheduled to take a number of years, particularly when proposed changes must be evaluated to ensure that they will maintain, if not enhance, the system’s current level of safety. Some, such as those aimed at increasing stakeholder involvement in planning and implementation of PBN procedures, do not, however, fully address previously identified obstacles. Nor do they change FAA’s overall approach to identifying potential PBN procedures for development or amendment, which relies on requests from airlines and other stakeholders without determining their impact on improving efficiency in the NAS. Finally, continued uncertainty about the FAA’s leadership of NextGen affects the agency’s ability to manage and oversee the various improvements and efforts needed to achieve the full implementation of NextGen. FAA and others have identified the process for developing PBN and other new flight procedures as a challenge. For example, in 2009 the RTCA task force recommended streamlining the operational approval and certification processes for new flight procedures. Likewise, an FAA report described the existing process as a bundle of interconnected, overlapping, and sometimes competing processes.
It also found variations and contradictions in existing guidance on procedure development and implementation, which result in a process that is “far from optimal, frequently generates rework, and on occasion results in the implementation of low- or no-benefit procedures.” To address these challenges, FAA initiated the Navigation Lean (NAV Lean) initiative to focus on streamlining the implementation and amendment processes for all flight procedures, releasing a report with planned improvements in 2010. FAA anticipates that the initiative will cumulatively cut 40 percent off the time needed to implement new procedures (assuming a full environmental impact statement is not required), though it acknowledged that it will be difficult to measure actual time saved. ATO and the Office of Aviation Safety share responsibility for overseeing the initiative, which began with the identification of overarching issues that negatively affect procedure implementation efficiency. The NAV Lean working groups identified nine issue areas with 21 associated recommendations, which focus, among other things, on minimizing the workload and delayed implementation associated with minor amendments of procedures, amending agency guidance to clarify and promote preparation of focused environmental assessments, and overcoming challenges to the development and implementation of criteria for flight procedures. (See fig. 9.) According to the NAV Lean implementation plan, all planned improvements are scheduled to be completed from 2013 through 2015. FAA envisions that some are likely to produce benefits soon after implementation. However, FAA has acknowledged that it will have difficulty setting a baseline from which to measure many of the NAV Lean improvements. 
Agency officials, for example, told us that it would not be possible to determine how long the current PBN procedure implementation process takes both because the process varies for each effort and because agency databases do not track the amount of time taken for individual steps in the process. They explained that the more than 40 percent cumulative NAV Lean timesaving estimate was developed by asking officials the amount of time they expected to save in the procedure development process. In February 2013, FAA reported that it had made progress on all but one of the recommendations and had completed work on three recommendations, including a recommendation regarding the use of focused NEPA reviews in some circumstances. However, it is too early to determine outcomes associated with the implementation of these recommendations such as developing more procedures in less time. As part of addressing concerns about the length of its environmental review process, FAA released guidance on preparing concise and focused environmental assessments for proposed FAA actions (including new procedures) in January 2011. Lengthy environmental reviews have been identified as an obstacle to the timely implementation of PBN by FAA and others. Environmental considerations were frequently not addressed until late in the procedure development process. The NAV Lean working group found that previous FAA guidance on the preparation of environmental assessments did not adequately address circumstances where the environment analysis could be more narrowly focused on only certain potential environmental impacts. In those circumstances, FAA offices should be preparing environmental assessments that consider all impact categories for applicability and significance, but focus the analysis only on the impact categories (e.g., noise) where there is potential for significant impacts caused by the proposed action (i.e., procedures). 
FAA anticipates that for small, non-OAPM projects involving one airport, “focused” environmental assessments could potentially take from 3 to 6 months, with a cost of $300,000 or less. For more complex OAPM projects—involving multiple airports and the assessment of numerous new flight procedures—focused environmental assessments generally will have 12- to 18-month time frames. By contrast, FAA officials estimate that non-focused environmental assessments traditionally take 6 months to 2 years for new flight procedures and cost $300,000 to over $1 million. Although FAA has used focused environmental assessments for other types of proposed agency actions, FAA is first applying the new guidance to procedure-related actions for projects in Houston and Denver. Thereafter, the agency intends to use the new guidance at select OAPM sites (i.e., based on their complexity, number of potential environmental impacts, local considerations, and where proposed changes would not qualify for a categorical exclusion under NEPA), and will apply this approach to other projects as appropriate. FAA is also working to enhance or integrate several environmental screening and modeling tools—by including fuel burn analysis in its noise screening tools and incorporating environmental screening into a traffic simulator used to design PBN procedures. These screening tools allow procedure developers to evaluate environmental implications early in the design process and determine the potential for extraordinary circumstances that would warrant environmental assessments rather than categorical exclusions. FAA has also been developing a new tool—the Aviation Environmental Design Tool—to facilitate its environmental assessment process. 
The FAA reauthorization act included a second new categorical exclusion for new PBN procedures that would result in measurable reductions in fuel consumption, carbon dioxide emissions, and noise, on a per-flight basis, as compared to aircraft operations that follow existing procedures. (Pub. L. No. 112-95, § 213(c)(1), (2), 126 Stat. 49 (2012).) FAA’s existing environmental analyses, however, measure such impacts cumulatively for all flights, and FAA has not yet identified an approach for such per-flight assessments. According to FAA officials, no currently available methodology resolves the technical problems involved in making such a determination, so the agency has not applied this new categorical exclusion. FAA officials have requested NAC’s input on how to address these technical challenges. Another NAV Lean recommendation would remove minor amendments from the regional and national prioritization processes. This would allow FAA to make minor changes to existing—but potentially underused—RNAV arrival and departure procedures more expeditiously. This could be an efficient and cost-effective way for FAA to increase PBN usage. While NAV Lean does not assign one FAA office responsibility for developing and implementing new procedures, implementation of several NAV Lean recommendations will provide additional tools to allow for better coordination among ATO, the Office of Aviation Safety, AeroNav Products, and others involved in the process. In 2012, FAA’s obstacles study pointed to the lack of an accountable FAA office for the development of PBN to oversee a coherent design, development, production, and implementation strategy for new procedures. FAA is developing a web-based system to allow each interested party to access procedure designs and suggest improvements or mitigate potential problems throughout the development process. This is expected to result in a more cohesive procedure-development process when implemented in 2015.
Furthermore, NAV Lean efforts are also intended to strengthen the role of the United States Instrument Flight Procedures Panel to improve coordination among parties responsible for the development and implementation of procedure design criteria. The RTCA task force and the NAC work group have pointed to the importance of prioritizing the implementation of key operational improvements, including focusing on the most appropriate PBN options such as RNAV or RNP. FAA officials said that they are in the early stages of developing a toolbox for those requesting new procedures, which would match solutions to identified problems and allow the agency to better target its efforts. FAA does not currently assess individual procedure requests—which can be made by a number of parties, including airlines—to determine if the proposed new procedures would generate expected benefits or resolve problems for airports or airspace. Rather, once a request for a new PBN procedure is received it is prioritized for development as requested on a first-come, first-served basis. Requests for new RNP procedures do not currently trigger an assessment by FAA (or by the requester) of the potential to use a less-costly option to resolve the underlying problem or gain the expected benefits. FAA officials noted that, in some cases, new RNAV procedures could be used by more aircraft and pilots than the more precise RNP curved routes that were being or had been developed. Over 90 percent of commercial aircraft are equipped for RNAV procedures—which also allow for curved flight paths—and the RTCA task force recommended that FAA should focus on leveraging RNAV and reserve RNP for locations where tighter turns are needed. Industry stakeholders have argued that third parties could play a greater role in the development of flight procedures, a move that would help FAA respond to the current demand for new PBN procedures in the face of limited agency resources. The FAA reauthorization act called for the
agency to establish a program for qualified third parties to develop, test, and maintain flight procedures. In May 2012, FAA awarded a $2.8 million contract to GE’s Naverus and a partner to develop two RNP approach procedures each at five mid-sized airports. The contractors are to design, evaluate, and maintain these RNP approaches and be responsible for providing environmental data and analysis to FAA to support categorical exclusions and for drafting any required NEPA reviews, for review and approval by FAA. According to FAA officials, the pilot project will allow FAA to assess the potential for third parties to have an expanded role in helping address those PBN procedures that FAA, because of a lack of resources, may be unable to address. FAA has made progress in recent years in developing a framework plan for leveraging third-party procedure developers and overseeing them. The potential of third-party procedure development may be limited, however, given that there are currently only two third-party procedure developers—GE’s Naverus and Boeing’s Jeppesen—that are eligible to develop public RNP approach procedures. The use of third-party developers may also be more costly than in-house FAA development and maintenance of procedures. FAA officials estimate that new RNP procedures cost $58,100 on average to develop, conduct safety testing, and implement—and $2,300 per year to maintain—when these efforts are undertaken in house. This total is significantly less than the $280,000 average cost for each of the 10 procedures that are being developed by the third party, although these FAA procedure-development costs do not include additional expenses for any NEPA reviews above a categorical exclusion. If an environmental assessment is required, then FAA costs could exceed $58,100, as the cost of conducting a focused environmental assessment can range from $0 to $300,000.
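The per-procedure cost figures above can be put on a common footing with a simple lifecycle comparison. A sketch, not from the report, assuming a 10-year horizon and that maintenance is bundled into the $280,000 third-party average (both assumptions for illustration):

```python
# Illustrative lifecycle comparison of the per-procedure cost figures cited above.
# Assumptions: 10-year horizon; third-party maintenance bundled into $280,000.
YEARS = 10
in_house = 58_100 + 2_300 * YEARS  # FAA development/testing plus annual maintenance
third_party = 280_000              # average per procedure under the pilot contract

print(f"In-house, 10-year:    ${in_house:,}")
print(f"Third-party, 10-year: ${third_party:,}")
print(f"Difference:           ${third_party - in_house:,}")
```

Even over a decade, the in-house figure (about $81,100) remains well below the third-party average, though the gap narrows if a focused environmental assessment (up to $300,000) is required for the in-house effort.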
Despite efforts to streamline the development of flight procedures, FAA does not have a process to proactively identify those PBN procedures that would best further NextGen goals. Much of the work done by the RTCA task force and the NAC work group has focused on prioritizing improvements, and the identification of needed new routes might prove beneficial in easing congestion in the NAS and key airspace, or in solving local problems that have ripple effects across the NAS. OAPM was designed, in part, to fill this void, but for airspace or airports that are not included in the initiative, FAA depends on stakeholders to initiate requests. Once requests are made, however, they are added to the procedure-development queue, and are not assessed against other PBN procedures in the queue to determine their respective potential to benefit the NAS or to resolve problems at specific airports. Further, requests may be driven by where a requesting airline flies and not where new procedures are most needed. In 2012, Airlines for America, an airline trade organization, led an effort—called 20/20—to identify those 20 new procedures most wanted by airlines, as well as the 20 procedures they viewed as most in need of amendment. Four of the identified new procedures were at airports included in the OAPM initiative, so participating airlines agreed that FAA should address those new procedures through ongoing airspace redesign efforts. Of the remaining 16 identified procedures, 13 were found to already be under development by FAA. Similarly, for procedures needing revision, FAA found that 13 of the 20 identified procedures were already in the process of being revised. In the absence of a procedure-development tracking tool, such as is being developed as part of NAV Lean, airlines were not able to monitor FAA’s procedure development process for these routes. 
In response to the 20/20 effort, FAA agreed to track the development of the 16 desired new routes on its website, although it is not tracking on the website those procedures that were identified for revision. Without a systematic means to identify procedures that are most critical to achieving NextGen goals and sharing information about its plans and progress in developing needed new procedures, it will be difficult for FAA to provide reasonable assurance that its efforts are efficiently delivering benefits. OAPM or similar efforts may present an opportunity to assess the utility of some existing, but underused, conventional air-traffic control routes in a more efficient, systematic way. FAA maintains more than 22,000 PBN and conventional procedures in the NAS, and the agency is looking to cancel underused or redundant flight procedures. As noted, these procedures cost $2,300 or more per year to maintain and may be used only occasionally, if at all. To identify unneeded routes, in a 2011 report, the Flight Safety Foundation proposed a process to identify 800 such procedures for potential elimination, representing a 12 percent reduction in the total number of ground-based approach procedures and a 4 percent reduction in the total number of procedures. Identifying these procedures for decommissioning could result in savings of approximately $1.8 million per year—or about $18 million over 10 years—in maintenance costs. An official with AeroNav Products pointed out that when OAPM teams assess current needs within a metroplex’s airspace, they are ideally positioned to identify some of the existing procedures that could be decommissioned, although they are not currently tasked with assessing the continued utility of existing routes. Once good candidates for route decommissioning are identified, FAA could further assess these routes and begin the public-notification process that leads to decommissioning. The lack of design criteria can impede the development of new procedures.
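The decommissioning savings above follow directly from the per-procedure maintenance figure. A minimal sketch of that arithmetic, for illustration:

```python
# Illustrative arithmetic behind the decommissioning estimate above.
candidate_procedures = 800    # Flight Safety Foundation's proposed candidates
maintenance_per_year = 2_300  # minimum annual maintenance cost per procedure

annual_savings = candidate_procedures * maintenance_per_year
print(f"Annual savings:  ${annual_savings:,}")       # roughly $1.8 million
print(f"Over 10 years:   ${annual_savings * 10:,}")  # roughly $18 million
```

Because $2,300 is described as a floor ("$2,300 or more per year"), the $1.8 million annual figure is a conservative lower bound on the potential savings.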
FAA’s obstacles study, for example, notes that AeroNav Products is often unable to design a requested procedure because the criteria for the procedure have yet to be developed by the Office of Aviation Safety. According to the report, this lack of a coherent design, development, production, and implementation strategy slows down the process and creates frustration among air traffic controllers and system users, such as airlines. FAA officials in the Office of Aviation Safety responsible for developing PBN criteria told us that their units have made progress in recent years in updating the design criteria to better use the capabilities of PBN to respond to requests for new procedures. They also told us that they are currently focused on clarifying and consolidating all the PBN criteria into one document to make it easier for air traffic controllers and others to use, and have in place several efforts related to specific design criteria, such as updating the criteria for holding—a maneuver used to delay an aircraft already in flight—for RNAV and RNP procedures. However, officials in the Office of Aviation Safety and other FAA officials acknowledged that much work remains to be done to develop new criteria before PBN can be deployed nationwide. Several officials also acknowledged that it can be difficult to meet user requests for new PBN design criteria given variations in terrain and changing technology, especially because the safety tests that are often required for changing or amending the design criteria can be time- and labor-intensive. RTCA has pointed to the potential for changes in the way FAA tracks and assesses errors made by air traffic controllers—notably losses of separation, where aircraft come into closer proximity than allowed—to encourage closer adherence to existing standards by eliminating incentives to add buffers between planes. For example, the RTCA task force recommended and FAA implemented a non-punitive reporting system for losses of separation.
(For more information about FAA’s new systems for assessing losses of separation caused by controllers, see GAO, Aviation Safety: Enhanced Oversight and Improved Availability of Risk-Based Data Could Further Improve Safety, GAO-12-24 (Washington, D.C.: Oct. 5, 2011).) As part of the Greener Skies initiative, stakeholders are also assessing separation standards, including those for procedures to parallel runways. Potential solutions are then forwarded to FAA for consideration, such as a 2011 proposal that would better leverage the safety benefits of PBN to change certain separation standards for the use of parallel runways based on safety assessments conducted by the Greener Skies team. FAA’s primary effort to address issues with the air traffic controller handbook is also part of the Greener Skies initiative. The Greener Skies team has identified 95 needed changes in FAA orders and regulations to date to address obstacles that have contributed to limiting the usage of PBN procedures. FAA’s obstacles study noted that the lack of standard language for controllers and pilots for certain types of PBN procedures could create uncertainty in communications; addressing this would require a change to the handbook. Officials we interviewed at a Seattle-area air-traffic control facility acknowledged that they had known for years before the Greener Skies project began that the handbook was outdated. According to these officials, although FAA has published many PBN routes throughout the NAS, from a controller’s perspective, there were few rules in place for using those procedures. For example, under the current handbook there is little guidance on how to safely give less than the standard separation for merging planes—as is often done for traditional procedures in clear weather conditions—even if the aircraft are on precise paths. The separation standards heavily influence the guidance in the controller handbook, because much of a controller’s responsibility is to keep safe distances between aircraft.
According to FAA’s obstacles study, these concerns have, in some cases, led controllers not to approve the use of PBN procedures. FAA and others have also pointed to the need for additional training of air traffic controllers as a potential obstacle to the use of PBN procedures, and FAA’s obstacles study suggested developing a national training plan for PBN operations. While we did not look at the extent of training provided when PBN procedures are implemented at individual airports, the larger-scale initiatives in our review have included time and resources for controller training. For example, OAPM plans dedicate from 9 to 15 months to the implementation phase, which includes controller training. Officials with the North Texas team told us that some new OAPM procedures, such as OPDs, would require significant changes from the way local controllers traditionally managed aircraft, so adequate training would be especially important for the successful implementation of their OAPM procedures. Likewise, officials from the Seattle-area TRACON noted that it had taken them about 2 months to develop the controller training for using new Greener Skies procedures. FAA is making progress in systematically involving industry stakeholders, air traffic controllers, and other key subject matter experts in its initiatives, including OAPM and Greener Skies, as well as surface-traffic management initiatives. As we have previously reported, collaboration has been an ongoing challenge for FAA. For example, officials with the Port Authority of New York and New Jersey told us that the failure to include controllers early in the procedure design process for the airspace redesign effort for the New York, New Jersey, and Philadelphia area—some of the most complex and congested airspace in the world—contributed to the 4-year-plus implementation delay, because some proposed routes had to be amended following controller input.
Accordingly, we and others have made numerous recommendations that FAA collaborate better with key stakeholders to facilitate the implementation of NextGen and enhance results. Many of these key stakeholders are also involved in other efforts to improve capacity in the NAS, such as the development of new or expanded runways, which are or will be pursued concurrently with NextGen. FAA officials, local controllers, airline officials, and others generally agreed that FAA has made significant progress in recent years in its ability to collaborate to achieve results. For example, FAA officials and industry stakeholders emphasized that OAPM is highly collaborative, as the study teams and design teams include local air traffic controllers and airline officials, FAA officials with experience in airspace redesign and other fields, environmental specialists, and others. The following are among the anticipated benefits of this collaborative approach. Enhances PBN usage: A number of FAA officials and air traffic controllers told us that FAA now recognizes that new procedures are much less likely to be used if controllers are not involved in the design. New procedures developed without controller input may not be feasible from an operational or safety perspective, and controllers may not see that the new routes are advantageous. Controllers serving the Seattle metroplex told us that their level of involvement in Greener Skies was more extensive and occurred earlier than in any previous procedure or airspace project. According to FAA officials and airline representatives, the inclusion of airline stakeholders in the design process also helps keep industry informed and involved and helps ensure that the proposed procedures can be flown by participating airlines.
Addresses community concerns: We have previously reported that the inclusion of airports in PBN procedure development and other projects can help address potentially adverse environmental—often noise-related—community impacts, since these entities often have primary responsibility for addressing community concerns and are likely more familiar than FAA with the airport’s environmental impacts and the surrounding communities. According to best practices established by ACRP regarding community involvement in airport projects, trust and respect are the keys to a long-term relationship between stakeholders—in this case between FAA and airport representatives, who are responsible for addressing community concerns about airport-related noise. While FAA has made progress involving airports in NextGen projects, several FAA officials, a representative of Airports Council International–North America (ACI-NA), and officials from several airports said that FAA is not fully leveraging the expertise of airport officials about local community concerns, although the ACI-NA representative noted that FAA has begun to involve airports earlier as the OAPM effort has continued. Airport officials in one OAPM metroplex told us that FAA had not adequately included them in early planning for new PBN routes. Consequently, the airport hired environmental consultants to analyze, among other things, the potential noise impacts of proposed PBN procedures and submitted its concerns to FAA. In addition, although the Port of Seattle was initially involved in designing procedures for Greener Skies, airport officials told us that they were concerned that FAA had not included them during the environmental assessment process or in conducting local outreach. The project has raised some community concerns about aircraft noise from new flight paths, and residents of some neighborhoods have expressed concern that FAA had not clearly explained the potential noise impact on them.
New aviation noise is one of the largest obstacles to NextGen implementation, according to FAA officials and others. It can be difficult to address community concerns about aviation noise, but FAA may be able to mitigate such concerns by involving airport officials more closely in procedure design and community outreach efforts. FAA officials involved in another OAPM team, for example, noted that local airport officials, who were not included in initial route planning for the metroplex, later provided information about potential community impacts that FAA had not anticipated. Information provided by FAA on establishing OAPM study teams, however, does not include guidance on the timely involvement of airport representatives on these teams, where such involvement is appropriate; rather, the information indicates that OAPM teams should brief airport authorities as the process continues. This is in contrast to the best practices established by ACRP, which state that educating—in this case, briefing—interested stakeholders after the fact is not sufficient for effective involvement; rather, proactive involvement is required. A collaborative approach for NextGen that involves key stakeholders, such as airport officials, would better position FAA to fully leverage those stakeholders’ expertise, help identify possible solutions, and facilitate implementation of NextGen improvements. Although the RTCA task force and NAC work group did not make recommendations regarding NextGen organizational issues, FAA has, more broadly, struggled to have the leadership in place to manage and oversee NextGen implementation. In the past, industry stakeholders have expressed concerns about the fragmentation of authority and lack of accountability for NextGen, two factors that could delay its implementation.
Leading practices of successful organizations reflect that programs can be implemented most efficiently when managers are empowered to make critical decisions and are held accountable for results. To ensure accountability for NextGen results, several stakeholders suggested that an office was needed that would report directly to the FAA Administrator or the Secretary of Transportation. FAA has made organizational changes in the past in an effort to address these concerns. Beginning in 2011, FAA made additional changes to its NextGen organizational structure to address NextGen leadership issues. Specifically, FAA reorganized the structure of the office responsible for carrying out NextGen implementation, moving the office from within the ATO to under FAA’s Deputy Administrator. According to FAA, this change increased NextGen’s visibility within and outside the agency and created a direct line of authority and responsibility for NextGen. In addition, in February 2012, the FAA reauthorization act designated that the Director of JPDO—who is responsible for NextGen planning and coordination—report directly to the FAA Administrator and created a new leadership position—the Chief NextGen Officer. While these changes indicate a positive step towards addressing accountability issues, FAA continues to work to fill NextGen leadership positions. As of February 2013, FAA had not yet made all the organizational changes called for by the FAA reauthorization act. The Administrator has indicated that the new Deputy Administrator will serve as Chief NextGen Officer and that a search is on for qualified candidates for both the Deputy Administrator and Assistant Administrator of NextGen positions. The Administrator, who was sworn in to the office in January 2013, has not yet clearly defined the relationship between the JPDO Director and other NextGen officials.
Appointing a new Deputy Administrator to also serve as Chief NextGen Officer and concluding the candidate search for the Assistant Administrator of NextGen position would better position FAA to resolve these remaining leadership challenges. FAA has made some progress developing performance metrics, which we recommended that the agency do in 2010. The NAC recommended in 2011 that FAA adopt performance areas used by ICAO and, as of February 2013, FAA had adopted 6 of the 11 ICAO performance areas. FAA provided information about 5 of these performance areas—Environment, Safety, Efficiency, Capacity, and Cost Effectiveness—and the metrics associated with these areas. Performance metrics for the sixth performance area—Predictability—are being developed. As we have reported in the past, having performance measures is important because they allow an agency to track its progress in achieving intended results and develop contingency plans if milestones to complete tasks are not met, both of which can be particularly important during the implementation stage of a new program. Performance metrics would also enable stakeholders, such as airlines, to hold FAA accountable for results, as well as to make their own business decisions about whether to invest in equipment needed to enable the use of NextGen technologies and procedures. FAA is currently conducting an agency-wide effort to review and harmonize its performance metrics to bring order, consistency, and accuracy to metrics reporting across its lines of business. The agency began this effort to address several problems, including managing and monitoring an increasing number of metrics and inconsistent metric names and definitions. Once the harmonization is complete, ATO will create a website to display the harmonized metrics, which, according to FAA officials, will provide information for many FAA activities, including the implementation of NextGen.
The ongoing modifications to performance metrics must be completed before FAA can establish baselines from which it can measure progress. Baselines are essential to compare past performance to current performance. For some established metrics for which FAA already has extensive data, establishing a baseline is not expected to be a challenge. By contrast, establishing a baseline for new metrics for which FAA has not yet collected data may present challenges and is expected to take time. In addition to responding to the NAC recommendations, FAA is developing additional NextGen performance metrics in response to the FAA reauthorization act. FAA was mandated to establish and, beginning in 2013, to track 12 performance metrics to measure progress in implementing some NextGen capabilities and improvements. Although these new reauthorization metrics do not clearly link to the existing NextGen key performance areas mentioned above, some of the reauthorization metrics are similar to and reflect the same information that is already expected to be measured. According to FAA officials, 7 of the reauthorization performance metrics are already established; however, the agency faces some challenges in developing the 5 remaining metrics. For example, FAA is working with the NAC to identify a technically feasible way to measure and report on the amount of fuel used between key city-to-city (city-pair) markets—one of the required new metrics. It is not known at this time whether these key city pairs will include some or all locations where midterm NextGen operational improvements are being implemented. FAA has made minimal progress in developing goals for NextGen, which we also recommended in our 2010 report. FAA’s Destination 2025 report establishes cross-cutting agency-wide goals for the midterm, although these are not all related to the implementation of NextGen.
Agency officials emphasized that Destination 2025 goals were intended to be aspirational and that FAA business plans, which are developed by each individual business office, would provide NextGen targets and goals. However, agency officials in the NextGen Office acknowledged that individual business offices are still developing their respective targets. When FAA provided information to us in January 2013 about its efforts to align goals and performance metrics, the goals included in Destination 2025 were used as the source for many of the included metrics. As we reported in 2010, having goals and performance measures in place will enable FAA to provide stakeholders, interested parties, Congress, and the American people with a clear picture of where implementation stands at any given time, and whether the technologies, capabilities, and operational improvements that are being implemented are resulting in positive outcomes and improved performance for operators and passengers. Thus, we continue to believe that fully addressing our 2010 recommendations has merit. (See app. II for more information about performance areas and metrics.) FAA has begun to report on implementation progress and benefits at certain airports and metroplexes, as well as for some capabilities, but implementation and benefits information is incomplete. In March 2012, FAA made publicly available the NextGen Performance Snapshot website to provide post-implementation performance data. The website is designed to emphasize the link between NextGen investments and benefits. To date, information on the website provides performance progress on the near-term implementation of some, but not all, locations and initiatives where FAA has implemented NextGen capabilities. As of January 2013, the website had information on established metrics for three performance areas—efficiency, environment, and access. Efficiency is reported at the core 30 airports; environment and access are reported at the NAS-wide level. For example, the NextGen Performance Snapshot website reports some efficiency data, such as the average number of minutes that it takes flights to taxi in and taxi out at each of the core 30 airports, and environmental data, such as NAS-wide noise exposure data for the U.S. population. In the absence of specific NextGen targets, we looked to track FAA’s progress against the NextGen-related goals in Destination 2025 on FAA’s NextGen Performance Snapshot website, but found it difficult to do so. Information presented was in many cases neither in the same format nor on the same scale as the goals in Destination 2025. For example, one goal in Destination 2025 is to improve throughput at core airports during adverse weather by 14 percent through 2018, but the NextGen Performance Snapshot website did not include this information either as an average or for individual core airports. Likewise, another Destination 2025 goal is to reduce the amount of fuel burned per miles flown by at least 2 percent annually—which corresponds to international objectives accepted by ICAO—but the information provided showed changes in the cumulative amount of fuel burned per kilometers flown. According to agency officials, the NextGen Performance Snapshot website is currently undergoing improvements that will include more meaningful measures and additional reporting levels—such as metroplex and key city-pair views—to more fully demonstrate progress at core airports, prioritized metroplexes, and across the NAS.
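The mismatch noted above between the annual per-mile goal and the cumulative per-kilometer reporting can be illustrated with a short sketch. The 2 percent annual rate is the report’s; the baseline fuel figure below is hypothetical and used only for illustration:

```python
import math

# Illustration of why a cumulative per-kilometer figure is hard to compare
# directly with an annual per-mile reduction goal of at least 2 percent.
ANNUAL_RATE = 0.02
KM_PER_MILE = 1.609344

def cumulative_reduction(years: int, rate: float = ANNUAL_RATE) -> float:
    """Cumulative reduction implied by compounding an annual rate."""
    return 1 - (1 - rate) ** years

# After 5 years at 2 percent per year, the cumulative figure is about 9.6
# percent, so a cumulative number is not on the same scale as the annual goal.
print(f"{cumulative_reduction(5):.1%}")  # 9.6%

# A percentage change, by contrast, is unit-free: converting per-mile values
# to per-kilometer divides baseline and current by the same constant, which
# cancels out of the relative change.
baseline = 10.0                                  # hypothetical gallons per mile
current = baseline * (1 - ANNUAL_RATE) ** 5
reduction_mi = 1 - current / baseline
reduction_km = 1 - (current / KM_PER_MILE) / (baseline / KM_PER_MILE)
print(math.isclose(reduction_mi, reduction_km))  # True
```

The unit conversion from miles to kilometers is therefore not the obstacle; the obstacle is comparing a cumulative change against an annual target without knowing the reporting period.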
An updated NextGen Performance Snapshot website has the potential to help stakeholders—such as airports and airlines—and the public understand the progress and some benefits occurring at various airports and metroplexes across the nation in a more systematic way, as well as providing a link between these benefits and the investments made in NextGen by FAA and others. FAA is also developing a PBN assessment tool—the PBN dashboard—to enable FAA to assess the PBN capabilities at individual airports and within the NAS. According to FAA officials and the contractors developing the dashboard, it will be able to measure PBN usage and impacts and changes to conditions (fleet, equipage, etc.). Representatives from one airline we spoke with said that they currently do not collect the information on procedure usage that FAA needs for the dashboard. In response to industry stakeholders’ aforementioned concerns and perceptions that some published PBN procedures have provided limited benefits or have not been sufficiently used, FAA had undertaken some analyses to determine how often published PBN procedures are being used. This analysis of flight track data for airspace around some airports showed that more aircraft were following the airborne routes of published PBN procedures than was being reported by airlines or air-traffic control facilities. However, the analysis was unable to determine whether the aircraft were actually flying the published PBN procedure or merely following the same track on the conventional procedure. FAA officials stated that the dashboard could help FAA better assess the extent to which the fleet is able to use existing procedures. The OAPM study team for South/Central Florida used the dashboard to determine the percentage of operations at each airport in the metroplex that would benefit from proposed new procedures. 
It is unclear to what extent the dashboard will be used to measure the impact of improvements or assess progress toward overarching NextGen goals. FAA officials do not plan to use the dashboard to proactively identify additional needed procedures at individual airports or make the dashboard available to external stakeholders, such as airlines, that may want to identify additional needed procedures. In a 2011 report on gaps in the business case for NextGen equipage, RTCA found that FAA’s implementation plans do not give operators the information they need, such as how the fleet is currently equipped (RTCA, NextGen Equipage: User Business Case Gaps (Sept. 2011)). For example, RTCA noted that FAA’s long-range implementation plans should provide information on the rollout of RNP procedures at specific airports—the type of information that would be useful for airlines that are considering investing in this technology. However, RTCA found that the plans lacked such information. Nor do FAA implementation plans identify the criteria by which additional sites would be selected for demonstration projects. Without greater certainty on when and where NextGen improvements are planned, airlines and others are unlikely to invest in the equipage (and conduct the associated staffing and training) that will help achieve the full benefits of NextGen implementation. FAA has estimated that total industry equipage could cost $6.6 billion—compared to $11.5 billion in NextGen implementation costs for FAA—through 2018. Deciding whether to invest in most of that equipage is at each airline’s discretion. The implementation of NextGen is expected to enhance safety, improve efficiency, and reduce the environmental costs of aviation. Achieving the benefits of NextGen is a collaborative task that not only relies on timely and reliable information on progress implementing NextGen, but also depends heavily on airlines’ and other stakeholders’ continued or increased investments in NextGen technology and training.
The improvements included in NextGen plans are often interrelated, with benefits in one area dependent on the full implementation of other operational improvements. FAA does not have a system for systematically tracking the use of existing PBN procedures. As a result, FAA cannot provide assurance that investment in these routes is worthwhile or that the benefits justify the cost to develop and maintain them. In the absence of data on the use of existing PBN routes, airlines and other stakeholders remain unconvinced that the investments needed for the full implementation of NextGen will be justified. Such information could help the agency demonstrate the value of PBN technologies and any resulting benefits, as well as allow the agency to identify routes that need to be revised to increase their use. Without a process for proactively identifying new PBN procedures based on NextGen goals and targets, requests for new PBN procedures largely originate from outside FAA. While the agency has attempted to work with industry stakeholders, such as airlines in the 20/20 effort, to identify needed routes, results have been mixed. The use of criteria to proactively identify needed routes at individual airports, such as the criteria used by the NAC to prioritize metroplexes, could enable FAA to identify routes that maximize benefits for the NAS. Furthermore, FAA does not assess the requests it receives to determine whether the requested route or type of procedure (e.g., RNAV or RNP) maximizes potential benefits. Since requestors, such as airlines, may have their own reasons for requesting routes at certain locations or using specific technologies, their requests may not correspond with NextGen goals or result in the most efficient use of resources for PBN implementation or vis-à-vis the needs of other users. The NAC work group recommended that FAA develop an integrated approach to increase airspace efficiency in key metroplexes, including OAPM sites.
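One way criteria-based screening of procedure requests could work is sketched below. This is purely illustrative, in the spirit of the NAC’s metroplex prioritization: the request fields, criteria, and weights are hypothetical and are not FAA’s actual method.

```python
# Illustrative sketch of ranking procedure requests by NAS-wide benefit.
# All data, criteria, and weights here are hypothetical.
from dataclasses import dataclass

@dataclass
class ProcedureRequest:
    airport: str
    daily_operations_affected: int   # flights that could use the route
    est_fuel_savings_gal: float      # per-flight estimate
    resolves_nas_bottleneck: bool    # ripple effects beyond the airport

def score(req: ProcedureRequest) -> float:
    """Weighted score in [0, 1]: higher means a stronger benefit case."""
    s = 0.5 * min(req.daily_operations_affected / 100, 1.0)
    s += 0.3 * min(req.est_fuel_savings_gal / 50, 1.0)
    s += 0.2 * (1.0 if req.resolves_nas_bottleneck else 0.0)
    return s

requests = [
    ProcedureRequest("AAA", 120, 40.0, True),
    ProcedureRequest("BBB", 30, 10.0, False),
]
# Rank requests so development effort goes to the highest-benefit routes.
for r in sorted(requests, key=score, reverse=True):
    print(r.airport, round(score(r), 2))
```

The point of such a scheme is that requests would be evaluated against common, goal-linked criteria rather than in the order received, so that a route with narrow benefit to one requester does not displace one with NAS-wide effects.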
While FAA has consistently emphasized the importance of integrating key operational improvements to maximize NextGen benefits, FAA has primarily focused its midterm efforts on PBN and has not systematically integrated airborne- and surface traffic management and revised standards into these efforts. FAA officials explained that non-PBN improvements were not systematically included in the first round of OAPM, in part, to achieve OAPM time frames. However, with the implementation of some first round improvements, as well as progress made in developing and deploying some non-PBN improvements, FAA is better positioned to systematically integrate PBN and other improvements going forward. Insufficient integration of key improvements decreases midterm NextGen benefits, since these benefits are interdependent. Furthermore, by not including the identification of unused flight routes for decommissioning in OAPM and similar efforts, FAA could be missing an opportunity to leverage the expertise of participating stakeholders. Decommissioning unused or little-used conventional, non-PBN procedures could allow FAA to make better use of its resources by reducing maintenance costs. FAA has made progress in recent years in ensuring the inclusion of stakeholders in NextGen efforts, especially air traffic controllers. Some airport officials, however, expressed concern that FAA had not fully involved them in current efforts or involved them too late in the process, although a representative with ACI-NA noted that FAA has recently begun to involve airports more significantly in NextGen design and implementation efforts. However, FAA has not developed guidelines for the timely and consistent inclusion of these stakeholders. Some FAA officials told us that they had not fully appreciated the potential value that airport officials could provide. 
A collaborative approach that involves key stakeholders in a timely manner—including the agency, airport officials, air traffic controllers, and airlines—enables FAA to fully leverage the expertise of these stakeholders, helps identify the best possible solutions, and facilitates the implementation of those improvements. FAA has made some progress in developing and aligning performance metrics and goals since we recommended these actions in 2010. It is important for FAA to complete this work to measure progress and demonstrate benefits across the NAS, gain stakeholder confidence, and engender the investments needed to support the full implementation of NextGen. Furthermore, RTCA and others have pointed to the importance of having stable, long-term implementation plans for NextGen capabilities and determining specific location benefits and implementation dates, but FAA’s NextGen implementation plans do not detail such deployment information. As a consequence, airlines and other stakeholders have been reluctant to invest in expensive avionics, including RNP equipage.
To help ensure that NextGen operational improvements are fully implemented in the midterm, we recommend that the Secretary of Transportation direct the FAA Administrator to take the following five actions:

- work with airlines and other users to develop and implement a system to systematically track the use of existing PBN procedures;
- develop processes to proactively identify new PBN procedures for the NAS, based on NextGen goals and targets, and evaluate external requests so that FAA can select appropriate solutions;
- require consideration of other key operational improvements in planning for NextGen improvements, including PBN projects at metroplexes such as OAPM, as well as the identification of unused flight routes for decommissioning;
- develop and implement guidelines for ensuring timely inclusion of appropriate stakeholders, including airport representatives, in the planning and implementation of NextGen improvement efforts; and
- ensure that NextGen planning documents provide stakeholders with information on how and when operational improvements are expected to achieve NextGen goals and targets.

We provided the Department of Transportation (DOT) with a draft of this report for review and comment. DOT responded by email and did not agree or disagree with our recommendations, but provided technical clarifications, which we incorporated into the report as appropriate. We are sending copies to the appropriate congressional committees, the Secretary of Transportation, and interested parties. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me on (202) 512-2834 or at dillinghamg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III.
Our objective was to assess the Federal Aviation Administration’s (FAA) progress implementing key Next Generation Air Transportation System (NextGen) operational improvements in the midterm and demonstrating benefits from these improvements. To do so, we addressed the following questions: 1) What key operational improvements is FAA pursuing to deliver NextGen benefits with existing technologies through 2018? 2) To what extent is FAA addressing known obstacles to the implementation and usage of NextGen operational improvements? 3) To what extent is FAA measuring and demonstrating midterm NextGen benefits and assessing outcomes? To address these three questions, we reviewed our prior reports and met with FAA officials with a role in implementing NextGen, including officials from the NextGen Office, the Office of Aviation Safety, the Air Traffic Organization (ATO), Aeronautical Navigation Products (AeroNav Products), and the Office of Environment and Energy. We reviewed FAA planning documents for NextGen, including the 2012 NextGen Implementation Plan and work plans for individual lines of business within FAA, as well as FAA reports and briefings related to ongoing NextGen efforts, including the Optimization of Airspace and Procedures in the Metroplex (OAPM) initiative, and FAA process and procedure documentation.
We also interviewed aviation stakeholders and experts with knowledge and experience related to NextGen implementation: representatives from industry associations, including RTCA, Airlines for America, and the Airports Council International–North America; airlines, including Alaska Airlines and Southwest Airlines, which have both advocated for the increased use of Performance Based Navigation (PBN) procedures; airports involved in OAPM efforts in North Texas and Southern California, in the Greener Skies over Seattle (Greener Skies) initiative, and in surface improvement efforts for airports in New York and New Jersey; avionics and aircraft manufacturers and other aviation vendors, including Boeing, Honeywell, and Raytheon; and air traffic controllers with the National Air Traffic Controllers Association (NATCA) and at individual air-traffic control facilities, including facilities involved in the OAPM effort in North Texas and the Greener Skies initiative. To assess the status of FAA’s implementation of key operational improvements and the potential benefits to be achieved, and identify challenges to the full implementation of those key operational improvements, we assessed FAA implementation progress for operational improvements that were recommended by RTCA’s Midterm Implementation Task Force (RTCA task force) in 2009 and those that were prioritized by the Integrated Working Capabilities Work Group of the NextGen Advisory Council (NAC work group) in 2012. (See table 1.) RTCA’s recommendations are the basis for a number of FAA’s policy, program, and regulatory decisions, and have been incorporated into FAA’s current NextGen implementation plans. Likewise, the NAC—which includes representatives from industry and FAA’s senior leadership— advises FAA on the implementation of NextGen. 
The recommendations made by the RTCA task force and NAC work group represent consensus views in the aviation community regarding which operational improvements FAA should prioritize and where those improvements should be implemented in the midterm—through 2018—but do not include all operational improvements in FAA’s implementation plans. They are limited to those improvements that use existing technologies. We grouped these operational improvements into three key improvement areas for midterm NextGen implementation: 1. Performance Based Navigation (PBN), 2. enhanced airborne and surface traffic management, and 3. additional or revised aviation safety standards. Table 1 provides a listing of the operational improvements recommended by the RTCA task force and NAC work group. Operational improvements are grouped by the implementation portfolios used by FAA in its planning documents. To determine how FAA is addressing known obstacles to the implementation of NextGen operational improvements, we identified obstacles and challenges to developing, implementing, or fully using key NextGen improvements primarily from findings and recommendations made by the RTCA task force and an FAA study on obstacles to PBN implementation. To obtain information about FAA efforts to address these obstacles, we reviewed agency reports and documents, including FAA’s report on efforts to streamline the process for developing and implementing flight procedures, and spoke with officials from relevant program offices and facilities, including environmental review specialists and air-traffic control facilities. To assess agency progress toward addressing these obstacles and identify ongoing challenges, we spoke with industry experts and stakeholders, including airport officials, airline representatives, avionics manufacturers, members of the NAC work group and the Performance Based Operations Aviation Rulemaking Committee, and air traffic controllers. 
We also assessed certain FAA efforts against established criteria, including best practices for stakeholder involvement from the Airport Cooperative Research Program (ACRP) and for organizational goal-setting and performance measurement. (See GAO, NextGen Air Transportation System: FAA’s Metrics Can Be Used to Report on Status of Individual Programs, but Not of Overall NextGen Implementation or Outcomes, GAO-10-629 (Washington, D.C.: July 27, 2010), which assessed FAA’s progress in developing performance goals and metrics against criteria established in GAO, Tax Administration: IRS Needs to Further Refine Its Tax Filing Season Performance Measures, GAO-03-143 (Washington, D.C.: Nov. 22, 2002).) We also interviewed NAC officials. To evaluate the consistency and meaningful output that would be provided by the NextGen key performance areas, metrics, and measures, we compared and analyzed information that was provided in FAA agency-wide reports and metrics documentation, the NextGen Performance Snapshot website, and the FAA reauthorization act. We reviewed FAA reports, NextGen business case documentation, and the publicly available information on NextGen implementation and expected benefits. We also interviewed industry stakeholders, including representatives from airports, airlines, and equipment manufacturers, to assess the extent to which available information builds confidence and buy-in toward full NextGen implementation. Finally, we compared available information with best practices for performance plans. We conducted this performance audit from November 2011 through April 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

The following table describes FAA performance areas and metrics. Although some of the performance metrics contained in this table have been established, FAA may continue to refine these metrics to ensure that the measurements align with agency targets and goals. FAA also has a few new metrics under development, and the agency is working to identify a technically feasible way to implement them.

In addition to the individual named above, Ed Laughlin, Assistant Director; Russ Burnett; Jessica Bryant-Bertail; Aisha Cabrer; Tim Guinane; Bert Japikse; Delwen Jones; Molly Laster; and Josh Ormond made key contributions to this report.
FAA, collaborating with other federal agencies and the aviation industry, is implementing NextGen, an advanced technology air-traffic management system that FAA anticipates will replace the current ground-radar-based system. At an expected cost of $18 billion through 2018, NextGen is expected to enhance safety, increase capacity, and reduce congestion in the national airspace system. To deliver some of these benefits in the midterm, FAA is implementing operational improvements using available technologies. Delivering midterm benefits could build support for future industry investments, but a task force identified obstacles, such as FAA's lengthy approval processes. GAO was asked to review FAA's midterm NextGen efforts. GAO examined (1) key operational improvements FAA is pursuing through 2018, (2) the extent to which FAA is addressing known obstacles to the implementation of NextGen operational improvements, and (3) the extent to which FAA is measuring and demonstrating midterm benefits. GAO reviewed FAA documents, as well as the task force's recommendations to FAA, and interviewed FAA and airport officials and aviation experts. The Federal Aviation Administration (FAA) is pursuing key operational improvements to implement the Next Generation Air Transportation System (NextGen) in the "midterm," which is 2013 through 2018. These improvements focus on establishing Performance Based Navigation (PBN) procedures at key airports, but benefits could be limited in the midterm. PBN uses satellite-based guidance to improve air-traffic control routes (known as "procedures"). These procedures can deliver benefits to airlines, such as fuel savings and increased efficiency, particularly in congested airspace. To deliver benefits more quickly, FAA made trade-offs in selecting sites and in the scope of proposed improvements. For example, FAA is not implementing procedures that will trigger lengthy environmental reviews. 
These trade-offs, with which airlines and other stakeholders generally agree, will likely limit benefits from these PBN initiatives early in the midterm. FAA has also made some progress in other key operational improvement areas, such as upgrading traffic management systems and revising standards to improve aircraft flow in congested airspace. However, FAA has not fully integrated implementation of all of its operational improvement efforts at airports. Because of the interdependency of improvements, their limited integration could also limit benefits in the midterm. FAA has efforts under way to help overcome overarching obstacles to NextGen implementation identified by an advisory task force, but challenges remain, and many of these efforts are scheduled to take a number of years. FAA efforts include, for example, a new process for focused and concise environmental reviews for some proposed actions (e.g., new procedures), where a detailed analysis of the environmental impacts is limited to only those categories involving potentially significant impacts, such as increased noise or emissions. Some of these efforts do not, however, fully address previously identified obstacles. FAA has not fully addressed obstacles to selecting new PBN procedures that will best relieve congestion and improve efficiency, for example. FAA continues to rely on requests for new procedures from airlines and other stakeholders. This reliance may or may not result in procedures that maximize benefits to the national airspace system. Not addressing remaining challenges could delay NextGen implementation and limit potential benefits. FAA has made progress in developing NextGen performance metrics, but according to key stakeholders, FAA currently provides limited data to demonstrate its progress in implementing midterm improvements and the associated benefits. 
FAA is in the process of harmonizing performance metrics across all agency programs to ensure that metrics align with agency targets and goals. However, information is incomplete on the midterm improvements and their benefits at selected airports, and airlines and others lack access to needed information to make fully informed investment decisions. FAA has developed a website to report on NextGen implementation, but published information is not fully tied to FAA's implementation goals. FAA's plans also provide limited information about future implementation, such as locations and expected benefits. Better performance and planning information would provide airlines with a stronger basis for making decisions to invest an estimated $6.6 billion on NextGen technology through 2018. FAA should, among other things, better integrate NextGen efforts; develop processes for selecting new PBN procedures; and ensure that stakeholders have needed information on NextGen progress to facilitate investment decisions. DOT did not agree or disagree with GAO's recommendations, but provided technical comments.
Within USDA, FNS has overall responsibility for overseeing the school-meals programs, which includes promulgating regulations to implement authorizing legislation setting nationwide eligibility criteria and issuing guidance. School-meals programs are administered at the state level by a designated state agency that issues policy guidance and other instructions to school districts providing the meals to ensure awareness of federal and state requirements. School districts are responsible for completing application, certification, and verification activities for the school-meals programs, and for providing children with nutritionally balanced meals each school day. The designated state agency conducts periodic reviews of the school districts to determine whether the program requirements are being met. Schools and households that participate in free or reduced-price meal programs may be eligible for additional federal and state benefits. Appendix II discusses those benefits. A graphic depicting the responsibilities of FNS, state agencies, and school districts can be found in appendix III. Children from families with incomes at or below 130 percent of the federal poverty level are eligible for free meals. Those with incomes between 130 percent and 185 percent of the federal poverty level are eligible for reduced-price meals. Income is any money received on a recurring basis—including, but not limited to, gross earnings from work, welfare, child support, alimony, retirement, and disability benefits—unless specifically excluded by statute. Table 1 below shows the annual income-eligibility guidelines in effect for a family of four during the 2010-2011 through the 2013-2014 school years. Children from families with incomes over 185 percent of the federal poverty level pay full price, though their meals are still subsidized to some extent. 
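The income tiers described above amount to a simple threshold rule against the federal poverty level. As a hedged sketch (this is an illustration, not USDA's implementation; function names are ours, and any dollar figure used in the example is assumed rather than an official guideline):

```python
# Illustrative sketch of the free/reduced-price income rule described
# above. The 130%/185% multipliers come from the text; any specific
# poverty-guideline dollar figure passed in is an assumed example.

def meal_eligibility(annual_income, poverty_guideline):
    """Classify a household's school-meals tier from annual income."""
    if annual_income <= 1.30 * poverty_guideline:
        return "free"           # at or below 130% of the poverty level
    elif annual_income <= 1.85 * poverty_guideline:
        return "reduced-price"  # between 130% and 185%
    else:
        return "full price"     # over 185% (meals still partly subsidized)
```

For instance, with an assumed poverty guideline of $23,550 for a family of four, a stated income of $30,000 would fall in the free tier and $40,000 in the reduced-price tier.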
In addition, students who are in households receiving benefits under certain public-assistance programs—specifically, SNAP, Temporary Assistance for Needy Families (TANF), or the Food Distribution Program on Indian Reservations (FDPIR)—or who meet certain approved designations (such as students designated as homeless, runaway, or migrant, or foster children) are eligible for free school meals regardless of income. School districts certify students into the school-meals programs using one of two methods—either through (1) a household application identifying household information such as income and household size, or information on participation in public-assistance programs, or (2) direct certification. During the 2012-2013 school year, 11.7 million students were certified for free or reduced-price meals through a household application, and 12.3 million students were directly certified. Once a child is certified into the school-meals program, the eligibility determination is in effect for the entire year; households are not required to inform the school if wages rise above the income-eligibility guidelines during the school year. Household application. Under the household-application method, a household submits an application provided by the school district. Since a household application is used for all members in the household, a single application can list multiple students. Schools send school-meals applications home at the beginning of each school year, but household applicants may apply at any time during the course of the year. Online applications are also available in some school districts. The applicant lists all sources of household income, the frequency with which it is received, and the names of all household members, among other information. One adult from the household signs the application, certifying that the information provided is correct. No supporting documentation—such as tax returns or pay stubs—is required at the time of application. 
In accordance with USDA guidance, school districts are not to take any actions to verify the information on the application during the certification process; they must accept the applications at face value and determine eligibility based on the information voluntarily disclosed in the application. In addition, students who are in households receiving benefits under certain public-assistance programs, including SNAP or TANF, or who meet an approved designation are categorically eligible for free school meals regardless of income. For example, students who are designated as (1) homeless, runaway, or migrant; (2) a foster child; or (3) enrolled in a federally funded Head Start Program are categorically eligible for free meals. These households must state the reason for their categorical eligibility on the application along with any applicable public-assistance identification numbers. FNS officials told us that school district officials have a responsibility to verify homeless, runaway, and migrant applications as part of the application approval process. Figure 1 below shows an example of a school-meals application and the information required to be deemed categorically eligible. Appendix IV provides a sample school-meals household application. The school district reviews data on the application, such as household size, income, or participation in an approved public-assistance program or other approved designation, and makes an eligibility determination. Starting with the 2011-2012 school year, applicants are required to provide only the last four digits of their Social Security numbers rather than the entire nine-digit number. Direct Certification. 
Children in households that receive certain public-assistance benefits—SNAP, FDPIR, or TANF—are automatically eligible for free school meals through “direct certification.” Under the direct-certification method, school districts certify children who are members of households receiving public assistance as eligible for free school meals based on information provided by the state or local agency administering those programs. Starting in the 2008-2009 school year, school districts were required to directly certify SNAP households into the school-meals programs. A student or household that meets an approved designation—such as homeless or foster children—can also be directly certified into the school-meals programs without having to complete a household application. Figure 2 below describes the household-application and direct-certification methods that households use to become certified for free or reduced-price meals. After school districts certify household eligibility for school-meals program benefits, they must annually verify a sample of household applications approved for free or reduced-price school-meals benefits to determine whether the household has been certified to receive the correct level of benefits—we refer to this process as “standard verification.” As dictated by statute, school districts are required to verify a random sample of applicants. The sample size is equal to the lesser of 3 percent of approved applications, selected from error-prone applications, or 3,000 error-prone applications, unless an alternative sample size is used. For the purposes of standard verification, the NSLA defines error-prone applications as certified applications with monthly income within $100 of—or with annual income within $1,200 of—the income-eligibility limits for free or reduced-price meals. 
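The statutory sample-size rule described above can be sketched as follows. This is a hedged illustration, not regulatory text: the function name is ours, and capping the sample at the number of error-prone applications actually on hand is our assumption.

```python
# Sketch of the standard-verification sample-size rule described above:
# the lesser of 3 percent of approved applications or 3,000 error-prone
# applications. The cap at the available error-prone pool is assumed.

import math

def standard_verification_sample(approved, error_prone):
    """Return the number of applications a district would verify."""
    three_percent = math.ceil(0.03 * approved)
    return min(three_percent, 3000, error_prone)
```

Under this sketch, a district with 10,000 approved applications would verify 300, while a very large district would hit the 3,000-application ceiling.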
Households that indicate categorical eligibility on an application and households that enter the program through direct certification are generally not subject to the standard verification process. Further, as described in USDA’s eligibility manual for school meals, school districts are obligated to verify additional applications if they deem them to be questionable, which is referred to as for-cause verification. Verification—whether standard or for-cause—is conducted only for those beneficiaries receiving benefits through the household-application process; directly certified households are not subject to verification. Households selected for verification (standard or for-cause) must submit supporting documentation—such as pay stubs, benefit award letters from state agencies for benefits such as Social Security or supplemental security income, or support payment decrees from courts—to the school district, or be removed from the program. The school district reviews the information, determines whether the household’s free or reduced-price status is correct, and makes corrections, as necessary. The Improper Payments Information Act of 2002, as amended, requires agencies to identify, measure, prevent, and report their improper payment amounts and to develop and implement improper payment reduction plans, among other things. For fiscal year 2013, USDA reported that the NSLP and SBP had estimated improper payment rates of approximately 15.7 percent and 25.3 percent, respectively—equating to about $1.8 billion and $831 million. These improper payments in the NSLP and SBP were attributable to certification errors as well as counting and claiming errors, as reported for fiscal year 2013. 
To address the high improper-payment rates in the school-meals programs, among other actions, USDA worked with Congress to develop the Child Nutrition and WIC Reauthorization Act of 2004 (CNR). CNR required school districts to directly certify students that receive SNAP benefits for free meals in all school districts by the 2008-2009 school year. USDA officials told us that they are emphasizing the use of direct certification because, in their opinion, it helps prevent certification errors without compromising access. School-meals programs and SNAP have similar income-eligibility limits. Further, the application process for SNAP is more detailed than the certification process under the NSLP. Direct certification has reduced the administrative burden on SNAP households, as they do not need to submit a separate school-meals application. It also reduces the number of applications school districts must review. In commenting on a draft of this report, USDA reiterated that school districts do not have access to SNAP eligibility documents, are not required to review household SNAP applications, and therefore accept the SNAP eligibility determination at face value. FNS officials also told us that once school districts receive confirmation that a household is eligible for direct certification, they are not required to determine whether the household is still eligible throughout the year. To test the effectiveness of direct certification in identifying and preventing ineligible participants from receiving benefits, we reviewed a nongeneralizable sample of 23 households that were directly certified for free-meal benefits and found two cases where the household appeared ineligible for SNAP benefits, and therefore may have been inappropriately directly certified into the school-meals programs, as described below. 
Because these households were directly certified for school-meals benefits, the school district would not be aware of the SNAP error unless notified by the appropriate state agency. One household received SNAP benefits from October 2009 to October 2010. However, one household member started employment in March 2010 and, based upon his biweekly pay of approximately $3,300, his household of four members would no longer have qualified for SNAP benefits. The SNAP Notice of Food Benefit Extension sent to the household, dated March 7, 2010, required notification of changes to job status and pay rate within 10 days, beginning in May 2010. Based on the change of wages in March 2010, this household would not have remained eligible for SNAP benefits and thus would not have been eligible for direct certification for free school meals during the 2010-2011 school year. One household’s SNAP application incorrectly omitted a member of the household who earned income and provided financial support. Had the SNAP application included this household member’s income, the household would not have qualified for SNAP benefits; therefore, this household should not have been directly certified for free school meals during the 2010-2011 school year. We will include these instances in our referrals to USDA and the state agency administering SNAP for appropriate action, as warranted. Since passage of the CNR, the number of school districts directly certifying SNAP-participant children has continued to increase. For example, during the 2008-2009 school year, 78 percent of school districts directly certified students, and by the 2012-2013 school year, this percentage had grown to 91 percent of school districts, bringing the estimated percentage of SNAP-participant children directly certified for free school meals to 89 percent. In all states, the combined income eligibility limit for Medicaid exceeds the NSLP income eligibility limit of 130 percent of the federal poverty guideline. 
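The ineligibility finding in the first case above rests on annualizing reported pay so it can be compared with annual limits. As a hedged sketch (the conversion factors are the conventional ones; the function and table names are ours, not USDA's):

```python
# Sketch: annualize reported pay by frequency so stated income can be
# compared against annual eligibility limits. Factors are conventional.

PAY_PERIODS_PER_YEAR = {
    "weekly": 52,
    "biweekly": 26,      # e.g., the ~$3,300 biweekly pay cited above
    "semimonthly": 24,
    "monthly": 12,
}

def annualize(amount, frequency):
    """Convert a per-period pay amount to an annual figure."""
    return amount * PAY_PERIODS_PER_YEAR[frequency]
```

At $3,300 biweekly, annualized income is $85,800—well above the income-eligibility guidelines for a household of four shown in table 1.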
school year, and more are expected to participate during the 2014-2015 school year. USDA requires administering state agencies to conduct regular, on-site reviews—referred to as administrative reviews—to evaluate school districts that participate in the school-meals programs. The Healthy, Hunger-Free Kids Act of 2010 increased the frequency of these reviews from every 5 years to every 3 years. Starting with the 2013-2014 school year, state agencies are required to conduct administrative reviews at least once during a 3-year review cycle, with no more than 4 years between the reviews. (Prior to the 2013-2014 school year, these reviews were referred to as Coordinated Review Efforts. See 7 C.F.R. § 210.18.) During this process, state agencies are to determine whether free, reduced-price, and paid lunches were properly provided to eligible students, and that meals are counted, recorded, consolidated, and reported through a system that consistently yields correct claims. State agencies also review the content of the lunch—including nutrition and portion requirements—as well as the process of counting and recording meals. In commenting on a draft of this report, USDA clarified that administrative reviews include off-site procedures where state agencies evaluate the school district’s system for making eligibility determinations, including direct certification. School districts that have administrative review findings are to submit a corrective-action plan to the state agency, and the state agency is to follow up to determine whether the issue has been resolved. USDA regulations require all state agencies to report the results of administrative reviews to FNS by March 1 of each school year. FNS officials told us that as part of their oversight of state agencies, they confirm that agencies have completed the administrative reviews. We reviewed administrative review reports from the 25 school districts we selected that were completed between February 2008 and December 2012. 
Administrative review reports from 11 school districts cited some incorrect eligibility determinations. Incorrect eligibility determinations ranged from 1 to 15 per district—based on the stated information on the application. The number of incorrect determinations found in each school district was small compared to the number of applications reviewed, which ranged from 687 to 8,398. As required, these 11 school districts submitted a corrective-action plan to the state addressing how they would ensure that all meal-benefit applications are reviewed and certified based on eligibility guidelines. The state agency determines if the school district’s corrective action satisfactorily resolves the problem; the state agency cannot close the review until all identified issues have been addressed. The administrative review reports from the remaining 14 school districts in our sample did not cite any incorrect eligibility determinations. In commenting on a draft of this report, USDA told us that it makes grant funds available annually to states to fund the performance of additional administrative reviews, oversight, and training for school districts with a high level or risk of administrative errors. USDA stated that since fiscal year 2005, $4 million has been set aside annually for these grants. According to USDA, from fiscal year 2005 to 2013, FNS awarded 60 grants totaling $26.5 million. As discussed earlier in this report, school districts are obligated to verify the eligibility of applicants whose application information is deemed questionable under the “for-cause” verification process. Examples of relevant recent cases include the following: The Chicago Board of Education OIG reported that in fiscal year 2012, a cohort of highly paid and high-level Chicago Public Schools administrators falsified information on school-meals applications and the office noted the possibility of system-wide school-meals fraud. 
Specifically, the report cited 21 principals, assistant principals, and recently promoted assistant principals who understated their own income or falsified the number of household members, including leaving themselves off the applications. In July 2013, the State of New Jersey Office of the State Comptroller issued a report on fraudulent school-lunch program applications filed by public employees. The report, reviewing a sample of schools that received more than $1 million in reimbursements for school lunches in the 2010-11 school year, found a number of public employees who materially underreported their household income on school-lunch applications, including 101 public employees (elected school-board members and school-district employees among them) who provided materially false information. Further, according to the July 2013 report, numerous applicants substantially underreported the income of household members and many failed to list income-generating household members on their applications. In February 2012, USDA distributed guidance to state administrators to clarify that school districts have the authority to review approved applications for free or reduced-price meals for school-district employees when known or available information indicates school-district employees may have misrepresented their incomes on their applications. However, this for-cause verification should be used selectively and not to verify the household income of all school district employees whose children are certified for free or reduced-price meals. Under the guidance, school districts can identify children of school-district employees and use salary information available to them to identify questionable applications and then conduct for-cause verification on the questionable applications, if necessary. In August 2012, USDA also updated its school-meals eligibility manual—used by school districts to determine and verify eligibility—with this guidance. 
Our analysis of this guidance is presented below. As discussed earlier in this report, USDA regulations require that school districts conduct for-cause verification of all questionable applications. Officials from 11 of the 25 school districts told us during our interviews that they conduct for-cause verification. These officials provided examples of how they would identify suspicious applications, such as when a household submits a modified application—changing income or the household members—after being denied, or when different households include identical public-assistance benefit numbers (e.g., if different households provide identical SNAP numbers). However, officials from 9 of the 25 school districts told us that they did not conduct any for-cause verification. For example, one school-district official explained that the school district accepts applications at face value. An official from another school district said that his district does not conduct for-cause verification and added that he is not sure how to identify questionable applications. Additionally, officials from 5 of the 25 school districts told us they only conduct for-cause verification if someone (such as a member of the public or a state agency) informs them of the need to do so on a household. Although not generalizable, responses from these school districts provide insights about whether and under what conditions for-cause verifications are conducted. USDA officials stated that school districts have the obligation to conduct for-cause verification if they suspect inaccurate information, but added that staff may be hesitant to perform it because of the potential work burden it may create. USDA officials also told us that some school districts may be reluctant to conduct for-cause verification because of concerns about appearing to target certain groups of people. 
In April 2013, USDA issued a memorandum stating that effective for the 2013-2014 school year, all school districts must specifically report the total number of applications that were verified for cause. Prior to this, USDA did not collect any information on applications that had undergone for-cause verification. USDA officials told us that they will use the information to determine the frequency with which school districts conduct for-cause verification. This information is to be provided to USDA in April 2014; however, since this is the first year the information is being collected, it may take school districts additional time to finalize the reports. While school districts are to report the number of applications verified for cause, the outcomes of those verifications will be grouped with the outcomes of applications that have undergone standard verification. As a result, USDA plans to review the results to determine the frequency with which school districts conduct for-cause verification but will not have information on specific outcomes, which it may need to assess the effectiveness of for-cause verifications and to determine what actions, if any, are needed to improve program integrity. During our review of 25 households that applied for and received school-meals benefits, we identified one household that reapplied for school-meals benefits during the 2011-2012 school year less than 2 weeks after being denied benefits for not meeting the eligibility requirements. The new application removed a source of income—child support—and the household was approved for reduced-price meals. When we interviewed the applicant, she said that she could not remember if she received child-support payments at the time she resubmitted the application. This household also applied for school-meal benefits during the 2012-2013 school year. The application did not indicate child-support payments, and the household was subsequently approved for reduced-price meals. 
This household was not subjected to for-cause verification by the school district, even though a household resubmitting an application with less income a short time after being denied benefits could be a red flag indicating that for-cause verification should be conducted. While USDA has issued guidance specific to school-district employees and instructs school districts to verify questionable applications in its school-meals eligibility manual, we found that the guidance does not provide possible indicators or describe scenarios that could assist school districts in identifying questionable applications. Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1) call for agencies to design control activities to ensure that management's directives are carried out. Reviewing the data gathered on for-cause verification for the 2013-2014 school year could help USDA determine if data on the outcome of for-cause verifications should be reported separately from standard verification results. Further, as noted above, evaluating these data could help USDA determine whether additional guidance would be beneficial to assist school districts in identifying applications that should be subject to for-cause verification. Such guidance could include criteria and examples of possible indicators of questionable or ineligible applications. USDA's standard verification process—the terms of which are statutorily defined—makes it difficult to detect all households that misreport their income and that are ineligible for program benefits. It could also result in the removal of eligible beneficiaries, as households that do not respond to the verification notice are removed from the program. Electronically matching household-application information to other data sources—such as state income databases or public-assistance databases—could hold promise in identifying high-income households for validation while not disrupting program benefits to eligible households.
As described earlier in this report, with the exception of for-cause verification, standard verification is generally limited to approved applications considered "error-prone." Error-prone is statutorily defined as approved applications where stated income is within $100 of the monthly or $1,200 of the annual applicable income-eligibility guideline. Households with reported incomes that are more than $1,200 above or below the free-meals eligibility threshold and more than $1,200 below the reduced-price threshold would generally not be subject to this verification process. Figure 3 shows the income thresholds of applicants that would and would not be considered error-prone for a four-person household during the 2010-2011 school year. In addition to the nongeneralizable sample of 23 households receiving school-meal benefits through direct certification discussed in the previous section, we reviewed a nongeneralizable sample of 25 households receiving school-meals benefits through an approved application. Nineteen household applications were certified based upon their stated income and household size. Of these, we determined that 9 were not eligible for the free or reduced-price-meal benefits they were receiving because their income exceeded eligibility guidelines. Further, 2 of these 9 households stated annualized incomes that were within $1,200 of the eligibility guidelines. These two households could have been subject to standard verification had they been selected as part of the sample by the district; however, they were not selected or verified. The remaining 7 of the 9 households stated annualized incomes that did not fall within $1,200 of the eligibility guidelines and thus would not have been subject to standard verification. Figure 4 shows the results of our review.
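The statutory error-prone test described above can be expressed as a simple threshold check. The sketch below is only an illustration of that logic; the function name and the guideline amounts are hypothetical, not the actual 2010-2011 figures.

```python
# Minimal sketch of the statutory "error-prone" test: an approved application
# is error-prone when its stated income falls within $100 of the monthly or
# $1,200 of the annual income-eligibility guideline. Guideline amounts below
# are hypothetical, not actual 2010-2011 figures.

def is_error_prone(stated_annual_income, annual_guidelines, band=1_200):
    """True if stated annual income is within `band` dollars of any
    applicable annual income-eligibility guideline."""
    return any(abs(stated_annual_income - g) <= band for g in annual_guidelines)

# Hypothetical free-meal and reduced-price-meal annual limits:
limits = [29_000, 41_000]

print(is_error_prone(40_500, limits))  # within $1,200 of a limit -> True
print(is_error_prone(26_000, limits))  # more than $1,200 from both limits -> False
```

As the second call shows, an applicant who understates income well below both guidelines falls outside the error-prone band and would generally escape standard verification.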
Of the 19 households shown above that indicated eligibility based on self-reported household size and income, we determined that 9 were not eligible for the free or reduced-price-meal benefits they were receiving because their known income exceeded eligibility guidelines. For example, one household we reviewed submitted a school-meals application for the 2010-2011 school year seeking school-meals benefits for two children. The household stated an income of approximately $26,000 per year, and the school district appropriately certified the household to receive reduced-price-meal benefits based on the information on the application. However, through review of the payroll records, we determined that the adult applicant's income at the time of the application was approximately $52,000—making the household ineligible for benefits. This household also applied for and received reduced-price-meal benefits for the 2011-2012 and 2012-2013 school years by understating its income. Its 2012-2013 annualized income was understated by about $45,000. Because the income stated on the application during these school years was not within $1,200 per year of the income-eligibility requirements, the application was not deemed error-prone and was not subject to standard verification. Had this application been subjected to verification, a valid pay stub would have indicated the household was ineligible. We interviewed the adult applicant as part of our investigation, and the applicant admitted to underestimating her income. Another household in our sample submitted a school-meals application for the 2010-2011 school year—stating an income that equated to approximately $32,500 annually and a household size of five members—and was approved for free-meal benefits. However, at the time of the application, the household's annualized income was at least $60,000, making the household ineligible for free or reduced-price meals.
The household application stated an annualized income that put it within the error-prone range; however, it was not among the 3 percent sample of error-prone applications selected for verification. This household applied for school-meals benefits for the 2011-2012 school year—stating an annualized income that equates to approximately $39,600—and was approved for reduced-price meals. However, based on our review of payroll information, household income was at least $73,000 during 2011—a difference of about $33,000—making this household ineligible for free or reduced-price meals. When interviewed, the applicant said that her children completed the application and that she signed it. In another instance, a household submitted a school-meals application for the 2010-2011 school year—stating an annualized income that equates to approximately $19,200 and a household size of four—and was approved for free school-meals benefits. This application omitted a parent living in the household and earning annualized income of approximately $55,000. Had the wage-earner and his income been included, this household would not have qualified for free or reduced-price meals. This household applied for and was approved for free school-meals for the 2011-2012 and 2012-2013 school years. Again, these applications omitted the parent and his wages—which amounted to approximately $62,000 during 2011 and $64,000 during 2012. Had his income been included, the household would not have qualified for free or reduced-price meals. When interviewed, the parent said that he was not aware that his children had been receiving free school-meals benefits. Because the stated income on the application was outside the error-prone range, and the school district only verified error-prone applications during these school years, this household would not have been subject to standard verification. 
Individuals with knowledge of the program-eligibility guidelines could understate their income to avoid scrutiny, as this would likely prevent the application from being reviewed under standard verification, although for-cause verification could identify the understatement. For fiscal year 2013, USDA reported NSLP and SBP certification errors of approximately 8.8 percent and 9.5 percent as part of its improper payment estimation. As explained previously, USDA OIG noted that these estimates may be unreliable because they were based on the 2005-2006 school year and confidence levels could not be provided for subsequent years. FNS has hired a contractor to conduct a revised study for the 2012-2013 school year, which is expected to be complete in November 2014. Once a household application has been certified as eligible to receive benefits, and if it is selected for verification, school districts obtain supporting documentation from the applicant—such as pay stubs or benefit-award letters—in order to evaluate whether or not the household's free or reduced-price status is correct. Because the verification process relies on responses from applicants, it could lead to eligible children being removed from the program if the applicant does not respond to the school district's verification request. USDA told us that during the 2012-2013 school year, school districts verified approximately 203,200 applications. Of these, 43.5 percent were receiving the correct level of benefits, and approximately 23.6 percent had their level of benefits adjusted to properly reflect their eligibility based on verification. Of the applications selected for verification, 32.8 percent did not respond and were excluded from receiving free or reduced-price school meals.
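Applying the reported percentages to the roughly 203,200 verified applications implies approximately the following counts, a back-of-the-envelope calculation from the USDA figures cited above:

```python
# Approximate counts implied by USDA's reported 2012-2013 verification
# outcomes: percentages applied to the ~203,200 verified applications.
verified = 203_200
outcomes = {
    "correct benefit level": 0.435,
    "benefit level adjusted": 0.236,
    "no response (removed)": 0.328,
}
counts = {label: round(verified * share) for label, share in outcomes.items()}
for label, count in counts.items():
    print(f"{label}: ~{count:,}")
```

That is, roughly 67,000 households were dropped for nonresponse alone, more than were found to be at an incorrect benefit level.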
According to USDA guidance, school districts are not to conduct any actions to verify the information on the application during the certification process; they must accept the applications at face value and determine eligibility based on the information voluntarily disclosed in the application. In 2004, USDA issued the results of a pilot study to determine the effects of requiring documentation from households applying for benefits and reported that it had the adverse effect of limiting access for students eligible to receive school-meals benefits. (Mathematica Policy Research, Inc., Evaluation of the National School Lunch Program Application/Verification Pilot Projects, vol. 1: "Impacts on Deterrence, Barriers, and Accuracy" (Princeton, N.J.: February 2004). This report was prepared for USDA.) For example, households may not understand the instructions, or may be hesitant to provide income information, though the household is still eligible to receive the benefits. Further, a study commissioned by the USDA to examine outcomes of the verification process during the fall of 2002 found that approximately half of the households that did not respond to the verification request were eligible for free or reduced-price meals. As described above, standard verification is generally limited to approved applications where stated income is within $1,200 of the annual applicable income-eligibility guideline amount. Applications with stated income outside of these thresholds would generally not be subject to standard verification. However, our review of a nongeneralizable sample of 25 households found 9 applications that were ineligible for benefits, 7 of which would have been excluded from standard verification. Standards for Internal Control in the Federal Government indicate that internal controls should include control activities and risk assessments. These, among other controls, should be effective and efficient in enforcing program requirements.
Independent verification is a key detection and monitoring component of an agency's fraud-prevention framework and is a fraud-control best practice. One method to identify potentially ineligible applicants and effectively enforce program-eligibility requirements is through the independent verification of income information with an external source, such as state payroll data. States or school districts, through data matching, could identify households that have income greater than the eligibility limits for further follow-up. This risk-based approach would allow school districts to focus on potentially ineligible families, while not interrupting program access for other participants. While electronic verification could yield positive results, there are some potential limitations. For example, state income databases may not contain all sources of household income—such as child-support payments or income earned by individuals who do not have a Social Security number. Additionally, it may not be cost-effective or possible for school districts to use external data when conducting verification. Thus, states may be better positioned to complete this matching and to report findings to specific school districts. A study commissioned by USDA to explore the feasibility of computer matching in the NSLP during the 2004-2005 school year cited limitations to having school districts directly verify income information with state agencies. For example, because income data are reported for individuals, not households, school districts would need to obtain Social Security numbers for all income earners in the household in order to verify household income. The study also found that computer-matching results can be inaccurate and that income discrepancies between the state database and the household application would require follow-up with the household that is similar to the existing verification process.
However, technology and data-matching software and techniques have improved significantly in the last decade and could hold promise in efficiently identifying only potentially ineligible households for further follow-up while not removing program beneficiaries whose incomes are within the eligibility guidelines. Electronic verification of a sample of applicants (beyond those that are statutorily defined as error-prone) through computer matching by school districts or state agencies with other sources of information—such as state income databases or public-assistance databases—could help effectively identify potentially ineligible applicants. However, it is not clear whether such a process is cost-effective. Thus, developing a pilot to explore the feasibility of implementing a cost-effective mechanism to conduct electronic verification at the state or school-district level could help inform the extent to which this alternative is feasible. Because standard verification is dictated by statute, if the results of the pilot show promise in identifying ineligible beneficiaries, developing a legislative proposal to expand the verification process to include independent electronic verification for a sample of all school-meals applications could help USDA identify and prevent ineligible beneficiaries in the school-meals program. We found that ineligible households may be receiving free school-meals benefits by submitting applications that falsely state that a household member is categorically eligible for the program due to participating in certain public-assistance programs—such as SNAP or TANF—or meeting an approved designation—such as foster child or homeless. Of the 25 household applications we reviewed, 6 were approved for free school-meals benefits based on categorical eligibility, and 3 of these were potentially ineligible for the benefit; Figure 3 illustrates these results. Specifically, we found the following:
One household applied for benefits during the 2010-2011 school year—providing a public-assistance benefit number—and was approved for free-meal benefits. However, when we verified the information with the state, we learned that the number was for medical-assistance benefits—a program that is not included in categorical eligibility for the school-meals programs. When interviewed, the parent said that he could not remember if the benefits they received were SNAP or medical assistance. On the basis of our review of payroll records, this household's annualized income of at least $59,000 during 2010 would not have qualified the household for free or reduced-price-meal benefits. This household applied for school-meals benefits during the 2011-2012 and 2012-2013 school years—again indicating the same public-assistance benefit number—and was approved for free-meal benefits.

Another household applied for benefits during the 2010-2011 school year—providing a public-assistance benefit number—and was approved for free-meal benefits. When interviewed, the parent said that the household received SNAP benefits. However, when we verified the information with the state, officials told us the household was not receiving public-assistance benefits at the time of application.

In a 2010-2011 school year application, one household indicated that the student was a foster child; however, when we interviewed the applicant, she told us that she has never had foster children. This household was not eligible for free meals, but may have been eligible for reduced-price meals. A school-district official told us that this household was directly certified for free-meal benefits during the 2011-2012 and 2012-2013 school years.

Because applications that indicate categorical eligibility are generally not subject to standard verification, these ineligible households would likely not be identified unless they were selected for for-cause verification or as part of the administrative review process, even though they contained inaccurate information. These cases underscore the potential benefits that could be realized by verifying beneficiaries with categorical eligibility. We will refer these potentially ineligible households to USDA and their school district for appropriate action as warranted. Furthermore, the administrative review report for one district we reviewed noted that categorical-eligibility determinations were not always correct, including determinations for the migrant, homeless, runaway, Head Start, and Even Start programs. USDA's eligibility manual states that school districts should be aware of the characteristics of a valid SNAP or TANF number and are allowed to verify this information with the appropriate agency. However, these numbers can vary in length. An official from one state told us that because the length of SNAP and TANF numbers varies, it is difficult to determine whether a number is valid simply by looking at it. A household could also provide an old case number—which appears valid—and the school district would not know that the household is not receiving public-assistance benefits unless the school district verifies the information with the appropriate state agency. Standards for Internal Control in the Federal Government state that control activities should be effective and efficient in enforcing program requirements and help in detecting errors and fraud. Since applications that indicate categorical eligibility are generally not selected for standard verification, there is limited oversight over these beneficiaries.
Individuals with knowledge of the program-eligibility guidelines could indicate categorical eligibility to avoid scrutiny, as this would prevent the application from ever being verified unless the school-district official certifying the household had specific knowledge that the information was not accurate. Verifying a sample of applications that indicate categorical eligibility could assist in identifying ineligible households that are receiving benefits and help improve program integrity. For example, USDA could have school districts select a sample of applications indicating categorical eligibility and verify the information with the appropriate agency. With the increase in the number of school districts that directly certify SNAP- participant children, school districts may already have mechanisms to match students with SNAP data provided by the state agency. Alternatively, USDA could consider having the state agency perform this verification as part of its periodic administrative review of the school district. OMB’s designation of the school-meals program as a “high-error” program with significant estimated improper payments makes it important that internal controls and oversight for the school-meals programs be strengthened while simultaneously ensuring that students who qualify for benefits are not adversely affected. USDA has taken steps to strengthen controls and to increase access to eligible individuals by working with Congress, school districts, and other public-assistance programs to find new ways to provide benefits to those requiring assistance. However, the cases we identified in which households received school-meals benefits that they were not eligible for highlight the deficiencies with current controls and the need for additional corrective actions. 
Evaluating the data collected on completed for-cause verifications for the 2013-2014 school year could help USDA determine whether specific data on for-cause verification outcomes should be reported separately from standard verification results and whether additional guidance for conducting for-cause verification—including criteria and examples of possible indicators of questionable or ineligible applications—would be beneficial. Moreover, a cost-effective mechanism to electronically verify applicant information with income or other data sources such as public-sector wage records could help enhance the current verification process and strengthen program integrity. While challenges may exist in verifying beneficiary income through computer matching, 9 years have passed since USDA conducted a pilot to determine the feasibility of electronic verification. The cost of the school-meals programs, continued high improper payments, and advances in technology support the need to revisit the feasibility of conducting computer matching in the school-meals programs to enhance current verification efforts. If appropriate, developing a legislative proposal to expand the statutorily defined verification process to include additional independent electronic verification for a sample of all school-meals applications could help USDA identify and prevent ineligible applicants from participating in the school-meals program. In addition, verifying a sample of applications that indicate categorical eligibility could assist in identifying ineligible households that are receiving benefits and help improve program integrity.
To improve integrity and oversight of the school-meals programs, we recommend that the Secretary of Agriculture take the following four actions:

Evaluate the data collected on for-cause verifications for the 2013-2014 school year to determine if for-cause verification outcomes should be reported separately, and if appropriate, develop and disseminate additional guidance for conducting for-cause verification that includes criteria for identifying possible indicators of questionable or ineligible applications.

Develop and assess a pilot program to explore the feasibility of computer matching of school-meal participants with other sources of household income, such as state income databases, to identify potentially ineligible households—those with income exceeding program-eligibility thresholds—for verification.

If the pilot program shows promise in identifying ineligible households, develop a legislative proposal to expand the statutorily defined verification process to include this independent electronic verification for a sample of all school-meals applications.

Explore the feasibility of verifying the eligibility of a sample of applications that indicate categorical eligibility for program benefits and are thus not subject to standard verification.

We provided a draft of this report to USDA for its review and comment. Written comments from the Administrator for FNS are reprinted in appendix VI. In its written comments, FNS indicated that it has long recognized the importance of addressing improper payments and program-integrity problems to meet the mission of its programs and that it will carefully consider our specific recommendations as it moves forward in its efforts to improve integrity in the school-meals programs. The letter also describes several steps FNS is taking to strengthen program integrity, many of which are highlighted in this report.
An e-mail dated May 2, 2014, from the FNS GAO Liaison/Coordinator stated that FNS generally agreed with our recommendations. FNS also provided technical comments, which we incorporated as appropriate. In its technical comments, FNS outlined potential challenges in implementing computer matching of school-meal participants with data on other sources of income, such as challenges in working with state data or with incomplete Social Security numbers, and the potential costs of verifying income data and following up with households. We noted potential challenges in our report and acknowledge them in our recommendation that USDA develop and assess a pilot program to explore the feasibility of this process. We believe that the continued high improper-payments rate and advances in technology support the need to conduct this pilot and, if it shows promise, to develop a legislative proposal to expand its use. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees, the Secretary of Agriculture, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-6722 or lords@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. This report assesses (1) what steps, if any, the U.S. Department of Agriculture (USDA) has taken to help identify and prevent ineligible beneficiaries from receiving benefits in school-meals programs, and (2) what opportunities, if any, exist to strengthen USDA's oversight of the school-meals programs.
We also report case-study examples of households that may have improperly received program benefits. Because of limited salary and income data available for all U.S. households, our case-study examples are limited to civilian executive-branch employees and United States Postal Service (USPS) employees. The Washington, D.C., and Dallas, Texas, metropolitan regions ranked 1st and 18th, respectively, among the 50 metropolitan regions with the largest numbers of executive-branch federal employees during fiscal year 2012. The Washington, D.C., metropolitan region includes Washington, D.C., Maryland, and Virginia. We excluded 3 school districts—1 located in the Dallas, Texas, metropolitan region and 2 located in the Washington, D.C., metropolitan region—because the data were not reliable for our purposes. During the 2010-2011 school year there were 57 school districts in Washington, D.C.; 49 in Maryland; 1,260 in Texas; and 154 in Virginia, for a total of 1,520. This selection is not representative of all states, school districts, or school-meal participants. We assessed controls related to the identification and prevention of ineligible beneficiaries in accordance with internal control standards. To further identify opportunities, if any, that exist to strengthen USDA's oversight of the school-meals programs, we tested controls that are designed to identify and prevent ineligible school-meals beneficiaries. To do this, we selected a nongeneralizable sample of 48 households participating in the National School Lunch Program (NSLP) for further review and investigation. To select the sample, we matched school-meals eligibility data for the 2010-2011 school year from the 25 school districts to civilian executive-branch federal-employee payroll data. The 2010-2011 school year was the last year in which the school-meals applications requested that the adult applicant provide his or her complete Social Security number.
While we do not expect federal employees to be any more or less likely to commit fraud than employees in other sectors, we completed case-study work based on the availability of centralized salary, address, Social Security number, and employment data for federal employees—these data were used to identify participants in NSLP, regardless of income. The results of our work cannot be generalized to all participants because our sample does not include private-sector employees. Additional information comparing federal and private-sector employee wages can be found in appendix V. We began by examining databases containing students deemed eligible for free or reduced-price school meals for the 2010-2011 school year from the 25 school districts. These data generally contained personally identifiable information for the child and an adult household member, as well as household income and size. The data also contained information about whether a household was directly certified into the program or approved through a household application. We also obtained civilian federal-employee payroll data for approximately 2.5 million individuals from five federal-payroll processors. These data contained personally identifiable information for the federal employee, as well as wages by pay period for some or all of calendar year 2010. We used federal-employee payroll data to develop case studies due to the unavailability of other data sources containing salary information for nonfederal employees. To assess the reliability of the school-meals eligibility and payroll data, we reviewed relevant documentation, interviewed knowledgeable agency officials, and examined the data for obvious errors and inconsistencies. We concluded that the school-meals eligibility data and payroll data were sufficiently reliable for purposes of this report.
Next, we narrowed the civilian federal-employee payroll data to those with income during the July 2010 to December 2010 period—to coincide with the start of the school year when most school-meals eligibility determinations are made. We matched the school district and federal payroll data using the Social Security number of the adult household member, an address key composed of the address and zip code, and name fields, to the extent they were available. Our matches included households that, based on income, appeared both eligible and ineligible to participate in the school-meals programs. A household member earning income does not preclude children in the household from being eligible for school-meals benefits. From our matches, we generated randomly sorted lists of free and reduced-price school-meals participants who submitted an application and randomly sorted lists of students who were directly certified for free school meals in each of the 25 school districts. We then randomly selected up to two households in each of the 25 school districts for an in-depth review, for a total of 48 cases. Specifically, for each of the school districts, we reviewed one household that submitted an application that was used for benefit determination (25 cases), as well as one household that was directly certified (23 cases). Two of the 25 school districts did not have any directly certified students who matched with the payroll data. We applied a minimum threshold of $6,000 to the amount of federal salary earned during July 2010 to December 2010 in order to identify active employees for our sample. In the event an adult applicant was deceased or could not be located, we selected the next participant from the randomly sorted list. The specific findings from the selected cases cannot be generalized to other, or all, school-meals beneficiaries or federal-employee households that received school-meals benefits. 
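As a rough illustration of the matching approach described above (not GAO's actual code; record layouts and field names are invented for the example), joining eligibility records to payroll records on Social Security number, an address key, and name might be sketched as:

```python
# Hypothetical sketch of the record matching described above. School-meals
# eligibility records are joined to federal payroll records on SSN or an
# address key (street address + zip), with a last-name check; field names
# are invented for illustration.

def address_key(record):
    # Normalize street address and zip code into a comparable key.
    return (record["address"].strip().lower(), record["zip"])

def match(eligibility_records, payroll_records):
    by_ssn = {p["ssn"]: p for p in payroll_records if p.get("ssn")}
    by_addr = {address_key(p): p for p in payroll_records}
    matches = []
    for e in eligibility_records:
        p = by_ssn.get(e.get("ssn")) or by_addr.get(address_key(e))
        # Require last-name agreement in addition to the SSN/address match.
        if p and p["last_name"].lower() == e["last_name"].lower():
            matches.append((e, p))
    return matches

elig = [{"ssn": "123-45-6789", "last_name": "Doe",
         "address": "1 Main St", "zip": "20001"}]
pay = [{"ssn": "123-45-6789", "last_name": "Doe",
        "address": "1 Main St", "zip": "20001", "jul_dec_wages": 21_000}]

pairs = match(elig, pay)
# Apply the $6,000 July-December wage floor used to identify active employees.
active = [(e, p) for e, p in pairs if p["jul_dec_wages"] >= 6_000]
```

A real implementation would also need name-variant handling and address standardization; the point here is only the layered keys (SSN first, then address plus name) and the post-match wage floor.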
Because our data were limited to federal-employee households in 25 school districts that were selected on a nonrandom basis, the results of our cases cannot be generalized to a larger population of school-meals participants or to the entire federal workforce. Once we identified the sample, we contacted the school districts and the states in our sample to obtain supporting documentation. For the 25 households that submitted a school-meals application, we requested and reviewed the available applications from the 2010-2011 school year to see what the applicant listed as his or her household income and household size. We used the school-meals income-eligibility guidelines to determine whether school districts correctly determined eligibility based upon the information stated on the application. We also reviewed school- meals applications from the 2011-2012 and 2012-2013 school years, if submitted. For the 23 directly certified households, we obtained and reviewed the public-assistance application associated with the household from state agencies in the District of Columbia, Maryland, Texas, and Virginia to see what the applicant listed as the household income and composition. We then reviewed the payroll records of the applicant or other household member to obtain information on their actual minimum income during the period the application was signed and to determine whether the federal employee’s income stated on the school-meals or public-assistance application was accurate. If the applicant’s income, along with the income of other household members listed on the application, exceeded the eligibility guideline based on the number of household members stated on the application, we considered these households to be potentially ineligible for school-meals benefits. To conduct our investigative work, we interviewed individuals from the 48 households in our nongeneralizable sample—23 households that were directly certified and 25 that applied for benefits. 
We interviewed these individuals to determine whether the information entered on the applications was accurate, to confirm their income, and to determine the composition of their households. Investigators also conducted a review of the associated payroll records and school-meals application or public-assistance application to inform the interviews. We conducted this performance audit from February 2012 to May 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We conducted our related investigative work from June 2013 to May 2014 in accordance with standards prescribed by the Council of the Inspectors General on Integrity and Efficiency. Schools and households that enroll students in free or reduced-price school meals may be eligible for additional federal and state benefits. For example, the U.S. Department of Education, through Title I of the Elementary and Secondary Education Act, provided approximately $13.8 billion in funds to schools with high concentrations of low-income families during fiscal year 2013. The distribution of federal Title I funds within schools and school districts can be based, in part, on the number of students eligible for free and reduced-price meals. In addition, separate state funding for schools can also be tied to the percentage of students eligible for free or reduced-price school meals. Further, the Universal Service Program for Schools and Libraries, also known as the E-rate program—created by the Telecommunications Act of 1996—provides schools with discounts on eligible telecommunications services, Internet access, and internal connections (such as network wiring).
Discounts can range from 20 to 90 percent, and the primary measure for determining the discount is the percentage of students eligible under the National School Lunch Program (NSLP) for free or reduced-price meals. Households can also receive additional benefits by participating in the school-meals programs. For example, the Lifeline Program, administered by the Universal Service Administration Company on behalf of the Federal Communications Commission, provides phone service discounts for households that participate in the NSLP or other qualifying assistance programs. We reviewed guidance from one school district that provides other benefits to households qualifying for free or reduced-price meals, including textbook assistance and waivers of college application and athletic fees. According to a January 2012 Congressional Budget Office (CBO) report, in 2010, 1.7 percent of the U.S. workforce was made up of federal civilian employees—approximately 2.3 million, compared with 111 million employed by the private sector and 20 million employed by state and local governments. Another 800,000 were employed by government enterprises that typically pay for employee compensation through the sale of services rather than through tax revenue, the United States Postal Service (USPS) being the largest such employer. Further, the federal workforce includes part-time and intermittent employees, such as census enumerators, whose jobs last from 2 to 8 weeks. According to CBO’s study, employees of the federal government have varying levels of educational attainment. For example, about 50 percent of federal employees have no more than a high-school diploma or some college, compared to about 70 percent of the private-sector workforce. In the federal government, about 50 percent of employees have a bachelor’s degree or higher, while in the private sector about 30 percent of the workforce has comparable education.
The wages of federal and private-sector employees vary with the level of education attained. Specifically, CBO found that federal civilian workers with no more than a high-school education earned about 21 percent more, on average, than similar workers in the private sector, while those with some college earned 15 percent more, on average, than similar workers in the private sector. Employees whose highest level of education is a bachelor’s degree earned roughly the same hourly wage, on average, in the federal government as in the private sector. Federal workers with a doctorate or professional degree earned 23 percent less per hour, on average, than similar workers in the private sector. While this CBO report does provide a point of comparison between civilian federal workers and the private sector, the data analyzed do not mirror the federal-employee population used in this report. For example, the CBO report does not include workers in government enterprises (including USPS) or seasonal or part-time civilian federal employees, while our analysis does. Therefore, while the results of the CBO report are presented for informational purposes, they should not be used to draw conclusions about civilian federal-employee and USPS pay used in our analysis. In addition to the contact named above, Heather Dunahoo (Assistant Director), John W. Cooney, Heather Cowles, Ranya Elias, Erika Lang, Kathryn Larin, Maria McMullen, Dan Meyer, Linda Miller, Sandra Moore, Robert Ridley, and Daniel Silva made key contributions to this report.
In fiscal year 2012, over 31.6 million children participated in USDA's National School Lunch Program (NSLP) at a cost of about $11.6 billion. In fiscal year 2013, USDA estimated NSLP certification errors of more than 8 percent, or $996 million. GAO was asked to review possible beneficiary fraud within the program. This report assesses (1) steps taken to help identify and prevent ineligible beneficiaries from receiving benefits in school-meal programs and (2) what opportunities exist to strengthen USDA's oversight of the school-meals programs. GAO reviewed NSLP policies, interviewed program officials, and randomly selected a nongeneralizable sample that included 25 of 7.7 million approved household applications from 25 of 1,520 school districts in the Dallas, Texas, and Washington, D.C., regions. GAO performed limited eligibility testing using civilian federal-employee payroll data from 2010 through 2013 due to the unavailability of other data sources containing nonfederal employee income. GAO also conducted interviews with households. Ineligible households were referred to the Inspector General. The U.S. Department of Agriculture (USDA) has taken several steps to implement or enhance controls to identify and prevent ineligible beneficiaries from receiving school-meals benefits. For example: USDA worked with Congress to develop legislation to automatically enroll students who receive Supplemental Nutritional Assistance Program benefits for free school meals; this program has a more-detailed certification process than the school-meals program. Starting in the 2013-2014 school year, USDA increased the frequency with which state agencies complete administrative reviews of school districts from every 5 years to every 3 years. As part of this process, state agencies review applications to determine if eligibility determinations were correctly made. 
In 2012, USDA issued guidance to clarify that school districts have the authority to verify approved applications for school-district employees when information indicates that the applicant misrepresented his or her income. GAO identified opportunities to strengthen oversight of the school-meals programs while ensuring legitimate access, such as the following: Exploring the feasibility of computer matching external income data, such as state payroll data, with participant information to identify households whose income exceeds eligibility thresholds for verification could help identify ineligible participants. Currently, school districts verify a sample of approved applications deemed “error-prone”—statutorily defined as those with reported income within $1,200 of the annual income levels specified in program- eligibility guidelines—to determine whether the household is receiving the correct level of benefits (referred to as standard verification in this report). In a nongeneralizable review of 25 approved applications, GAO found that 9 of 19 households that self-reported household income and size information were ineligible and only 2 could have been subject to standard verification. Verifying a sample of categorically eligible applications could help identify ineligible households. Currently, school-meal applicants who indicate categorical eligibility (by participating in certain public-assistance programs or meeting an approved designation, such as foster children) are eligible for free meals and are generally not subject to standard verification. In a nongeneralizable review of 25 approved applications, 6 households indicated categorical eligibility, 2 of which were ineligible, and another may have been eligible for reduced-price meals instead of free school meals. 
[Table: Results of GAO's Analysis of a Nongeneralizable Sample of 25 Approved Household Applications from the 2010-2011 School Year]

Among other things, GAO recommends that the Secretary of Agriculture develop a pilot program to explore the feasibility of using computer matching to identify households with income that exceeds program-eligibility thresholds for verification, and explore the feasibility of verifying a sample of categorically eligible households. USDA generally agreed with the recommendations.
In May 2003, the Office of Force Transformation began funding small experimental satellites to enhance responsiveness to the warfighter and to create a new business model for developing and employing space systems. As we have reported over the past two decades, DOD’s space portfolio has been dominated by larger space system acquisitions, which have taken longer, cost more, and delivered fewer quantities and capabilities than planned. The ORS initiative is a considerable departure from DOD’s large space acquisition approach. The initiative aims to quickly deliver low cost, short-term tactical capabilities to address unmet needs of the warfighter. Unlike traditional large satellite programs, the ORS initiative is intended to address only a small number of unmet tactical needs—one or two—with each delivery of capabilities. It is not designed to replace current satellite capabilities or major space programs in development. The initiative also aims to identify and facilitate ways to reduce the time and cost for all future space development efforts. As we have previously reported, managing requirements so that their development is matched with resources offers an opportunity to mature technologies in the science and technology environment—a best acquisition practice. We also have reported that the ORS initiative could provide opportunities for small companies—which often have a high potential to introduce novel solutions and innovations into space acquisitions—to compete for DOD contracts. Consolidations within the defense industrial base for space programs have made it difficult for such companies to compete. ORS could broaden the defense industrial base and thereby promote competition and innovation. Since we last reported on DOD’s ORS efforts in 2006, the department has taken several steps toward establishing a program management structure for ORS and executing research and development efforts.
Despite this progress, it is too early to determine the overall success of these efforts because most are still in their initial phases. Congress directed that DOD submit a report that sets forth a plan for the quick acquisition of low cost space capabilities and establish a Joint ORS Office to coordinate and manage the ORS initiative. In the first half of 2007, DOD delivered an ORS plan to Congress and established a Joint ORS Office. DOD created the Joint ORS Office to coordinate and manage specific science and technology efforts to fulfill joint military operational requirements for on-demand space support and reconstitution. In addition, DOD is working with other government agencies to staff the office, developing an implementation plan, and establishing a process for determining which existing requirements for short-term tactical capabilities the office should pursue. Responsiveness is an attribute desired by the entire space community, including the National Aeronautics and Space Administration and the military service laboratories. Most of the efforts under the ORS initiative are being executed by science and technology organizations and other DOD agencies. The office will be responsible for coordinating, planning, acquiring, and transitioning those efforts. Its work is to be guided by an executive committee, comprised of senior officials from DOD, the military services, the intelligence community, and other government agencies. Most requirements for needed short-term tactical capabilities are expected to come through the U.S. Strategic Command. To respond to unmet warfighter needs, ORS requirements will be based on existing validated requirements. Table 1 summarizes the status of some of DOD’s efforts related to the management structure. DOD is continuing to make progress in developing TacSats—its small experimental satellite projects. 
In addition, DOD is funding research efforts by industry to facilitate the development of future capabilities and is working with industry and academia to develop standards for building satellite components. Finally, DOD is working to improve the capabilities of existing small launch vehicles and providing some funding for future launch vehicles. The TacSat experiments aim to quickly provide the warfighter with a capability that meets an identified need within available resources—time, funding, and technology. Limiting the TacSats’ scope allows DOD to trade off higher reliability and performance for speed, responsiveness, convenience, and customization. Once each TacSat satellite is launched, DOD plans to test its level of utility to the warfighter in theater. If military utility is established, DOD will assess the acquisition plan required to procure and launch numerous TacSats—forming constellations—to provide wider coverage over a specific theater. As a result, each satellite’s capability does not need to be as complex as that of DOD’s larger satellites and does not carry with it the heightened consequence of failure as if each satellite alone were providing total coverage. DOD currently has five TacSat experiments in different stages of development (see table 2). In addition, DOD is sponsoring the development of new capabilities provided mostly by the small satellite industry. These efforts include the ORS Payload Technology Initiative, which awarded 15 contracts to satellite industry contractors for payload technology concepts that may be developed in the future. The Air Force has been funding additional research conducted by small technology companies that could provide ORS capabilities, such as faster ways of designing satellites, and identifying the types and characteristics of components based on mission requirements. 
DOD is also working to establish standards for the “bus”—the platform that provides power, attitude, temperature control, and other support to the satellite in space. Establishing interface standards for bus development would allow DOD to create a “plug and play” approach to building satellites—similar to the way personal computers are built. According to DOD officials, interface standards would allow the development of modular or common components and would facilitate building satellites—both small and large—more quickly and at a lower cost. DOD’s service laboratories, industry, and academia have made significant progress to develop satellite bus standards. The service labs expect to test some standardized components on the TacSat 3 bus and system standards on the TacSat 4 bus. Table 3 provides a description of the bus standardization efforts and their status. To get new tactical space capabilities to the warfighter sooner, DOD must secure a small, low cost launch vehicle on demand. Current alternatives include Minotaur launch vehicles, ranging in cost from about $21 million to $28 million, and an Evolved Expendable Launch Vehicle—DOD’s primary satellite launch vehicles—at an average cost of roughly $65 million (for medium and intermediate launchers). DOD is looking to small launch vehicles, unlike current systems, that could be launched in days, if not hours, and whose cost would better match the small budgets of experiments. Both DOD and private industry are working to develop small, low cost, on-demand launch vehicles. Notably, DOD expects the Defense Advanced Research Projects Agency’s (DARPA) FALCON launch program to flight-test hypersonic technologies and be capable of launching small satellites such as TacSats. In addition to securing low cost launch vehicles, DOD plans to acquire a more responsive, reliable, and affordable launch tracking system to complement the existing launch infrastructure. 
Table 4 describes DOD’s efforts to develop a launch infrastructure and their status. DOD faces several challenges in succeeding in its ORS efforts. With relatively modest resources, the Joint ORS Office must quickly respond to the warfighter’s urgent needs, including gaps in capabilities, as well as continue its longer-term research and development efforts that are necessary to help reduce the cost and time of future space acquisitions. As the office negotiates these priorities, it will need to coordinate its efforts with a broad array of programs and agencies in the science and technology, acquisition, and operational communities. Historically, it has been difficult to transition programs initiated in the science and technology environment to the acquisition and operational environment. At this time, DOD lacks tools that would help the program office navigate within this environment—primarily, a plan that lays out how the office will direct its investments to meet current operational needs while at the same time pursuing innovative approaches and new technologies. The Joint ORS Office has a budget totaling about $646 million for fiscal years 2008 through 2013 and no more than 20 government staff. These resources are relatively modest when compared with the resources provided to major space programs. For example, the ORS fiscal year 2008 budget represents less than 12 percent of the budget of the Transformational Satellite Communications System program, which is in the concept development phase, and staffing is about a quarter of that program’s staff. While the Joint ORS Office’s responsibilities are not the same as those of large, complex acquisition programs, it is expected to address urgent tactical needs that have not been met by the larger space programs. At this time, for example, the office has been asked to develop a solution to meet current communications shortfalls that cannot be met by the current Ultra High Frequency Follow-On satellite system.
And, while the office has not yet been asked, officials have told us that ORS could potentially satisfy a gap in early missile warning capabilities because of delays in the Space Based Infrared Systems program, as well as gaps in communications and navigation capabilities. Taking on any one of these efforts will be challenging for ORS as there are constraints in available technologies, time, money, and other resources that can be used to fill capability gaps. At the same time, the Joint ORS Office will be pressured to continue to sponsor longer-term research and development efforts. According to the Air Force Research Laboratory, the average cost of a small satellite is about $87 million. This is substantially higher than the target acquisition cost of about $40 million for an integrated ORS satellite in the 2007 National Defense Authorization Act. In addition, concerns are being expressed that not enough funding and support are being devoted to acquiring low cost launch capabilities. Some government and industry officials believe that achieving such capabilities is a linchpin to reducing satellite development costs in the future. The current alternatives for launching ORS satellites—an Evolved Expendable Launch Vehicle and Minotaur launch vehicles—do not meet DOD’s low cost goal. DARPA expects that its responsive launch vehicle, currently in development, will cost about $5 million to produce—a significantly lower cost than that of current capabilities. However, in order to achieve the lower cost launch capability, DOD will have to continue to fund research beyond the $15.6 million already spent on advanced technology development, facilities, test-range and mission support, and program office support. To execute both its short- and long-term efforts, the Joint ORS Office will also need to gain cooperation and consensus from a diverse array of officials and organizations. These include science and technology organizations, the acquisition community, the U.S.
Strategic Command, the intelligence community, and industry. We have previously reported on difficulties DOD has encountered in bringing these organizations together, particularly when it comes to setting requirements for new acquisitions and transitioning technologies from the science and technology community to acquisition programs. As a new and relatively small organization, the Joint ORS Office may well find it does not have the clout to gain cooperation and consensus on what short- and long-term projects should get the highest priority. Despite the significant expectations placed on the Joint ORS Office and the challenges it faces, DOD does not have an investment plan to guide its ORS decisions. DOD has begun to develop an ORS strategy that is to identify the investments needed to achieve future capabilities. However, the strategy is not intended to become a formalized investment plan that would (1) help DOD identify how to achieve these capabilities, (2) prioritize funding, and (3) identify and implement mechanisms to enforce the plan. At the same time, there are other science and technology projects in DOD’s overall space portfolio competing for the same resources, including those focused on discovering and developing technologies and materials that could enhance U.S. superiority in space. Further, as DOD’s major space acquisition programs continue to experience cost growth and schedule delays, DOD could be pressured to divert funds from ORS. We have previously recommended that DOD prioritize investments for both its acquisitions and science and technology projects—the ORS plan could be seamlessly woven into an overall DOD investment plan for space. However, DOD has yet to develop this overall investment plan. Providing the warfighter with needed space capabilities in a fiscally constrained and rapidly changing technological environment is a daunting task. 
ORS provides DOD with a unique opportunity to work outside the typical acquisition channels to more quickly and less expensively deliver these capabilities. However, even at lower costs, pressure on ORS funding will come in DOD’s competition for its resources. As DOD moves forward, decisions on using constrained resources to meet competing demand will need to be made and reevaluated on a continuing basis. Until DOD develops an investment plan, it will risk forgoing an opportunity to get continuing success out of the ORS initiative. To better ensure that DOD meets the ORS initiative’s goal, we recommend that the Secretary of the Air Force develop an investment plan to guide the Joint ORS Office as it works to meet urgent needs and develops a technological foundation to meet future needs. The plan should be approved by the stakeholders and identify how to achieve future capabilities, establish funding priorities, and identify and implement mechanisms to ensure progress is being achieved. We provided a draft of this report to DOD for review and comment. DOD concurred with our findings and our recommendation but clarified that the Secretary of the Air Force, specifically the Executive Agent for Space, would be responsible for developing an investment plan since the Under Secretary of the Air Force position is vacant. Full comments can be found in appendix I. To assess DOD’s progress to date in implementing its ORS goal and addressing associated challenges, we interviewed and reviewed documents from officials in Washington, D.C., at the Office of the Deputy Under Secretary of Defense for Advanced Systems and Concepts; National Security Space Office; Office of the Director of Defense Research and Engineering; Office of the Director of Program Analysis and Evaluation; Office of the Joint Chiefs of Staff; the U.S. Naval Research Laboratory; and the Office of the Assistant Secretary of the Navy for Research, Development and Acquisition. 
We also interviewed and reviewed documents from officials in Virginia at the Office of the Assistant Secretary of Defense for Networks Information and Integration; Office of the Under Secretary of the Air Force; Defense Advanced Research Project Agency; and U.S. Army Space and Missile Defense Command. In addition, we interviewed and reviewed documents from officials at the Navy Blossom Point Satellite Tracking Facility in Maryland; Air Force Space Command, Peterson Air Force Base, Colorado; Space and Missile Systems Center, Los Angeles Air Force Base, California; the U.S. Strategic Command, Offutt Air Force Base, Nebraska; and the Air Force Research Laboratory and Joint Operationally Responsive Space Office, Kirtland Air Force Base, New Mexico. We also interviewed officials from the National Aeronautics and Space Administration, Washington, D.C., and industry representatives involved in developing small satellites and commercial launch vehicles. We reviewed and analyzed the documents that we received. We will send copies of the letter to the Department of Defense and other interested congressional committees. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your staff have any questions on matters discussed in this report, please contact me at (202) 512-4859 or chaplainc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Principal contributors to this report were Art Gallegos, Assistant Director; Maria Durant; Jean Harker; Arturo Holguin; and Karen Sloan.
The Department of Defense (DOD) invests heavily in space assets to provide the warfighter with intelligence, navigation, and other information critical to conducting military operations. In fiscal year 2008 alone, DOD expects to spend over $22 billion on space systems. Despite this investment, senior military commanders have reported shortfalls in tactical space capabilities in each recent major conflict over the past decade. To provide short-term tactical capabilities as well as identify and implement long-term solutions to developing low cost satellites, DOD initiated operationally responsive space (ORS). Following a 2006 GAO review of ORS, the Congress directed DOD to submit a report that sets forth a plan for providing quick acquisition of low cost space capabilities. This report focuses on the status of DOD's progress in responding to the Congress and is based on GAO's review and analyses of ORS documentation and interviews with DOD and industry officials. Since GAO last reported on DOD's ORS efforts in 2006, the department has taken several steps toward establishing a program management structure for ORS and executing research and development efforts. On the programmatic side, DOD provided Congress with a plan that lays out an organizational structure and defines the responsibilities of the newly created Joint ORS Office, and describes an approach for satisfying warfighters' needs. DOD has also begun staffing the office. On the research and development side, DOD has launched one of its TacSat satellites--small experimental satellites intended to quickly provide a capability that meets an identified need within available resources--and has begun developing several others. It has also made progress in developing interface standards for satellite buses--the platform that provides power, attitude, temperature control, and other support to the satellite in space--and continued its sponsorship of efforts aimed at acquiring low cost launch vehicles.
Despite this progress, it is too early to determine the overall success of these efforts because most are still in their initial phases. Achieving success in ORS will be challenging. With relatively modest resources, the Joint ORS Office must quickly respond to the warfighter's urgent needs, while continuing research and development efforts that are necessary to help reduce the cost and time of future space acquisitions. As it negotiates these priorities, the office will need to coordinate its efforts with a broad array of programs and agencies in the science and technology, acquisition, and operational communities. Historically, it has been difficult to transition programs from the science and technology environment to the acquisition and operational environment. At this time, DOD lacks a plan that lays out how it will direct its investments to meet current operational needs while pursuing innovative approaches and new technologies.
The federal judiciary consists of the Supreme Court, 12 regional circuit courts of appeals, 94 district courts, 91 bankruptcy courts, as well as courts of special jurisdiction including the Court of Appeals for the Federal Circuit, the Court of International Trade, and the Court of Federal Claims. In each district, defender services programs and probation and pretrial services offices assist the judiciary in the fair administration of justice and protecting the community. Governance of the judiciary is substantially decentralized, and individual courts have discretion to organize operations, develop procedures, and make budgetary decisions within allotted funds to suit local needs. The Judicial Conference of the United States, presided over by the Chief Justice of the United States, is the policy-making body for the federal judiciary and sets national policies and takes positions on legislation on all aspects of federal judicial administration. Membership of the Judicial Conference comprises the chief judge of each judicial circuit, the Chief Judge of the Court of International Trade, and a district judge from each regional judicial circuit. The Judicial Conference operates through a network of committees created to address and advise on a wide variety of subjects such as information technology, personnel, probation and pretrial services, space and facilities, security, judicial salaries and benefits, budget, defender services, court administration, and rules of practice and procedure. AOUSC provides a range of administrative and other support services to the Judicial Conference, the courts, and federal defender organizations. In addition to AOUSC supporting the judiciary, the Federal Judicial Center (FJC) is responsible for conducting research on federal judicial operations and procedures and conducting and promoting training for federal judges, court employees, and others. See figure 1 for an overview of the judicial entities discussed in this report. 
The federal judiciary works with executive branch agencies to administer justice in federal court cases. For example, within the Department of Justice (DOJ), United States Attorneys serve as the nation’s principal litigators in the prosecution of criminal cases brought by the federal government and the prosecution and defense of civil cases in which the United States is a party, among other duties. Also, the United States Marshals Service, a component of DOJ, has primary physical security responsibility for federal courthouses. Among other things, the Marshals Service’s responsibilities include managing court security officers and security systems and equipment, including X-ray machines, surveillance cameras, duress alarms, and judicial chambers’ entry control devices. In addition, as the federal government’s landlord, the General Services Administration (GSA) is responsible for, among other things, designing, building, and maintaining its portfolio of approximately 9,000 federally owned or leased buildings and courthouses. According to AOUSC, as of June 30, 2015, the judiciary rented 437 courthouse buildings through GSA and rented space (including courthouse buildings and space such as probation services offices and chambers not located in courthouses) in a total of 740 GSA buildings. In fiscal year 2014, the judiciary’s rent payments to GSA totaled over $1 billion. The operations of the federal judiciary are funded by a combination of annual appropriations and mandatory spending, including offsetting collections. For fiscal year 2014, the judiciary’s enacted appropriations totaled about $7.03 billion, with offsetting collections of about $234 million, resulting in approximately $7.3 billion in new budgetary resources. The judiciary uses accounts to obligate, account for, and manage its enacted appropriations each fiscal year and budget object classifications as a framework for categorizing obligations. 
The judiciary obligated about $7.1 billion in fiscal year 2014. The judiciary’s operations are primarily funded through 12 appropriation accounts, including the Salaries and Expenses account for the Courts of Appeals, District Courts, and Other Judicial Services; the Defender Services account; and the Court Security account, among others. As shown in figure 2, almost 94 percent of the fiscal year 2014 obligations of $7.1 billion made by the judiciary were from the Salaries and Expenses, Defender Services, and Court Security accounts. The Salaries and Expenses account includes the costs associated with the salaries, benefits, and other operating expenses of the judges and supporting personnel for the U.S. courts of appeals, district courts, and probation and pretrial services offices. The Defender Services account supports the appointment of counsel and other services necessary to represent defendants financially unable to retain counsel in federal criminal proceedings and to provide for the continuing education and training for those who represent these defendants. The Court Security account funds the necessary expenses incident to the provision of protective guard services and the procurement, installation, and maintenance of security systems and equipment that protect U.S. courthouses and other facilities housing federal court operations, not otherwise provided for by other accounts. In addition, the judiciary uses budget object classifications, which are categories used in budget preparation to classify obligations by the items or services purchased by the federal government (e.g., personnel compensation, contractual services). As shown in figure 3, about 56 percent of the $7.1 billion in obligations made by the judiciary in fiscal year 2014 were from the personnel compensation and benefits object classification. 
In addition, the judiciary made about 35 percent of its fiscal year 2014 obligations from two contractual services subobject classifications—rental payments to GSA and others and other services. A brief description of each budget object classification follows the figure. Once appropriations are enacted, the judiciary develops annual financial (or spending) plans to balance requirements with available funds and allots funds to the courts and federal defender organizations for salaries, operations, and information technology, among other things. Under the judiciary’s budget decentralization policies, the courts and federal defender organizations can spend their allotted funds as needed— whether for staff, technology, or other needs. According to judiciary documents, if available funding for a fiscal year does not meet court needs, court managers have local authority to decide how to staff and support their offices within the allotted funds. For example, court managers may decide to take personnel actions (such as not filling vacancies, freezing promotions, instituting furloughs, and offering early retirement incentives and buyouts, among other actions); seek to identify and adopt efficiencies in work processes (such as sharing administrative staff); or shift funds among allotments for salary, operations, and information technology, among other actions. The absence of legislation to reduce the federal budget deficit by at least $1.2 trillion triggered the sequestration process in section 251A of the Balanced Budget and Emergency Deficit Control Act of 1985, as amended, and the President ordered the sequestration of budgetary resources on March 1, 2013. Following this order, OMB calculated sequestration based on the annualized funding level set by the continuing resolution that was currently in effect—or 5 percent for nondefense, nonexempt discretionary appropriations and 5.1 percent for nondefense, nonexempt direct, or mandatory, spending. 
Because these cuts were to be achieved during the 7 remaining months of the fiscal year, OMB estimated that the effective percentage reduction to fiscal year 2013 spending over that time period was approximately 9 percent for nondefense programs. The judiciary’s discretionary appropriations include Salaries and Expenses for the Courts of Appeals, District Courts, and Other Judicial Services (excluding judges’ salaries); Defender Services; Fees of Jurors and Commissioners; and Court Security, among others. Judiciary nonexempt mandatory spending includes judiciary filing fees and registry administration funds. Exempt from sequestration are mandatory spending for Article III judges’ salaries and benefits and judicial retirement funds, and certain other expenses. As shown in figure 4, sequestration reduced fiscal year 2013 funding for the Salaries and Expenses account by $239 million, Defender Services by almost $52 million, Court Security by $25 million, and Fees of Jurors and Commissioners by approximately $3 million, among other reductions. In October 2013, the federal government partially shut down for 16 days because of a lapse in appropriations for fiscal year 2014. At the start of the fiscal year, agencies without available funds were required to cease all operations (with a few exceptions, such as the protection of human life and property) and commence an orderly shutdown. The judiciary was able to continue operating during the fiscal year 2014 lapse in appropriations using available funds from fee collections and no-year appropriations. To help preserve its ability to fulfill its responsibility to render justice in a fair and timely manner and serve the public, the judiciary adopted a cost containment strategy in fiscal year 2005 and has pursued a range of cost containment initiatives for over 10 years. 
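The "approximately 9 percent" effective reduction follows from simple annualization: a cut sized against a full year of funding had to be absorbed in the 7 months remaining after the March 1 sequestration order. A back-of-envelope sketch of that arithmetic (our illustration, not OMB's exact calculation, which reflects additional particulars):

```python
# Back-of-envelope annualization of the fiscal year 2013 sequestration cut.
annual_rate = 0.05        # 5 percent cut against the full-year funding level
months_remaining = 7      # months left in the fiscal year after March 1, 2013

# The full-year dollar cut comes out of only 7/12 of the year's spending,
# so the effective rate over the remaining months is scaled up by 12/7.
effective_rate = annual_rate * 12 / months_remaining
print(f"Effective reduction over remaining months: ~{effective_rate:.1%}")
```

This works out to roughly 8.6 percent, in the neighborhood of the approximately 9 percent OMB estimated for nondefense programs.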
However, we found that the judiciary does not fully know how much it has saved as a result of these efforts because it has not developed a reliable method for estimating the cost savings achieved by each major cost containment initiative. For example, AOUSC officials stated that the judiciary has realized cost savings of nearly $1.5 billion relative to projected costs and attributed these savings primarily to the cost containment policies implemented since the adoption of its cost containment strategy, as well as to other factors. However, our analysis of available documentation and discussions with judiciary officials show that the reliability of the savings estimate is limited because the estimate does not include all savings realized, includes savings not attributable to cost containment initiatives, does not always include the costs associated with implementing initiatives, and was not always supported by adequate documentation. The judiciary has implemented numerous cost containment initiatives since developing a cost containment strategy in September 2004, and some cost containment initiatives were under way before the strategy was developed. In 2012, the judiciary reported that in fiscal years 2004 and 2005, it faced a budgetary challenge of unprecedented magnitude caused by lower than anticipated appropriations from Congress (in part because of across-the-board rescissions at the end of the appropriations process), a sudden and unexpected decline in filing fee collections, and significant growth in certain portions of the judiciary’s budget (especially rent to GSA). According to the report, these factors combined to result in the loss of 1,350 onboard court staff, or approximately 6 percent of the workforce, in fiscal year 2004. 
In anticipation of future constrained budgets and to help mitigate potential further staff loss, in 2004 the Judicial Conference approved a Cost Containment Strategy for the Federal Judiciary: 2005 and Beyond that analyzed the judiciary’s major cost drivers and identified cost containment initiatives in six categories to slow the growth of costs. Appendix III contains examples of several cost containment initiatives the judiciary has implemented, by category. According to AOUSC officials, the judiciary’s recent cost containment initiatives have focused on curtailing costs in the three major spending categories of space and facilities, judiciary personnel costs, and information technology. Table 1 shows examples of initiatives under way for these three categories and the year the judiciary began implementing each initiative. As shown in appendix III, the judiciary also has undertaken cost containment initiatives in the other five categories of law enforcement–related expenses, law book expenditures, defender services, court security, and fee adjustments. For example, the judiciary’s fiscal year 2016 congressional budget justification states that the judiciary has reduced costs by encouraging office consolidation in individual districts to save money and create efficiencies. Specifically, the judiciary encouraged individual court units within each district (i.e., district court, bankruptcy court, and probation and pretrial services) to work together to adopt shared administrative services plans. In fiscal year 2013, all 94 districts prepared plans that showed that many districts had either already begun to share administrative services or had committed to doing so. Furthermore, court and defender organization officials we interviewed identified efforts to achieve cost savings and efficiencies over the past 10 fiscal years. 
For example, court officials we interviewed in 9 of 12 circuit courts said that they have generally not hired new staff for positions vacated as a result of retirements and attrition. For instance, one circuit court judge noted that when an employee resigns or retires from the court staff, managers will restructure the staff so that duties are reassigned to other staff. The judge estimated that staff levels in this court declined 11 percent from 2011 through 2014, resulting in a cumulative reduction in payroll expense of more than $4 million from fiscal years 2011 through 2014. Similarly, officials we interviewed in 3 of 4 district courts and 1 of 4 defender organizations said that they have generally not hired new staff when positions were vacated. According to the chair of the Budget Committee of the Judicial Conference, increases to the judiciary’s appropriations since the 2013 sequestration have allowed some courts to hire employees to fill some vacant positions in recent years. According to AOUSC officials, since the adoption of its cost containment strategy in September 2004, the judiciary has realized cost savings of nearly $1.5 billion relative to projected costs. AOUSC officials attributed these savings primarily to the cost containment policies implemented, as well as to other factors. Estimating cost savings is consistent with our conclusions from prior work on duplication, fragmentation, and overlap that identifying and achieving cost savings should be a goal of all agencies. However, according to our analysis of available documentation and discussions with judiciary officials, the $1.5 billion cost savings estimate has limited reliability because the estimate does not include all savings realized, includes savings not attributable to cost containment initiatives, does not always include the costs associated with implementing initiatives, and was not always supported by adequate documentation. 
Figure 5 shows the nearly $1.5 billion cost savings estimate by major cost containment category, and details of our analysis of the estimate by category follow. Space and facilities cost savings estimate—The savings estimate for space and facilities initiatives has limited reliability for four reasons. First, AOUSC officials stated that the $538 million in space and facilities savings is the difference between a rent cost projection for fiscal year 2015 alone and the actual rent paid in fiscal year 2015 alone, which we confirmed. We found that the estimate does not include estimated rent cost savings for fiscal years 2006 through 2014. Second, the judiciary used a 3.1 percent annual rent inflation factor to help project its rent costs for fiscal years 2006 through 2015. However, the actual annual rent inflation ranged from 0.6 percent to 3.1 percent over this time frame, resulting in lower actual rent paid. As a result, according to AOUSC officials, $291 million of the $538 million savings estimate is the result of lower than anticipated rent inflation rather than savings from the judiciary’s cost containment efforts. Third, AOUSC officials stated that $247 million of the $538 million in estimated cost savings is the result of multiple initiatives undertaken by the judiciary to limit the growth in rent costs, but officials could not provide documentation to support this cost savings figure. Fourth, the space and facilities savings estimate did not always include the costs incurred by the judiciary to implement the cost containment initiatives, such as the upfront costs (e.g., for planning and design and construction or renovation) incurred for space reduction and Integrated Workplace Initiative projects. Salary and staff reduction cost savings estimates—According to our analysis of information provided by AOUSC, we determined that the methodology and data used to calculate the $785 million estimated savings resulting from salary and staff reductions are reliable. 
Specifically, AOUSC officials said that the salary reductions compare the cost of onboard payroll at a particular point in time with the previous year’s salary base to determine the savings in this category. With regard to the staffing reductions, AOUSC officials stated that they used the reduction in full-time equivalent staff and multiplied this reduction by the national average salary and benefits rate of judiciary staff to determine the savings resulting from staffing reductions. We assessed the reliability of the judiciary’s staffing and salary data and determined the data to be sufficiently reliable for the purpose of developing estimates of cost savings achieved from salary and staff reductions. Information technology cost savings estimate—The $89 million estimated savings resulting from information technology (IT) initiatives has limitations because AOUSC officials did not include all potential cost savings achieved or all costs to implement the initiatives. First, the judiciary provided documents that show approximately an additional $126.3 million in savings. Specifically, the judiciary did not include in its estimate the cost savings resulting from implementing technology-based solutions to manage and administer the jury function—i.e., select jurors, send pre-jury-selection paperwork to jurors, pay jurors for their service—($79 million); notify creditors, debtors, and other entities of bankruptcy proceedings ($43.9 million); and provide remote language interpretation for court proceedings ($3.4 million). Second, the IT savings estimate did not always include the costs incurred by the judiciary to implement the initiatives, so the amount of net cost savings resulting from these initiatives is unclear. 
For example, AOUSC officials were able to provide the costs incurred to implement the electronic jury management and bankruptcy notification systems, but did not provide information on the costs incurred to implement the other IT initiatives included in these estimates above, such as the costs incurred to consolidate and reduce the number of servers for several of its IT systems and the costs of contract telephone interpreters. Operating expense cost savings estimates—The $50 million estimated savings from operating expense reductions has limitations similar to those noted above for the IT cost savings estimate. Specifically, AOUSC officials provided documents that show an additional $42.7 million in savings resulting from law book reductions. AOUSC officials told us that $50 million in operating expense cost savings includes $25 million in savings resulting from lower than expected court operating expenses, $3 million in savings associated with lower than expected records management expenses, and $22 million in savings associated with lower than expected law book expenses. However, AOUSC officials provided documents that indicate that the law book reductions resulted in savings of $64.7 million (not adjusted for inflation), or $42.7 million more than the $22 million estimated by AOUSC officials. In addition, the operating expense savings estimate did not include the costs incurred by the judiciary to implement the initiatives, such as the costs of transitioning to contracts for electronic legal research resources, so the net cost savings the judiciary has achieved as a result of these efforts is unclear at this time. Estimating reliable cost savings is consistent with standards in Standards for Internal Control in the Federal Government. 
For example, Standards for Internal Control in the Federal Government states that program managers need complete and accurate operational and financial data to determine whether they are meeting their agencies’ strategic and annual performance plans and meeting their goals for accountability and for effective and efficient use of resources. In addition, internal control standards state that transactions and significant events should be clearly documented and the documentation should be readily available for examination. Further, cost-estimating guidance states that agencies should determine whether an activity’s benefits (savings) also take into account the costs incurred to implement the activity. In addition, best practices suggest that federal agencies should routinely identify cost savings and efficiencies, as we have previously concluded. The judiciary is not required by law to abide by Standards for Internal Control in the Federal Government or cost-estimating guidance, but these tenets are consistent with the management practices of leading organizations. As described above, on the basis of information provided by AOUSC officials, we determined that the methodology AOUSC officials used to estimate savings from staffing and salary reductions—or approximately $785 million of the nearly $1.5 billion total cost savings estimate—was reliable. However, as AOUSC officials acknowledged, the methodology for estimating the remaining approximately $677 million of the savings estimate has limitations. For example, the officials acknowledged that $291 million of the $538 million in space- and facilities-related savings resulted from lower than anticipated rent inflation and was not the result of judiciary actions. Also, AOUSC officials agreed that the amount of estimated rent savings for fiscal years 2005 to 2015 should include the amount saved for each fiscal year over the last 10 years and not only the savings for fiscal year 2015. 
In addition, AOUSC officials said that excluding the additional savings found in the information technology and operating expense categories was an oversight, and AOUSC is in the process of reconsidering how to portray its long-term savings estimates. According to the officials, these particular additional savings amounts will be included in the future. Furthermore, AOUSC officials acknowledged that they did not include the costs incurred to implement several of these initiatives, so the cost savings estimates do not always reflect net cost savings. According to AOUSC officials, the costs incurred to implement an initiative were not included in the nearly $1.5 billion savings estimate because the estimated savings are the result of national policies and initiatives that frequently have an element of local spending or operating expense, and AOUSC officials have not attempted to gather and calculate the implementation costs and link them to the specific savings estimates. Regarding the lack of documentation for $247 million in estimated space- and facilities-related savings, AOUSC officials stated that the numerous cost containment initiatives and policies implemented in this category since fiscal year 2005 have resulted in reduced space requirements and rent costs over time as the initiatives and policies have been implemented. However, they stated that the cost savings resulting from each initiative and policy cannot be measured directly. They stated that these initiatives and policies include the following, among others:

Establishing the circuit rent budget process and rent budget caps intended to ensure consideration of all alternatives to increases in space requirements and cap rent growth, among other things.

Closing nonresident court facilities in multiple locations nationally.

Establishing the Rent Validation Initiative, which involved detailed reviews of GSA rent billings to ensure that they are based on agreed-upon rental rates for the space that the judiciary occupies, among other things.

Establishing a goal of reducing the amount of total square footage leased from GSA by 3 percent by fiscal year 2018 (from the baseline footprint of fiscal year 2013). AOUSC officials stated that as projects mature and leased space is returned to GSA and others, they expect the judiciary to meet this goal.

According to AOUSC officials, as of October 2014, approximately 242,403 square feet had been eliminated from the judiciary’s rent bill, resulting in savings of almost $6 million annually. Officials noted that this reduction reflects actual space released back to GSA; however, it is not a net reduction to the rent bill because there have been some space increases to the judiciary’s inventory from new construction and alteration projects completed and occupied during the course of each year. AOUSC officials stated that it would be challenging, if not impossible, to precisely measure all cost savings attributable to each individual cost containment initiative for three reasons. First, AOUSC officials stated that AOUSC does not maintain a single, historical list of initiatives, although initiatives and some cost savings estimates are documented in a collection of documents such as the Cost Containment Strategy for the Federal Judiciary and congressional budget justifications. Second, AOUSC officials said that retroactively reporting on cost containment savings would be resource-intensive and would not add meaningful business value to its planning process. 
Third, AOUSC officials stated that under the judiciary’s decentralized funding structure, court units may receive reduced funding allotments because of a cost containment initiative or action, but courts have local flexibility to determine how to staff and support their offices within the allotted funds. AOUSC officials stated that under the decentralized model, courts are able to develop creative, local solutions that meet the demands of the court, but doing so makes it more challenging to determine the actual savings that are attributable to any individual initiative. Furthermore, according to AOUSC officials, the judiciary considers a cost containment initiative to be successful if the initiative slowed projected cost growth or reduced a resource requirement, and officials noted that the anticipated cost savings from individual cost containment initiatives are incorporated in the judiciary’s annual budget request estimates. Additionally, AOUSC officials stated that the collective effect of the cost containment initiatives undertaken by the judiciary may be seen in the judiciary’s annual budget request at the appropriation account level (e.g., Salaries and Expenses, Defender Services, Court Security). Overall, AOUSC officials said that the judiciary’s budget request increases have historically ranged from 7 to 9 percent, but in recent years its budget request increases have ranged from 3 to 5 percent. We reviewed the judiciary’s annual congressional budget justifications for fiscal years 2010 through 2016 and confirmed that the congressional budget justifications did not consistently report information on cost containment initiatives or the estimated cost savings realized from the initiatives. 
For example, the congressional budget justifications included descriptive information about several cost containment initiatives implemented by the judiciary in recent years, but the estimated cost savings realized—cumulatively or from year to year—as a result of the initiatives were not always included. As a result, we could not use the congressional budget justifications to determine the cost savings the judiciary has realized from its cost containment initiatives. With regard to the decrease in the growth rate of the judiciary’s budget requests, many factors other than cost containment could influence a reduction in an agency’s or the judiciary’s budget request from year to year, which makes it difficult to demonstrate that a slower rate of growth in the judiciary’s budget requests is the result of its cost containment initiatives. For example, the rate of inflation and other economic fluctuations, changes in the federal budgetary outlook, changes in workload, and changes in the political environment could all affect how much funding the judiciary needs or requests in a given fiscal year. We acknowledge that calculating cost savings estimates for every cost containment initiative could be resource-intensive and that calculating actual cost savings may be challenging. For example, retroactively reporting on cost savings for each individual cost containment initiative could be resource-intensive. Also, cost factors may change and data may be initially incomplete because savings may take several years to be fully realized. However, particularly in a time of constrained resources, developing a reliable method for estimating accurate and complete cost savings for major cost containment initiatives going forward and regularly reporting estimated cost savings by major cost containment initiative could help the judiciary better assess the effectiveness of its cost containment strategy and help inform decision making related to ongoing and new cost containment initiatives. 
Additionally, developing a reliable method for estimating cost savings by initiative and regularly reporting estimated cost savings could help improve the reliability of cost savings information the judiciary provides to Congress. For example, developing a reliable method for estimating accurate and complete cost savings for major cost containment initiatives could help address the limitations, noted earlier, of the cost savings estimates that constitute the cumulative cost savings estimate that the judiciary reports to Congress (such as the estimated cumulative cost savings from cost containment efforts implemented since fiscal year 2005). In addition, in the fiscal year 2015 appropriations act, Congress appropriated $10 million to remain available until September 30, 2016, to the judiciary for Integrated Workplace Initiative (IWI) costs (such as space construction projects and the purchase of furniture). Congress stipulated that these funds would not be available for obligation until the AOUSC Director submits a report to the House and Senate Committees on Appropriations showing that the estimated cost savings resulting from the IWI will exceed the estimated costs of the initiative. In March 2015, judiciary officials transmitted reports to the House and Senate Committees on Appropriations regarding the status of space reduction and IWI projects but reported it was too early to be able to provide specific details regarding rent cost savings from these projects until after the concept design phases for the projects are completed. Developing a reliable method for estimating cost savings achieved for major cost containment initiatives—which takes into account the costs to implement and all cost savings achieved—could help inform judiciary efforts to report space reduction- and IWI-related cost savings information to Congress. 
Furthermore, regularly reporting such cost savings for major cost containment initiatives could provide Congress with more accurate and complete information for oversight and decision making. Several cost-efficient options exist for developing a method to accurately estimate and regularly report cost savings for major cost containment initiatives. For example, one approach might be to estimate cost savings using a risk-based methodology to determine and track cost savings for those cost containment initiatives related to the judiciary’s highest-cost areas or those from which the judiciary anticipates the largest savings (or by major spending or major cost containment category). Another approach could be developing a method for estimating cost savings as part of existing processes and data collection and analysis activities, such as the judiciary’s budget formulation and execution process. Regularly reporting estimated cost savings achieved for major cost containment initiatives through an existing mechanism, such as congressional budget justifications or other documents, could be another option, and reporting could be done on a periodic, but not necessarily annual, basis. In addition, adding features to the judiciary’s new financial management system to help facilitate the collection and analysis of cost and cost savings information from courts and defender organizations related to space and facilities initiatives and other initiatives is another option, if cost effective. Additionally, AOUSC already uses a process to estimate costs and cost savings to meet congressional reporting requirements; tailoring such a process to estimate cost savings for other major cost containment initiatives could be another option. 
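To make the options above concrete, a per-initiative ledger that nets implementation costs against gross savings would address the main limitations identified earlier, and salary savings could be computed with the FTE-times-average-rate method the report found reliable. The sketch below is purely illustrative; the initiative names and dollar figures are hypothetical:

```python
# Hypothetical per-initiative ledger illustrating net cost savings
# (gross savings minus implementation costs). All figures are invented
# for illustration and do not come from the judiciary's data.

def salary_savings(fte_reduction, avg_salary_and_benefits):
    """Salary savings via the FTE-reduction method: headcount reduction
    multiplied by the national average salary and benefits rate."""
    return fte_reduction * avg_salary_and_benefits

# Gross savings and implementation costs per initiative (dollars, hypothetical).
ledger = {
    "shared administrative services": {"gross": 2_500_000, "impl_cost": 400_000},
    "space reduction project":        {"gross": 1_200_000, "impl_cost": 900_000},
    "staffing reduction":             {"gross": salary_savings(25, 100_000),
                                       "impl_cost": 0},
}

for name, entry in ledger.items():
    net = entry["gross"] - entry["impl_cost"]
    print(f"{name}: net savings ${net:,}")

total_net = sum(e["gross"] - e["impl_cost"] for e in ledger.values())
print(f"Total net savings: ${total_net:,}")
```

Tracking both columns per initiative is what lets a reported total reflect net, rather than gross, savings.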
The judiciary uses various mechanisms to identify opportunities for cost savings and increased efficiencies, including (1) strategic policy documents, (2) the annual budget formulation and execution process, (3) Judicial Conference and conference committee meetings, and (4) information sharing across federal courts. Strategic policy documents—In the past 10 fiscal years, the judiciary has developed various strategic policy documents that assist the judiciary with its efforts to contain costs—including identifying opportunities for cost savings and efficiencies, among other things—as described in table 2. As shown in table 2, the judiciary developed the Cost Containment Strategy for the Federal Judiciary: 2007 Update Report (2007 update) to provide a progress update on the Cost Containment Strategy for the Federal Judiciary: 2005 and Beyond (2005 cost containment strategy), including analyzing and documenting changes that occurred in the judiciary’s long-range budget forecasts and the status of implementing cost containment initiatives in each cost containment category, among other things. The judiciary has established time frames for regularly updating its Strategic Plan for the Federal Judiciary and the Long Range Information Technology Plan, but has not updated its cost containment strategy since 2007. In July 2012, the judiciary issued a six-page Cost Containment Update: A Report from the Budget Committee, which provided an overview of the judiciary’s long-range budget forecasts and summarized some new cost containment initiatives (table 2). AOUSC officials told us that the judiciary does not plan to issue another update report on the 2005 cost containment strategy, primarily because the judiciary’s culture has changed in the past 10 years and the judiciary relies on other mechanisms, described below, to identify opportunities for cost savings and efficiencies. 
Annual budget formulation and execution process—According to Judicial Conference and AOUSC officials, the judiciary’s annual process of preparing its budget and allocating funding, or its budget formulation and execution process, is the primary mechanism it uses to identify opportunities for judiciary-wide cost savings and efficiencies. For example, AOUSC officials told us that the judiciary’s initiative to reduce all judiciary-occupied space by 3 percent by the end of fiscal year 2018 was identified through the budget formulation and execution process. The Judicial Conference, operating through a network of program committees, oversees the development and execution of the judiciary’s budget, as shown in figure 6. Accordingly, the Judicial Conference Budget Committee is responsible for proposing appropriate funding levels, based, in part, on annual long-range budget forecasts (i.e., how budget requirements and potential funding levels may change during the next 5 to 10 years), and input from program committees. The Economy Subcommittee of the Budget Committee also plays a key role in working with program committee chairs to identify, recommend, and promote budget-balancing strategies or cost containment initiatives. During the budget execution process, the Judicial Conference Executive Committee is to approve annual financial (spending) plans for 4 of the 12 judiciary appropriations accounts. According to judiciary officials, the annual financial plans reflect the policies of the Judicial Conference, including approved cost containment initiatives, among other things. Our analysis of the judiciary’s financial plans from fiscal years 2010 through 2015 showed that these plans contained some information about the cost containment initiatives that the judiciary approved. 
For example, under the Defender Services account, the fiscal year 2015 plan states that funding was provided for the conversion of two part-time case-budgeting attorney positions into full-time positions, among other cost containment initiatives. The budget formulation process begins 18 months before the fiscal year. Figure 6 depicts general time frames and activities that may overlap throughout the process.

Judicial Conference and committee meetings—According to the Judicial Conference Budget Committee and Economy Subcommittee chairs and AOUSC officials, the Judicial Conference semiannual sessions in March and September provide all Judicial Conference program committee chairs with the opportunity to discuss the status of new and ongoing cost containment efforts, among other national judiciary policy matters. Judiciary officials told us that these discussions were documented in the Reports on the Proceedings of the Judicial Conference of the United States issued after each semiannual session, and we verified this statement through our analysis of these documents for fiscal years 2007 through 2014. Various Judicial Conference committee chairs also meet during these semiannual Judicial Conference sessions and throughout the year to support the judiciary’s annual budget formulation and execution process, as described earlier. For example, the Chair of the Economy Subcommittee of the Budget Committee told us he regularly meets with program committees to educate them on ways to contain costs; these meetings include in-depth discussions of (1) each program’s budget, (2) the status of new and ongoing cost containment initiatives (including the extent to which the cost containment initiative has reduced costs), and (3) steps the program committee has taken to address long-range budget forecasts. Also, the Judicial Conference usually holds a long-range planning meeting 1 day prior to one or both semiannual Judicial Conference sessions.
As shown in figure 6, the long-range planning meeting is not a formal part of the budget formulation and execution process, but, according to the Budget Committee Chair, provides an opportunity for program committee chairs to discuss judiciary-wide trends and long-range planning issues that are crosscutting within the judiciary (i.e., issues that may affect more than one program committee, such as increasing space and facilities costs). The Chair of the Budget Committee told us that these meetings typically focus on strategic planning and that some, but not all, of the meetings over the past 3 to 4 fiscal years involved discussions of budgetary matters and the potential implications of budget reductions. In addition, the Judicial Conference Executive Committee and Budget Committee held a cost containment summit with program committee chairs in September 2011. The purpose of the summit was to respond to anticipated budgetary shortfalls in fiscal year 2013 and beyond by identifying potential cost containment initiatives that would help mitigate funding cuts to the courts and avoid further loss of staff. For example, the Judicial Conference approved lowering the budget cap for Defender Services and Court Security during the March 2012 Judicial Conference semiannual session. The Budget Committee documented this and other cost containment initiatives identified at the summit in the Cost Containment Update: A Report from the Budget Committee, described in table 2, and subsequent Reports on the Proceedings of the Judicial Conference of the United States.

Information sharing across federal courts—As noted earlier, according to AOUSC officials, the decentralized governance and budgetary structure of the judiciary allows courts and defender organizations to identify opportunities for cost savings and efficiencies to meet local needs.
The judiciary has taken steps to facilitate the identification and sharing of ideas for cost savings and efficiencies among federal courts and defender organizations using various information-sharing mechanisms. For example, officials we interviewed in 8 of 12 circuits, all 4 district courts, and 3 of 4 defender organizations stated that they coordinate with the Judicial Conference committees and AOUSC (such as through AOUSC advisory councils, peer advisory groups, or ad hoc working groups) to identify opportunities for cost savings and efficiencies. For example, an official in one circuit court told us that court officials leverage the semiannual Judicial Conference sessions to meet with their counterparts in other circuit courts to share cost saving and efficiency ideas. Also, officials representing 10 of 12 circuits, all 4 district courts, 2 of 4 bankruptcy courts, and 3 of 4 defender organizations we met with stated that they have regular meetings with colleagues to share ideas about cost-saving and efficiency opportunities. For example, the probation services office and pretrial services office in one district court developed a Budget Consortium to formulate cost savings ideas, such as combining bulk supply purchases to reduce costs. Furthermore, through our interviews with court officials, we learned that some of the opportunities for cost savings and efficiencies identified by local courts have led to national implementation. For example, one official in a district court clerk’s office told us that the court codeveloped a software system that automates criminal debt and restitution processes, which it has been using since 2008 to streamline the process of collections and accounting—thereby increasing processing efficiency and saving costs. In addition, the district court developed guidance for local courts to implement the software system, and according to the district official, other courts began to use the system in June 2013. 
The officials stated that, as of August 2014, 80 district courts were using one component of the software to import Bureau of Prisons and U.S. Department of the Treasury offset payments, and beginning in early 2015, approximately 30 courts received access to all software components (with the actual extent of use of the components varying from court to court).

According to AOUSC and court officials we interviewed, the judiciary’s cost containment initiatives helped to prepare the judiciary for potential budget reductions, but the judiciary still needed to impose a set of emergency measures to achieve the $346 million in budget cuts caused by the 2013 sequestration and faced some planning challenges. According to the judiciary budget officer, the Judicial Conference Executive Committee began to plan for sequestration in January 2012, and the judiciary implemented a final set of emergency measures in March 2013, when sequestration ultimately took effect. The judiciary budget officer and some court and defender organization officials we interviewed stated that planning for the reductions resulting from sequestration was challenging because the estimated percentage reductions changed several times. Figure 7 provides a detailed timeline of judiciary, OMB, and legislative actions taken to prepare for the fiscal year 2013 sequestration and the lapse in fiscal year 2014 appropriations. As previously noted, the 2013 sequestration reduced fiscal year 2013 funding for the judiciary’s Salaries and Expenses account by $239 million; Defender Services account by almost $52 million; Court Security account by $25 million; and Fees for Jurors and Commissioners account by approximately $3 million, among other reductions.
To achieve these reductions, the judiciary identified approximately 33 emergency measures that generally reduced or postponed funding for the remainder of fiscal year 2013 in each of these accounts and reprogrammed available funds (such as prior-year unobligated balances) to areas of the fiscal year 2013 financial plan to mitigate shortfalls. According to Judicial Conference officials, the judiciary designed the emergency measures to address the four main appropriations accounts and to help ensure consistency and equity among members of the judiciary. They stated that many of the measures were temporary, one-time reductions that could not be repeated if future funding levels continued to decline. Table 3 shows examples of the emergency measures the judiciary implemented to achieve the reductions required by the fiscal year 2013 sequestration. As shown in figure 7, the judiciary kept the emergency measures in place until the enactment of fiscal year 2014 appropriations, which returned funding to presequestration levels, or approximately fiscal year 2010 levels, because the judiciary received relatively flat funding in fiscal years 2011 and 2012. Under the judiciary’s decentralized governance structure, individual courts and defender organizations made local decisions about how to manage staff and operations within the reduced allotments imposed by the emergency measures and about any additional spending cuts or actions that might be needed. For example, to absorb reduced salary allotments, courts and defender organizations determined whether to downsize, implement furloughs, do both, or do neither, or whether to take other personnel actions, such as freezing promotions, offering buyouts or early retirements, or implementing layoffs (i.e., involuntary separations), among other actions. AOUSC officials stated that some courts cut hours of operation, closed 1 day per week, or chose not to hear criminal cases every other Friday.
See appendix I for examples of the personnel and related actions that the 12 circuit courts, 4 district court clerks’ offices, 4 bankruptcy courts, 4 probation and pretrial offices, and 4 defender organizations we interviewed reported taking in response to the 2013 sequestration. The emergency measures also reduced nonsalary (i.e., operations and IT) allotments to most court units by 20 percent and to bankruptcy court clerks’ offices by 34 percent. To absorb these reductions, circuit court and district court officials we interviewed told us they reduced staff training and travel, entered into bulk purchase agreements to acquire supplies, and postponed building renovations and maintenance, among other actions. In addition, officials we interviewed in 9 of 12 circuit courts, all 4 district courts, all 4 bankruptcy courts, and all 4 defender organizations stated that they rescoped or delayed cyclical IT replacements (e.g., laptops, printers) and maintenance (e.g., payments for extended warranties) or reduced IT investments in response to the 2013 sequestration. See appendix I for examples of the nonpersonnel actions each court and defender organization we interviewed reported taking. The judiciary implemented actions to help mitigate the impact of sequestration on court and defender organization staff, but officials reported that reprogramming or reducing funding in other areas interrupted cost containment efforts and led to increased costs and inefficiencies. For example, the Judicial Conference Executive Committee reduced nonsalary funding—such as funding for training, IT, supplies, and equipment—and used funding flexibilities (such as prior-year unobligated balances and fee collections) to help centrally fund the resource requirements identified in the judiciary’s fiscal year 2013 financial plan. 
However, AOUSC officials and court and defender organization officials we interviewed in 3 of 12 circuit courts, 2 of 4 district courts, 3 of 4 bankruptcy courts, and 1 of 4 defender organizations stated that diverting funds from IT investments and travel and training as a result of sequestration interrupted cost containment and efficiency efforts, and led to increased costs and risks in some cases. For example, according to AOUSC officials, upgrades to several national IT systems designed to achieve cost savings or improve internal controls—such as to judiciary financial management, human resources, and probation and pretrial case management systems; a national videoconferencing system; and a new national Internet Protocol telephone system—were suspended in fiscal year 2013 because of sequestration. According to the officials, restarting upgrades after they have been suspended for some time is costly, and many upgrade projects have still not been completed. For example, AOUSC officials stated that the delayed rollout of an upgraded financial management system to all courts introduces the risk of technical obsolescence of the legacy financial accounting system, which has the potential to introduce new costs to keep the legacy system operational. In addition, court and defender organization officials stated that they participate in information-sharing and training conferences and meetings—such as circuit judicial conferences and annual or biannual court clerks conferences—to stay proficient in their subject matters and to discuss court administration, including sharing ideas for saving money and increasing efficiency. However, because of reduced funds for travel and training, officials representing 6 of 12 circuit courts stated that they canceled or postponed circuit judicial conferences in 2013 and 2014. 
Also, officials in 6 of 12 circuit courts, 3 of 4 district courts, 2 of 4 bankruptcy courts, and 2 of 4 defender organizations stated that they canceled, reduced, or did not attend training conferences or meetings (e.g., for judges, staff attorneys, defenders, court clerks, and IT staff). Moreover, AOUSC officials estimated that approximately 2,585 federal defender and Criminal Justice Act panel attorneys and paralegals, investigators, and staff did not receive subject matter training (such as substantive legal, forensics, and case management training) as a result of training events canceled because of sequestration and the threat of continued sequestration in fiscal year 2013 and early 2014.

Judiciary officials reported that the 2013 sequestration and fiscal year 2014 lapse in appropriations negatively affected court and defender organization personnel and services to the public, among other effects.

Reduced court staff and implemented furloughs—AOUSC officials stated that one of the most significant effects of the 2013 sequestration was the continuing loss of court staff through attrition, including buyouts and voluntary early retirements, among other actions. According to our analysis of judiciary data, in the 12 months following sequestration, total onboard full-time equivalent staff in federal courts nationwide declined by nearly 1,600 full-time equivalent staff—or by approximately 8 percent. Our analysis also showed that, from fiscal years 2011 to 2014 (including the 2 years of relatively flat funding preceding sequestration), the total number of onboard full-time equivalent staff in federal courts nationwide declined by more than 11 percent—specifically, by 11 percent in circuit courts, 8 percent in district court clerks’ offices, 24 percent in bankruptcy courts, and 7 percent in probation and pretrial services offices.
Furthermore, the judiciary reported that, nationally, by the end of March 2014, there were 3,300—or 15 percent—fewer onboard court staff than in July 2011. Our analysis of judiciary data supports this statement. See figure 8 for the total number of onboard court full-time equivalent staff in circuit courts, district court clerks’ offices, probation and pretrial services offices, and bankruptcy courts from fiscal years 2010 to 2014. To help manage within the reduced salary allotments, some courts and federal defender organizations offered buyouts, early retirements, or a combination of both to employees. AOUSC provided supplemental funding to courts and defender organizations that requested funding and met certain criteria to help pay for these actions. Table 4 shows the total number of buyouts, early retirement offers, and combined buyout and early retirement offers approved by AOUSC and federal public defender organizations to offer locally to staff in fiscal years 2013 and 2014, according to our analysis of judiciary data. Furthermore, 5 of 12 circuit courts, all 4 bankruptcy courts, and 3 of 4 defender organizations reported implementing a reduction in force in response to the 2013 sequestration—some of which involved the involuntary separation of employees. Specifically, officials in 3 of 12 circuit courts and 1 federal defender organization stated they implemented a reduction in force that did not result in any involuntary separations. Officials in 2 of 12 circuit courts, all 4 bankruptcy courts, and 2 of 4 defender organizations stated that they implemented a reduction in force in response to the 2013 sequestration that included at least one involuntary separation. In addition, in fiscal year 2013, federal courts and federal defender organizations furloughed a combined total of more than 3,600 staff (table 5), resulting in reduced wages.
Specifically, according to our analysis of judiciary data, circuit courts, district courts (including probation and pretrial services offices), and bankruptcy courts furloughed approximately 1,400 staff for 1 to 15 days in fiscal year 2013. Also, according to our analysis of judiciary data, over half of the country’s federal public defender organizations furloughed a total of about 2,000 staff for 1 to 16 days in fiscal year 2013. Additionally, according to our analysis of community defender organization data, community defender organizations furloughed 219 staff from 3 to 14 days. None of the circuit courts, district courts, or bankruptcy courts we interviewed implemented furloughs; however, two of the four defender organizations we interviewed implemented furloughs, which resulted in lost wages for furloughed federal defender organization staff.

Defender services—reduced staff, implemented furloughs, and postponed and reduced payments—According to AOUSC officials, the Defender Services account primarily pays for defense attorneys and staff salaries, rent, case-related expenses (such as expert witnesses and interpreters), and Criminal Justice Act panel attorney payments. As a result, they stated, there is less flexibility for absorbing budget reductions other than reducing or furloughing staff, or reducing or postponing panel attorney pay. According to our analysis of judiciary data and community defender organization data, federal public defender and community defender organizations downsized by a net total of approximately 316 onboard full-time equivalent staff—250 federal public defender and 66 community defender full-time equivalent staff—from the end of fiscal year 2012 to the end of fiscal year 2014. See figure 9 for the total number of onboard full-time equivalent staff in federal defender organizations as of the end of fiscal years 2010 to 2014.
Furthermore, according to AOUSC officials, payments to panel attorneys were postponed for the last 10 business days of fiscal year 2013 into fiscal year 2014. In addition, because of the lapse in appropriations at the beginning of fiscal year 2014, the officials stated that payments to panel attorneys were further delayed. Moreover, to maintain projected onboard defender office staffing nationally as of the beginning of fiscal year 2014, the Executive Committee imposed a temporary emergency hourly rate reduction for panel attorneys of $15 an hour from September 1, 2013, to September 30, 2014. According to an AOUSC official, this was the first time in 50 years that the judiciary had to reduce the hourly panel attorney rate, rather than postponing payments as had been done in the past to address shortfalls, an action that significantly reduced panel attorney morale in both the short and long terms.

Probation and pretrial services—reduced staff, mental health and drug testing and treatment services, and law enforcement training—According to our analysis of judiciary data, total onboard probation and pretrial services full-time equivalent staff in district courts nationwide declined by about 400 full-time equivalent staff in fiscal year 2013. Also, the judiciary reduced funding for law enforcement–related expenses—including substance abuse testing and treatment, mental health treatment, and electronic monitoring of federal defendants and offenders on supervised release, among other expenses—by 20 percent compared with funding in the interim fiscal year 2013 plan. According to AOUSC officials, reduced funding for probation and pretrial officer staff throughout the courts equates to less deterrence, detection, and response to possible criminal activity by federal defendants and offenders in the community.
In addition, probation and pretrial officials we interviewed in all 4 district courts stated that staff reductions and reduced funding for treatment limited efforts to reduce recidivism, and some noted increased potential risks to public safety. Furthermore, according to judiciary training records and AOUSC officials, the judiciary suspended 4 of 10 planned new officer training courses in 2012 and 2 of 9 new officer courses at the Federal Law Enforcement Training Center in Charleston, South Carolina, in 2013 as a result of the sequestration. According to AOUSC officials, the suspended courses have led to a 13-month backlog of required law enforcement training for new probation officers, which means that some new probation and pretrial services officers had been supervising defendants and offenders on supervised release without basic law enforcement training, putting their lives and public safety at risk.

Reduced services to the public—Officials we interviewed representing 6 of 12 circuit courts, 1 district court, and 2 of 4 defender organizations reported that they reduced court services to the public, such as reducing the number of hours open to the public, as a result of sequestration. Furthermore, AOUSC officials reported that courts across the country reduced their court hours or services as a result of sequestration (such as not holding hearings or trials of criminal cases on Fridays because federal defenders and U.S. Attorney’s Office staff were furloughed), but AOUSC does not maintain nationwide data on the total number of court closures or number of reduced hours.
Reduced court security—According to the judiciary’s fiscal year 2013 financial plan, to implement the $25 million reduction to court security resulting from sequestration, the judiciary reduced funding for security systems and equipment by approximately 25 percent, or about $13 million; reduced court security officer hours by 25 hours per officer per year ($4.3 million); and reduced funding for DHS Federal Protective Service building security services by $1 million. According to U.S. Marshals Service officials, security system funding reductions most affected funding for additional and replacement equipment, perimeter security, and access control systems.

Reduced employee morale, recruitment, and retention—Court officials we interviewed in 8 of 12 circuit courts, 2 of 4 district courts, 2 of 4 bankruptcy courts, and all 4 defender organizations reported that reduced staffing levels because of the 2013 sequestration have led to other negative effects, including increased workloads, decreased morale, and retention and recruitment challenges. For example, officials we interviewed in 6 of 12 circuit courts, 3 of 4 district courts, 2 of 4 bankruptcy courts, and all 4 defender organizations stated that the 2013 sequestration or the lapse in fiscal year 2014 appropriations weakened employee morale, on the basis of their observations and interactions with employees. The Budget Committee Chair told us that increased funding for the judiciary in fiscal years 2014 and 2015 has allowed courts to begin filling vacant positions, but that most courts have been cautious about increasing their staff levels after the experience of the 2013 sequestration and for fear of future budget reductions.

Increased median civil case disposition times, though other factors could contribute—AOUSC officials reported that the median civil case disposition time for district courts increased about 16 percent—from 7.3 months to 8.5 months—from October 1, 2011, to September 30, 2013.
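The reported 16 percent figure follows directly from the two medians cited above; as an illustrative arithmetic check only (not part of the judiciary's or GAO's analyses), it can be computed as follows:

```python
# Median civil case disposition times for district courts (months),
# as reported in the text; illustrative check of the "about 16 percent" figure.
median_before = 7.3  # as of October 1, 2011
median_after = 8.5   # as of September 30, 2013

pct_increase = (median_after - median_before) / median_before * 100
print(f"{pct_increase:.1f}%")  # prints "16.4%"
```

The result, roughly 16.4 percent, is consistent with the "about 16 percent" increase AOUSC officials reported.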
Our analysis of judiciary data supports this statement. Judicial Conference and AOUSC officials stated that the 2013 sequestration probably contributed to these delays, but the judiciary has not conducted analyses to isolate the effects of sequestration on civil case disposition times. According to an AOUSC official, years of relatively flat budgets in fiscal years 2011 and 2012, actions taken to implement sequestration, and federal judgeship vacancies all may have contributed to the civil case disposition time increases, making it difficult to identify which one or more of these factors may be causing an increasing backlog of cases and growing wait times. Additionally, officials we interviewed in 7 of 12 circuit courts and 3 of 4 district courts stated that case disposition in their courts was delayed as a result of the 2013 sequestration. For example, officials we met with in 1 district court stated that the district court prioritized criminal trials and postponed civil jury trials because, under the Speedy Trial Act of 1974, courts are required to hold criminal trials within specified time frames. However, the district court clerk stated that judicial vacancies also may affect the court’s ability to hear civil cases in a timely manner. Furthermore, in October 2013, during the 16-day lapse in appropriations, the judiciary was able to continue operating using filing fee collections and no-year funds. Nonetheless, officials we interviewed in 8 of 12 circuit courts, 3 of 4 district courts, and 1 of 4 bankruptcy courts reported that the lapse in appropriations still contributed to case-processing delays, or that uncertainty due to the potential lapse led to other negative effects on operations, such as wasted time in planning for a potential lapse. For example, officials we interviewed in 7 of 12 circuit courts, 2 of 4 district courts, and 1 of 4 bankruptcy courts stated they received civil case motions to stay, or suspend, cases from DOJ because U.S.
Attorneys or U.S. Trustees were not available and had to postpone other cases because federal defenders were furloughed. For example, one district court clerk stated that her office had to process and docket the DOJ orders to suspend about 200 cases and then, when the appropriations lapse ended, had to file motions to “unstay,” or remove from suspension, the orders and catch up on the approximately 200 cases, an action that she said was very cumbersome and inefficient.

Identifying and implementing actions to save costs and reliably estimating cost savings achieved is critical to helping the judiciary and Congress assess the progress of cost containment initiatives and identify available resources in a constrained budgetary environment. In September 2004, the Judicial Conference approved a Cost Containment Strategy for the Federal Judiciary: 2005 and Beyond to help slow the growth of its major cost drivers—including rent and personnel costs—and the judiciary has implemented a wide range of initiatives in these and other major cost containment categories over the past 10 years. According to AOUSC, court, and defender organization officials we interviewed, several of these initiatives helped to mitigate the negative effects of the 2013 sequestration. However, the judiciary does not fully know how much money it has saved as a result of its cost containment initiatives because it has not developed a reliable method of estimating cost savings achieved for major initiatives. For example, the judiciary estimated that it avoided nearly $1.5 billion from fiscal year 2005 through fiscal year 2015 primarily as a result of its cost containment initiatives. However, we found that this estimate has limited reliability because it did not include all savings realized, included savings not attributable to cost containment initiatives, did not always include the costs associated with implementing initiatives, and was not always well documented to support estimated savings.
Developing a reliable method to estimate cost savings achieved for major initiatives and regularly reporting such cost savings could provide the judiciary and Congress with more accurate and complete financial information for assessing the progress of the judiciary’s cost containment initiatives, informing judiciary decision making related to its initiatives, and informing congressional oversight and decision making to help ensure that the judiciary continues to render justice in a fair, timely, and efficient manner.

To provide more reliable information for assessing the progress of its cost containment efforts and for informing judiciary and congressional oversight and decision making, we recommend that the Director of AOUSC take the following two actions for major cost containment initiatives (as determined by the judiciary): (1) develop a reliable method for estimating cost savings achieved (i.e., one that ensures that cost savings are calculated in an accurate and complete manner), and (2) regularly report estimated cost savings achieved.

We provided copies of a draft of this report to AOUSC, the Federal Judicial Center, the U.S. Sentencing Commission, GSA, and the Marshals Service for review and comment. These agencies provided technical comments that we incorporated as appropriate. AOUSC provided written comments on a draft of this report, which are printed in full in appendix IV. In its comment letter, AOUSC stated that the judiciary appreciates and takes seriously the recommendations and findings in the report and will give them careful consideration. Specifically, AOUSC commented that improvements can always be made to administrative and accounting processes to further improve the judiciary’s reporting on cost containment activities. According to AOUSC, in a time of constrained resources, however, the expenditure of resources to develop new methodologies for cost savings estimates must align with the judiciary’s business needs.
AOUSC said that the judiciary will carefully evaluate any additional methods for estimating cost savings to ensure that a strong business case justifies the expenditure of scarce resources for that purpose and that any new reporting is cost effective and of direct use to the judiciary and Congress. As we stated in the report, developing a reliable method for estimating accurate and complete cost savings for major cost containment initiatives could help the judiciary better assess the effectiveness of its cost containment strategy and help inform decision making related to ongoing and new cost containment initiatives. This is especially important in a time of constrained resources. Additionally, developing a reliable method for estimating accurate and complete cost savings for major cost containment initiatives and regularly reporting such cost savings estimates could help the judiciary provide Congress with more accurate and complete financial information for oversight and decision making. Furthermore, we identified several potential cost-effective approaches that the judiciary might consider for developing a reliable method for estimating and reporting cost savings from major cost containment initiatives. In addition, AOUSC commented that the draft report’s emphasis on retroactive cost estimating may give the appearance of undervaluing the judiciary’s long-term budget planning and its 10 years of cost containment activity, which enabled the judicial branch to continue to serve the public during sequestration. We believe that the draft report acknowledges and values the judiciary’s long-term budget planning and its 10 years of cost containment activity. 
Specifically, the draft report identifies and describes the judiciary’s long-range budget planning process and strategic policy documents, such as the Cost Containment Strategy for the Federal Judiciary: 2005 and Beyond, among others, as mechanisms the judiciary uses to identify opportunities for cost savings and efficiencies, and describes several examples of the cost containment initiatives that the judiciary has undertaken in the past 10 years, including a list of multiple examples of the judiciary’s cost containment initiatives in all categories in appendix III. Further, we report that the judiciary’s cost containment initiatives helped to prepare the judiciary for potential budget reductions, according to AOUSC and court officials we interviewed. The report also includes examples of cost-saving actions that courts and defender organizations we interviewed took in the years prior to sequestration that helped to mitigate the negative effects of sequestration, according to these entities (for example, see app. I).

We are sending copies to the appropriate congressional committees and the Director of AOUSC, Director of the Federal Judicial Center, Chair of the U.S. Sentencing Commission, the Attorney General, and the Administrator of GSA. In addition, this report is available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9627 or MaurerD@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V.

This appendix contains tables showing examples of the personnel and nonpersonnel actions that the officials we interviewed in 12 circuit courts; 4 district courts, including 4 bankruptcy courts and 4 probation and pretrial offices; and 4 defender organizations reported taking in response to the 2013 sequestration.
2013 across-the-board rescission (0.2%). Retirement Fund costs increased as a result of an actuarial experience study that shows increased longevity, the recent litigation regarding judges’ pay, and the lower discount rates developed and published by the Office of Personnel Management. This includes nonexempt mandatory spending in the Judiciary Filing Fees and Registry Administration accounts.

Table 16 shows examples of the cost containment initiatives the judiciary has under way, in all categories, as of July 2015, and the year the judiciary began implementing the initiative.

In addition to the contact named above, Glenn Davis (Assistant Director), David Alexander, Chuck Bausell, Jennifer Bryant, Keith Cunningham, Elizabeth Curda, Katherine Davis, George Depaoli, Gustavo Fernandez, Eric Hauswirth, Leslie Gordon, Kristen Kociolek, Thomas McCabe, Linda Miller, Michelle Sager, Janay Sam, Lauren Sherman, Janet Temko-Blinder, and Ellen Wolfe made key contributions to this report.
In March 2013, the President ordered spending reductions, known as sequestration, across the federal government. As a result, the federal judiciary's resources were reduced by about $346 million over the remainder of fiscal year 2013. The judiciary has been affected by decreasing federal resources, such as the sequestration, and has been implementing various cost containment initiatives.

GAO was asked to evaluate judiciary cost savings actions and the effects of the 2013 sequestration. This report examines, among other things, (1) judiciary actions to achieve cost savings and efficiencies, and the extent to which the judiciary has estimated cost savings; and (2) judiciary actions to implement the 2013 sequestration and any effects from these actions on judiciary personnel and operations. GAO analyzed relevant judiciary documents and collected information from and interviewed judiciary officials in all 12 regional circuit courts and the district court, bankruptcy court, and federal defender organization in four judicial districts, selected to obtain a diverse group of districts on the basis of funding level, among other factors.

The federal judiciary has implemented cost containment initiatives for over 10 years, but the judiciary does not fully know how much it has saved because it has not developed a reliable method for estimating cost savings achieved. For example, GAO found that the judiciary's estimate of cost savings primarily attributed to cost containment initiatives since fiscal year 2005—nearly $1.5 billion, relative to projected costs—does not include all savings realized from cost containment initiatives, includes amounts that did not result from initiatives, does not always include the costs associated with implementing initiatives, and was not always supported by adequate documentation. Examples of cost-saving initiatives are establishing rent budget caps and providing incentives to courts for work efficiency.
Judiciary officials confirmed, for example, that $291 million of the $538 million in space and facilities estimated savings is the result of lower than anticipated rent inflation. Also, an estimated $89 million in savings resulting from information technology (IT) initiatives did not include all savings (such as savings from an IT-based solution to manage and administer the jury function) or provide adequate documentation of costs to implement the initiatives. Judiciary officials stated that they discuss cost containment initiatives in the judiciary's congressional budget justifications, among other documents. GAO analyzed the judiciary's congressional budget justifications and found that these documents did not consistently report information on cost savings achieved for major initiatives. Reliable information on and reporting of estimated cost savings achieved for major initiatives could help the judiciary better assess the progress of its initiatives and help inform congressional oversight and decision making.

The judiciary imposed emergency measures in response to the 2013 sequestration and has identified negative effects of the sequestration on the judiciary. Examples of emergency measures were postponing and reducing payments to private attorneys representing individuals who cannot afford counsel in criminal cases. One of the most significant effects of sequestration cited by judiciary officials was continued court staff loss. According to GAO analysis of judiciary data, in the 12 months following sequestration, total onboard court full-time equivalent staff declined by nearly 1,600—or about 8 percent (see fig.). Also, over 3,600 court and defender organization staff were furloughed in fiscal year 2013. Funding for expenses such as drug abuse treatment for offenders was reduced by 20 percent. Further, according to judiciary officials, some courts and defender organizations reduced services, such as closing 1 day per week.
[Figure: Total Onboard Court Full-Time Equivalent Staff, as of End of Fiscal Years 2010 to 2014]

GAO recommends that the Director of the Administrative Office of the United States Courts (AOUSC) take the following two actions for major cost containment initiatives: (1) develop a reliable method for estimating cost savings achieved, and (2) regularly report estimated cost savings achieved. AOUSC said it will seriously consider GAO's recommendations.
The World Bank and IMF have classified 42 countries as heavily indebted and poor; three quarters of these are in Africa. In 1996, creditors agreed to create the HIPC Initiative to address concerns that some poor countries would have debt burdens greater than their ability to pay, despite debt relief from bilateral creditors. In 1999, in response to concerns about the continuing vulnerability of these countries, the World Bank and the IMF agreed to enhance the HIPC Initiative by more than doubling the estimated amount of debt relief and increasing the number of potentially eligible countries. A major goal of the HIPC Initiative is to provide recipient countries with a permanent exit from unsustainable debt burdens. To date, 27 poor countries have reached their decision points, and 11 of these have reached their completion points. In 1996, to help multilateral creditors meet the cost of the HIPC Initiative, the World Bank established a HIPC Trust Fund with contributions from member governments and some multilateral creditors. The HIPC Trust Fund has received about $3.4 billion (nominal) in bilateral pledges and contributions, including $750 million in pledges from the U.S. government. The World Bank, AfDB, and IaDB face a combined financing shortfall of $7.8 billion in present value terms under the existing HIPC Initiative (see table 1). Financing the enhanced HIPC Initiative remains a major challenge for the World Bank. The total cost of the enhanced HIPC Initiative to the World Bank for 34 countries is estimated at $9.5 billion. As of June 30, 2003, the World Bank had identified $3.5 billion in financing, resulting in a gap of about $6 billion (see table 1). Donor countries will be reviewing the financing gap during the IDA-14 replenishment discussions beginning in spring 2004. If donor countries close the financing gap through future replenishments, we estimate that the U.S. 
government could be asked to contribute $1.2 billion, which is based on its historical replenishment rate of 20 percent to IDA. Over 70 percent of the funds IDA has identified thus far come from transfers of IBRD’s net income to IDA. Although IBRD has not committed any of its net income for HIPC debt relief beyond 2005, we estimate that the financing gap of $6 billion could be reduced to about $3.5 billion, or by about 42 percent, if the net income transfers from the IBRD continue. Similarly, the potential U.S. share decreases by the same percentage, from $1.2 billion to about $700 million. However, transferring more of IBRD’s net income to HIPC debt relief could come at the expense of other IBRD priorities.

The total cost of the enhanced HIPC Initiative to the AfDB for its 32 member countries is estimated at about $3.5 billion (see table 1). As of September 2003, the AfDB has identified financing of approximately $2.3 billion, including $2 billion from the HIPC Trust Fund and about $300 million from its own resources. Thus, AfDB is faced with a financing shortfall of about $1.2 billion in present value terms. We estimate that AfDB will need about $400 million to cover its shortfall for its 23 eligible countries, as well as about $800 million for its 9 potentially eligible countries. In addition, we estimate that the U.S. share of the AfDB’s financing shortfall is between $132 million and $348 million, depending on the method used to close the $1.2 billion shortfall.

The IaDB expects to provide about $1.4 billion for HIPC debt relief to four countries—Bolivia, Guyana, Honduras, and Nicaragua. Most of the relief is for debt owed to the Fund for Special Operations (FSO), the concessional lending arm of the IaDB that provides financing to the bank’s poorer members. As of January 2004, the IaDB has identified financing for the full $1.4 billion, about $200 million from donor contributions through the HIPC Trust Fund and $1.2 billion through its own resources.
Although the IaDB is able to cover its full participation in the HIPC Initiative, the institution faces about a $600 million reduction in the lending resources of its FSO lending program from 2009 through 2019 as a direct consequence of providing HIPC debt relief. According to IaDB officials, the FSO will not have enough money to lend from 2009 through 2013. To eliminate this shortfall, donor countries may be asked to provide the necessary funds through a future replenishment contribution. Assuming that donor countries agree to close the financing gap, we estimate that the U.S. government could be asked to contribute about $300 million so that the FSO can continue lending to poor countries after 2008. This estimate is based on the 50-percent rate at which the United States historically contributes to the FSO.

The $7.8 billion shortfall for the three MDBs is understated for two reasons. First, the estimated financing shortfall for two institutions—IDA and the AfDB—is understated because the data for four likely recipient countries—Laos, Liberia, Somalia, and Sudan—are unreliable. The World Bank considers existing estimates of the countries’ total debt and outstanding arrears to be incomplete and subject to significant change, and it is uncertain when the countries will reach their decision points. Similarly, the estimated costs of debt relief for three of AfDB’s countries—Liberia, Somalia, and Sudan—are likely understated due to data reliability concerns. Second, the financing shortfall does not include any additional relief that may be provided to countries because their economies deteriorated since they originally qualified for debt relief.
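The U.S. share estimates quoted above each follow from multiplying a bank's funding gap by the historical U.S. contribution rate named in this testimony. The sketch below simply reproduces that arithmetic (all dollar figures, in billions, and contribution rates come from the statement; the one-line function is an illustration, not the method actually used to develop the estimates):

```python
# Illustrative check of the U.S. share estimates cited in this testimony.
# Figures (in billions of dollars) and rates are taken from the text.

def us_share(shortfall, rate):
    """U.S. share if donors close a shortfall at the historical U.S. rate."""
    return shortfall * rate

# World Bank (IDA): $6 billion gap at the 20% historical replenishment rate
ida_share = us_share(6.0, 0.20)          # $1.2 billion

# If IBRD net income transfers continue, the gap falls to about $3.5 billion
ida_share_reduced = us_share(3.5, 0.20)  # about $700 million

# IaDB FSO: $600 million lending reduction at the 50% historical U.S. rate
fso_share = us_share(0.6, 0.50)          # about $300 million
```

Because the share is proportional, the roughly 42 percent reduction in the World Bank gap (from $6 billion to $3.5 billion) carries through directly to the U.S. figure, which is why $1.2 billion falls to about $700 million.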
Under the enhanced HIPC Initiative, creditors and donors could provide countries with additional debt relief above the amounts agreed to at their decision points, referred to as “topping up.” This relief could be provided when external factors, such as movements in currency exchange rates or declines in commodity prices, cause countries’ economies to deteriorate, thereby affecting their ability to achieve debt sustainability. The World Bank and IMF project that seven to nine countries may be eligible for additional debt relief, and their preliminary estimates range from $877 million to about $2.3 billion, depending on whether additional bilateral relief is included or excluded from the calculation. The additional cost to the U.S. government could range from $106 million to $207 million for assistance to the World Bank and AfDB, based on the U.S. historical replenishment rates to these banks. Furthermore, the topping-up estimate considered only the 27 countries that have reached their decision or completion point; the estimate may rise as additional countries reach their decision points. Even if the $7.8 billion shortfall is fully financed, we estimate that, if exports grow slower than the World Bank and IMF project, the 27 countries that have qualified for debt relief may need more than $375 billion in additional assistance to help them achieve their economic growth and debt relief targets through 2020. This $375 billion consists of $153 billion in expected development assistance, $215 billion in assistance to fund shortfalls from lower export earnings, and at least $8 billion for debt relief (see fig. 1). If the United States decides to help fund the $375 billion, we estimate it would cost approximately $52 billion over 18 years. According to our analysis of World Bank and IMF projections, the expected level of development assistance for the 27 countries is $153 billion through 2020. 
This estimate assumes that the countries will follow their World Bank and IMF development programs, including undertaking recommended reforms. It also assumes that countries achieve economic growth rates consistent with reducing poverty and maintaining long-term debt sustainability. These conditions will help countries meet their development objectives, including the Millennium Development Goals that world leaders committed to in 2000. These goals include reducing poverty, hunger, illiteracy, gender inequality, child and maternal mortality, disease, and environmental degradation. Another goal calls on rich countries to build stronger partnerships for development and to relieve debt, increase aid, and give poor countries fair access to their markets and technology. We estimate that 23 of the 27 HIPC countries will earn about $215 billion less from their exports than the World Bank and IMF project. The World Bank and IMF project that all 27 HIPC countries will become debt sustainable by 2020 because their exports are expected to grow at an average of 7.7 percent per year. However, as we have previously reported, the projected export growth rates are overly optimistic. We estimate that export earnings are more likely to grow at the historical annual average of 3.1 percent per year—less than half the rate the World Bank and IMF project. Under lower, historical export growth rates, countries are likely to have lower export earnings and unsustainable debt levels (see table 2). We estimate the total amount of the potential export earnings shortfall over the 2003 to 2020 projection period to be $215 billion. High export growth rates are unlikely because HIPC countries rely heavily on primary commodities such as coffee, cotton, and copper for much of their export revenue. Historically, the prices of these commodities have fluctuated, often downward, resulting in lower export earnings and worsening debt indicators. 
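The scale of the $215 billion estimate reflects how quickly a gap between growth rates compounds over 18 years. The sketch below uses a hypothetical country with $1 billion of exports in the base year and a simple compound-growth model; both are illustrative assumptions of ours, not the World Bank/IMF projection model:

```python
# Cumulative exports over the 18-year projection period under the two
# growth assumptions, for a hypothetical country with $1.0 billion in
# base-year exports (the base amount is invented for illustration).

BASE = 1.0            # base-year exports, $ billions (hypothetical)
YEARS = range(1, 19)  # 18 projection years, 2003 through 2020

projected = sum(BASE * 1.077 ** t for t in YEARS)   # 7.7% World Bank/IMF rate
historical = sum(BASE * 1.031 ** t for t in YEARS)  # 3.1% historical rate

# The slower-growth path yields more than a third less in cumulative
# export earnings over the period.
shortfall = projected - historical
print(f"cumulative shortfall: ${shortfall:.1f} billion per $1 billion of base exports")
```

Scaling this kind of per-country gap across the 23 affected countries' actual export bases is, in broad terms, how a few percentage points of growth translate into a shortfall on the order of $215 billion.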
A 2003 World Bank report found that the World Bank/IMF growth assumptions had been overly optimistic and recommended more realistic economic forecasts when assessing debt sustainability. Since HIPC countries are assumed to follow their World Bank and IMF reform programs, any export shortfalls are considered to be caused by factors outside their control such as weather and natural disasters, lack of access to foreign markets, or declining commodity prices. Although failure to follow the reform program could result in the reduction or suspension of development assistance, export shortfalls due to outside factors would not be expected to have this result. Therefore, if countries are to achieve economic growth rates consistent with their development goals, donors would need to fund the $215 billion shortfall. Without this additional assistance, countries would grow more slowly, resulting in reduced imports, lower gross domestic product (GDP), and lower government revenue. These conditions could undermine progress toward poverty reduction and other goals. Even if donors make up the export earnings shortfall, more than half of the 27 countries will experience unsustainable debt levels. We estimate that these countries will require $8.5 to $19.8 billion more to achieve debt sustainability and debt-service goals. After examining 40 strategies for providing debt relief, we narrowed our analysis to three specific strategies: (1) switching the minimum percentage of loans to grants for future multilateral development assistance for each country to achieve debt sustainability, (2) paying debt service in excess of 5 percent of government revenue, and (3) combining strategies (1) and (2). We chose these strategies because they maximize the number of countries achieving debt sustainability while minimizing costs to donors. 
We found that, with this debt relief, as many as 25 countries could become debt sustainable and all countries would achieve a debt service-to-revenue ratio below 5 percent over the entire 18-year projection period (see table 3). In the first strategy, multilateral creditors switch the minimum percentage of loans to grants for each country to achieve debt sustainability in 2020. We estimate that the additional cost of this strategy would be $8.5 billion. The average percentage of loans switched to grants for all countries under this strategy would be 33.5 percent. Twelve countries are projected to be debt sustainable with no further assistance. In addition, 13 countries would achieve sustainability by switching between 2 percent (Benin) and 96 percent (São Tomé and Príncipe) of new loans to grants. A total of 25 countries could be debt sustainable by 2020, although only 2 countries would achieve the 5-percent debt service-to-revenue target over the entire period. The second strategy is aimed at reducing each country’s debt-service burden. Under this strategy, donors would provide assistance to cover annual debt service above 5 percent of government revenue. We estimate that this strategy would cost an additional $12.6 billion to achieve the goal of 5-percent debt service to revenue for all countries throughout the projection period. Under this strategy, no additional countries become debt sustainable other than the 12 that are already projected to be debt sustainable with no further assistance. While this strategy would free significant resources for poverty reduction expenditures, it could provide an incentive for countries to pursue irresponsible borrowing policies. By guaranteeing that no country would have to pay more than 5 percent of its revenue in debt service, this strategy would separate the amount of a country’s borrowing from the amount of its debt repayment. 
Consequently, it could encourage countries to borrow more than they are normally able to repay, increasing the cost to donors and reducing the resources available for other countries. The third strategy combines strategies 1 and 2 to achieve both debt sustainability and a lower debt-service burden. Under this strategy, multilateral creditors would first switch the minimum percentage of loans to grants to achieve debt sustainability, and then donors would pay debt service in excess of 5 percent of government revenue. We estimate that this strategy would cost an additional $19.8 billion, including $8.5 billion for switching loans to grants, and $11.3 billion for reducing debt service to 5 percent of revenue. Under this strategy, 25 countries would achieve debt sustainability in 2020—that is, 13 countries in addition to the 12 that are projected to be debt sustainable with no further assistance. All 27 countries would reach the 5-percent debt-service goal for the duration of the projection period. However, similar to the debt-service strategy above, this strategy dissociates borrowing from repayment and could encourage irresponsible borrowing policies. If the United States decides to help fund the $375 billion, we estimate that it could cost approximately $52 billion over 18 years, both in bilateral grants and in contributions to multilateral development banks. This amount consists of $24 billion, which represents the U.S. share of the $153 billion in expected development assistance projected by the World Bank and IMF, as well as approximately $28 billion for the increased assistance to the 27 countries. Historically, the United States has been the largest contributor to the World Bank and IaDB, and the second largest contributor to the AfDB, providing between 11 and 50 percent of their funding. The U.S. share of bilateral assistance to the 27 countries we examined has historically been about 12 percent. 
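The debt-service component of the second and third strategies reduces to a simple cap rule: donors cover whatever portion of a year's scheduled debt service exceeds 5 percent of government revenue. A minimal sketch with hypothetical figures (the revenue and debt-service amounts are invented for illustration, not actual country data):

```python
def donor_assistance(debt_service, revenue, cap=0.05):
    """Donor payment needed so a country pays at most `cap` (here 5 percent)
    of its government revenue in debt service in a given year."""
    return max(0.0, debt_service - cap * revenue)

# Hypothetical country: $400 million in revenue, $35 million in scheduled
# debt service. The country pays 5% of revenue ($20 million) and donors
# cover the remaining $15 million.
needed = donor_assistance(debt_service=35.0, revenue=400.0)

# A country whose scheduled service is already below the cap needs nothing.
none_needed = donor_assistance(debt_service=15.0, revenue=400.0)
```

Summing this quantity across countries and years is, in outline, what produces the $12.6 billion estimate for the second strategy; and because the cap ties repayment to revenue rather than to the amount borrowed, it also creates the incentive problem discussed above.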
We also analyzed the impact of fluctuations in export growth on the likelihood of these countries achieving debt sustainability. The export earnings of HIPC countries experience large year-to-year fluctuations due to their heavy reliance on primary commodities, as well as weather extremes, natural disasters, and other factors. We found that the higher a country’s export volatility, the lower its likelihood of achieving debt sustainability. For example, Honduras has low export volatility, resulting in little impact on its debt sustainability. In contrast, Rwanda has very high export volatility, which greatly lowers its probability of achieving debt sustainability. Since volatility in export earnings reduces countries’ likelihood of achieving debt sustainability, it is also likely to further increase donors’ cost as countries may require an even greater than expected level of debt relief to achieve debt sustainability.

Mr. Chairman and Members of the Committee, this concludes my prepared statement. I will be happy to answer any questions you may have.

For additional information about this testimony, please contact Thomas Melito, Acting Director, International Affairs and Trade, at (202) 512-9601 or Cheryl Goodman, Assistant Director, International Affairs and Trade, at (202) 512-6571. Other individuals who made key contributions to this testimony included Bruce Kutnick, Barbara Shields, R.G. Steinman, Ming Chen, Robert Ball, and Lynn Cothern.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Heavily Indebted Poor Countries (HIPC) Initiative, established in 1996, is a bilateral and multilateral effort to provide debt relief to poor countries to help them achieve economic growth and debt sustainability. Multilateral creditors are having difficulty financing their share of the initiative, even with assistance from donors. Under the existing initiative, many countries are unlikely to achieve their debt relief targets, primarily because their export earnings are likely to be significantly less than projected by the World Bank and International Monetary Fund (IMF). In a recently issued report, GAO assessed (1) the projected multilateral development banks' funding shortfall for the existing initiative and (2) the amount of funding, including development assistance, needed to help countries achieve economic growth and debt relief targets.

The Treasury, World Bank, and African Development Bank commented that historical export growth rates are not good predictors of the future because significant structural changes are under way in many countries that could lead to greater growth. We consider these historical rates to be a more realistic gauge of future growth because of these countries' reliance on highly volatile primary commodities and other vulnerabilities such as HIV/AIDS.

The three key multilateral development banks we analyzed face a funding shortfall of $7.8 billion in 2003 present value terms, or 54 percent of their total commitment, under the existing HIPC Initiative. The World Bank has the most significant shortfall--$6 billion. The African Development Bank has a gap of about $1.2 billion. Neither has determined how it would close this gap. The Inter-American Development Bank is fully funding its HIPC obligation by reducing its future lending resources to poor countries by $600 million beginning in 2009. We estimate that the cost to the United States, based on its rate of contribution to these banks, could be an additional $1.8 billion.
However, the total estimated funding gap is understated because (1) the World Bank does not include costs for four countries for which data are unreliable and (2) none of the three banks includes estimates for additional relief that may be required because countries' economies deteriorated after they qualified for debt relief.

Even if the $7.8 billion gap is fully financed, we estimate that the 27 countries that have qualified for debt relief may need an additional $375 billion to help them achieve their economic growth and debt relief targets by 2020. This $375 billion consists of $153 billion in expected development assistance, $215 billion to cover lower export earnings, and at least $8 billion in debt relief. Most countries are likely to experience higher debt burdens and lower export earnings than the World Bank and IMF project, leading to an estimated $215 billion shortfall over 18 years. To reach debt targets, we estimate that countries will need between $8 billion and $20 billion, depending on the strategy chosen. Under these strategies, multilateral creditors switch a portion of their loans to grants and/or donors pay countries' debt service that exceeds 5 percent of government revenue. Based on its historical share of donor assistance, the United States may be called upon to contribute about 14 percent of this $375 billion, or approximately $52 billion over 18 years.
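As a quick check on the figures above, the potential U.S. cost is the sum of two components reported in the study, and the implied overall rate matches the roughly 14 percent share cited (figures in billions of dollars, from the report):

```python
# Components of the potential U.S. cost ($ billions, from the report)
share_of_expected_assistance = 24.0   # U.S. share of the $153 billion in expected assistance
share_of_increased_assistance = 28.0  # U.S. share of the additional assistance

total_us_cost = share_of_expected_assistance + share_of_increased_assistance  # $52 billion

overall_share = total_us_cost / 375.0  # about 0.139, i.e., roughly 14 percent
```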
The Congress established the EZ/EC program in the Omnibus Budget Reconciliation Act of 1993 (P.L. 103-66, Aug. 10, 1993). Under the act, the communities that wanted to participate in the program had to (1) meet specific criteria for characteristics such as geographic size and poverty rate and (2) prepare a strategic plan for implementing the program. The act also specified that the Secretary of Agriculture could designate up to 3 rural EZs and 30 rural ECs on the basis of their strategic plans. The act also amended title XX of the Social Security Act to authorize the use of the EZ/EC SSBG funds for the program and placed increased authority for funding decision-making with the local EZ/EC governance structures.

Historically, the funds from the SSBG program that were allocated to the states could be used only for social service activities, such as programs to assist and feed children. However, under the EZ/EC program, the act expanded the permissible uses of the SSBG’s funds by allowing their use for such activities as purchasing or improving land and facilities or for providing cash payments to individuals for medical care.

In addition to the EZ/EC SSBG funds, all of the designated communities are expected to receive several types of federal assistance. Businesses located in the EZs and the ECs are eligible for low-interest loans, financed through tax-exempt bonds issued by a state or local governmental unit, to provide facilities and land for businesses in the communities. In addition, the businesses located within EZs (1) are eligible to receive tax credits on the wages paid to the employees who live and work in the EZ and (2) may deduct higher levels of depreciation expenses. A number of federal departments and agencies also made a commitment to give all EZs and ECs special consideration in the competitions for funds from many other federal programs and to work cooperatively with them in overcoming regulatory impediments.
The federal assistance received by the EZs and ECs must be spent in accordance with the communities’ strategic plans. These plans outline how the communities would achieve their goals, including ensuring the active participation of the members of the community, the local private and nonprofit entities, and the federal, state, and local governments. The EZs’ and ECs’ progress in achieving their goals is to be based on the performance benchmarks established by the communities, not on the amount of federal money spent. These benchmarks explain in some detail how the locality intends to achieve its goals. The benchmark document, which becomes a part of the overall plan, includes the specific projects that the EZ or EC will undertake and timelines showing when the projects will be instituted or completed. The benchmark document, which generally looks ahead 2 years, requires continuous review and modifications to accommodate the changes in the community’s needs as well as scheduling problems. These benchmark projects are to serve the EZs and ECs, as well as USDA, as an important management tool and provide the primary basis for evaluating the progress being made. The Department of Health and Human Services (HHS), USDA, and the states play key roles in administering the program. HHS makes grants to the states, and the designated state agency obligates the funds to the EZs and ECs as it receives, reviews, and approves requests from them to draw down funds for a particular benchmark project. In addition, the state must ensure that the requested expenditure is allowable under the state’s standards. USDA, as the lead federal agency for the rural EZ/EC program, is responsible for helping the rural EZs and ECs achieve their goals by evaluating their progress and providing technical assistance. The federal funds invested in the rural EZ/EC program, including loans, grants, and forgone tax revenues, will far exceed the $208 million in EZ/EC SSBG funds allocated to the program. 
In fact, we estimate that federal funds exceeding $1 billion will be invested in the program over its 10-year life. This estimate includes the EZ/EC SSBG funds, plus an estimated $428 million from tax incentives and about $600 million from USDA’s loan and grant programs. This $1 billion estimate does not include the other significant sources of investments in the program that will be provided by other federal agencies. Estimates from these sources were not available. If the program is successful, some offsetting benefits, such as loan repayments, increased tax revenue, and reduced welfare costs, should occur in the communities. The EZ/EC program has made three tax incentives available to the communities for economic development. The first two, available only to EZs, are (1) the Empowerment Zone Employment Credit, which provides qualified employers with a tax credit of up to $3,000 for each employee who lives and works in the EZ, and (2) the Empowerment Zone Expensing Allowance, which allows a qualified business to take a special depreciation deduction of up to $20,000 (for an annual total of up to $37,500) for equipment purchases each year. The third incentive, available to both the EZs and the ECs, is the Enterprise Zone Facility Bond, which provides up to $3 million in tax-exempt bond financing to qualified businesses for buildings or equipment. Using the data and assumptions from the Internal Revenue Service, we estimate that the cost of the EZ/EC tax incentives in rural areas will be about $428 million over the 10-year period. The EZs’ employment credit will account for $406.5 million of that total; the facility bonds and the expensing allowances will make up the remainder at $4.3 million and $17.2 million, respectively. USDA, HHS, and 13 other federal agencies have agreed to give special consideration to eligible EZ/EC applicants by giving them preferential treatment for funds from the agencies’ existing funding sources over the life of the program. 
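The cost figures in the paragraphs above can be checked by simple addition (all figures in millions of dollars, taken from this report; the $600 million USDA amount is the 10-year extrapolation described in the text):

```python
# Ten-year cost of the three EZ/EC tax incentives ($ millions, from the report)
employment_credit = 406.5    # Empowerment Zone Employment Credit (EZs only)
expensing_allowance = 17.2   # Empowerment Zone Expensing Allowance (EZs only)
facility_bonds = 4.3         # Enterprise Zone Facility Bond (EZs and ECs)
tax_incentives = employment_credit + expensing_allowance + facility_bonds  # $428 million

# Estimated 10-year federal investment in the rural EZ/EC program ($ millions)
ssbg_funds = 208.0           # EZ/EC SSBG allocation
usda_programs = 600.0        # USDA loan and grant programs (extrapolated)
total_federal = ssbg_funds + tax_incentives + usda_programs  # exceeds $1 billion
```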
Most federal agencies had not estimated the amount of support they expect to invest in the rural EZs and ECs over the 10-year life of the program. USDA, however, indicated that it alone intends to provide about $246 million to rural EZs and ECs over the first 4 years through existing funding sources such as its Rural Business Enterprise Grant program and the Water and Waste Disposal Loan and Grant programs. If this funding level is maintained over the 10-year life of the program, an assumption that USDA officials consider a reasonable expectation, USDA will provide about $600 million to EZ/EC communities. USDA officials noted that these funds, as well as those from the other agencies that have pledged to provide special consideration to EZ/ECs, represent existing appropriations that would be expended—not new moneys. In addition to the funds provided by federal agencies, the rural EZs and ECs are expected to obtain assistance from state, local, and private sources. Some EZs and ECs are using the EZ/EC SSBG funds as seed money to attract even larger amounts from nonfederal sources, such as foundations. USDA provided data showing that for the 3 EZs and 25 of the ECs, the communities were receiving more than one dollar from their state and local governments and from private and nonprofit organizations for every dollar of EZ/EC SSBG funds received. To the extent that the loans are repaid and that new jobs result in increased tax revenues and reduced welfare payments, the federal investment in the rural EZ/EC program will be offset. The 33 EZs and ECs have established the structures and procedures needed to implement their strategic plans. Nevertheless, the boards of directors for two of the ECs we visited were experiencing problems that could hinder their progress toward completing their benchmark projects. Overall, progress on these projects has varied widely. 
According to USDA officials, all communities have taken the initial necessary actions to manage and begin implementing their strategic plans, such as establishing a board structure and basic operating principles. These actions had to be formally agreed upon by the community, the state, and USDA in a memorandum of agreement. In order to complete a memorandum of agreement, the communities had to, among other things, establish their benchmarks and develop a budget for the first 2 years of implementation; create the bylaws and/or articles of incorporation for the group, known as the lead entity, that will manage the EZ or EC program; and establish the EZ’s or EC’s board structure. In addition, USDA reviewed the documents prepared by the lead entities to ensure that they had a policy to prevent conflicts of interest and strategies for ensuring broad participation within the community. We visited all three EZs and found that they were generally well-organized to manage the implementation of their strategic plans. For example, while the geographical boundaries of all of the EZs cut across several local government boundaries, such as county lines, they had all developed mechanisms for overcoming the potential problems in having the EZ work with more than one political entity. One EZ that spanned parts of four counties created four subzone boards to overcome the political divisions inherent in its organization. These subzones consider the communities’ proposals for implementing the benchmark projects that originate in their area. The proposals approved at the subzone level are then considered by the EZ’s full board. As of June 1996, nearly 1 year after the memorandum of agreement, the EZ reported some progress toward 45 of the 49 projects serving the subzones. Some of these projects served several subzones, while others served only one. Although the EZs appear to be well organized, two of the ECs that we visited were experiencing problems. 
For example, at one EC we visited, the board members were in such disagreement with the lead entity over the control of the EC funds that little business has been conducted, and the program has not been moving forward. At another EC, the state agency found that the board members were, among other things, submitting applications for projects that would benefit them financially. USDA requires EZs and ECs to report periodically on the progress they are making toward implementing the benchmark projects. These projects include such things as constructing child care facilities, initiating job training programs, beginning 911 emergency response services, and improving wastewater systems. While USDA has not received complete progress reports from all communities, the progress made by those that have reported varied widely. USDA had sufficient centralized information on 14 communities for us to determine whether (1) progress had been made toward implementing the benchmark projects scheduled to start before December 1996 and (2) the projects that were scheduled to be completed prior to December 1996 had in fact been completed. Progress, by these measures, varied widely among the 14 communities we examined. For example, one community reported that it had made at least some progress toward implementing all of the benchmark projects scheduled to start prior to December 1996 and that one project had been completed. On the other hand, two communities reported that either no progress had been made on projects or that they had not finished any of the projects scheduled for completion prior to December 1996. Overall, 8 of the 14 communities reported that they had not started or completed at least 50 percent of their benchmark projects on time. Appendix I presents information on selected benchmark projects at the eight communities we visited. The rural EZs and ECs have experienced difficulties that have slowed their initial efforts, continue to impede their progress, or both. 
The difficulties were the short time frame allowed for applying to the program and the misinformation provided by officials at USDA headquarters about the program’s basic operations. While these difficulties have been or are in the process of being resolved, two other issues continue to be of concern. These issues are a lack of clarity about which standards the communities should follow for construction projects when using EZ/EC SSBG funds and the disparity between HHS’ verbal guidance and written guidance to the states on their responsibilities for releasing the EZ/EC SSBG funds to the communities for the EZ/EC program. Officials at each of the rural EZs and ECs we visited commented that the period for preparing an application for the EZ or EC designation was too short for the amount of work required. The communities applying for the program had 5-1/2 months after the President announced the program to submit an application. During that time, they had to achieve grass-roots involvement, gain consensus on the needs and vision of the community, elect a board of directors, produce a strategic plan, and prepare to begin operating. These tasks were particularly difficult to carry out in rural areas that often (1) did not have organized coalitions or the expertise available to articulate a vision and develop a complex strategic plan and (2) are spread out over a large geographical area, which makes putting together all parts of the application more difficult. While the communities met their application deadlines, some officials believe that a longer period to organize would have allowed them to better galvanize the public’s support and involvement and that, in some instances, they would have been better able to identify their needs and establish appropriate goals. USDA and HHS officials acknowledged that the communities faced short time frames. 
Some USDA officials stated that the short time frames required that the rural communities act quickly both to generate local involvement and to create the vision and strategic plan required to meet the application deadline. They noted that the federal agencies involved faced organizational pressures as well. The EZ/EC program’s timetable required the federal agencies to develop their coordination strategy, perform detailed planning, hire and train staff, and begin operating the program within the 16 months between the passage of the legislation and the designation of the EZs and the ECs. HHS and USDA officials generally agreed that, should a second round of EZs and ECs be authorized, it may be beneficial to allow the communities a somewhat longer time to apply in order to facilitate broader public involvement and a fuller consideration of the vision and the steps needed to accomplish it. Furthermore, some officials said that, if a second round is authorized, the federal government may need to provide more guidance on how to prepare the application documents to ensure a somewhat greater uniformity than they had experienced in the current program. At seven of the eight rural EZs and ECs we visited, officials noted that erroneous information, provided primarily by officials from USDA headquarters at meetings around the nation, caused misunderstandings about the operations of the EZ/EC program. At some of the meetings, the federal headquarters officials said that the EZs and ECs would receive the EZ/EC SSBG funds directly. Two of the EZs expected to receive the total amount of the EZ/EC SSBG funds—$40 million—in two consecutive annual payments, while some ECs believed that they would receive their total payment of about $2.9 million shortly after they were selected. In fact, as we discussed earlier, the communities are receiving their funding incrementally through the state agency as needed to pay for benchmark activities. 
Furthermore, some communities were told, incorrectly, that they did not have to get approval from any federal or state entity to use the funds for projects that were consistent with the strategic plan. The incorrect information provided by USDA officials caused difficulties for several state agencies and rural communities. For example, one EC had to revise its plan when it learned how the funds were actually to be distributed. The community had planned to obtain the lump-sum payment, put it into an interest-bearing account, and use the interest, which would have been considerable, to fund some part of certain projects. Since no lump sum was made available, the EC revamped its plan to obtain alternative sources of funding for some of its projects. Several of the EZs and ECs that we visited have encountered some difficulty in sorting out which federal standards apply to certain types of projects financed with EZ/EC SSBG funds. Those funds can be used for construction projects, such as water and sewer proposals, if the projects are related to one of the program’s goals, such as providing training to disadvantaged youth. However, the act did not specify any standards for these new allowable uses. As a result, the communities have been deciding for themselves which construction standards they will follow. The communities have taken different approaches to address this difficulty. For example, officials at one EZ seeking to build a water system told us that they were unable to get guidance from HHS and decided to follow the environmental regulations that they considered most appropriate—those governing the use of the Department of Housing and Urban Development’s Community Development Block Grant program—for that project and for any other project that might involve environmental issues. Faced with a similar dilemma, another EZ took a different approach, deciding to follow the environmental regulations associated with the primary funding source for a given project. 
Some rural EZ officials seeking clarification on this issue contacted HHS, which oversees the EZ/EC SSBG funds. According to these officials, HHS did not indicate what construction standards should be used. Consequently, the community officials have used their best judgment on how to proceed with specific projects and activities. Some EZ officials added that they are concerned that they may be legally liable if they choose to follow an incorrect standard and may have to replace such things as improperly sized water or sewer pipes, thereby incurring considerable costs and causing disruption. According to the HHS regulations and the Terms and Conditions of the EZ/EC program, the financial standards that the states are to apply in administering the EZ/EC SSBG funds are the standards that they use for expending their own state funds. However, officials in three of the states we visited told us that HHS program officials had verbally appealed to them to be flexible in applying their standards. This conflicting guidance has led to disagreements between some of the state agencies and the EZs and ECs over who has oversight responsibility. For example, when one EC requested a drawdown of funds for expenses that included liquor, the state agency disallowed payment for the liquor. The state agency argued that Office of Management and Budget Circular A-87, the standard adopted by the state, did not allow expenditures of federal funds for liquor. State agency officials told us that HHS verbally asked the state agency to be flexible and allow the expenditure and advised them not to worry about having to repay the expenditure at a later date. The state agency officials told us that they wanted to cooperate with HHS in the EZ/EC program, but they also wanted the state to comply with its own regulations, including Circular A-87. The state ultimately disallowed the expenditure. 
In another state, the EZ submitted 22 project proposals to the state agency for its review prior to a formal request to draw down funds. According to state agency officials, the proposals did not meet the state’s fiscal standards in that most of the proposals had budgets that were inconsistent with reasonable and prudent business practices. These budgets reportedly included such items as salaries and fringe benefits that were above the industry’s average. State officials told us that HHS had appealed to them on numerous occasions to be more flexible in their reviews of the EZ’s proposals. As of February 3, 1997, the state had approved no funds for these 22 proposals. Under the EZ/EC program, USDA’s Office of Community Development is responsible for overseeing the participating rural EZs and ECs. It is to carry out its responsibilities through site visits by USDA state coordinators and USDA headquarters’ reviews of the progress reports periodically submitted by the EZs and ECs. However, USDA cannot adequately fulfill its oversight responsibilities because it has not received complete progress reports from all of the USDA state coordinators or the EZs and ECs. USDA’s EZ/EC state coordinators do not provide systematic reporting on the progress of rural EZs and ECs. Among other things, the state coordinators are responsible for reviewing all benchmark changes to ensure the communities’ participation and conformance with the strategic plan. They are also involved with the initial approval of these changes. Most of these coordinators, who were chosen from the existing staff at the USDA state offices, have had little experience in overseeing the broad range of a community’s economic and social development projects and have not received training in how to monitor and report on the communities’ progress. 
USDA officials agreed that their EZ/EC state coordinators needed training but told us that funding constraints prevented them from developing and offering oversight training. Oversight is further hampered because the EZs and ECs are not consistently reporting their progress to USDA. USDA’s regulations require the EZs and ECs to report their progress at least annually. However, as noted earlier, the information provided is inadequate. Only 14 of the 33 communities had provided systematic information on their progress as of January 1, 1997. As a new approach to providing development assistance to rural areas, the EZ/EC program has faced a number of problems, several of which were associated with the program’s start-up and are no longer of immediate concern. However, some issues related to the guidance for the program continue to cause confusion among the program’s participants and could hamper the program’s progress. These issues include (1) the absence of written guidance defining the standards to be followed when using EZ/EC SSBG funds for construction-related projects and (2) conflicting guidance about the fiduciary responsibilities that the participating states should exercise for ensuring that the EZ/EC SSBG funds are spent in accordance with the appropriate financial standards. Additionally, in view of the significant level of federal funds supporting rural EZs and ECs, it is important that USDA have a sufficient capability to oversee the progress that these communities are making in implementing the program. USDA’s current oversight system, however, provides only piecemeal information on the EZs’ and ECs’ progress. As a result, USDA lacks the systematic information necessary for overseeing the program, including identifying problems and helping the communities to develop solutions. 
To reduce confusion about the program’s guidance on the uses of and financial controls over the EZ/EC SSBG funds, the Secretary of Health and Human Services should direct the Assistant Secretary for Planning and Evaluation to (1) clarify which construction-related standards the EZs and ECs should follow in using the EZ/EC SSBG funds and (2) eliminate the conflicts between the Department’s verbal and written guidance on the states’ fiduciary responsibility for the EZ/EC SSBG funds. To improve USDA’s oversight of the EZ/EC program, the Secretary of Agriculture should instruct the Director of the Office of Community Development to upgrade the Office’s monitoring system so that it can routinely provide the necessary information for assessing the progress of the EZs and ECs in implementing the program. This action could be accomplished by more strictly enforcing the EZs’ and ECs’ current self-reporting requirements and by developing more systematic reporting requirements for USDA’s EZ/EC state coordinators. We provided copies of the draft report for review and comment to USDA and HHS. These agencies’ written comments and our responses appear in appendixes III and IV, respectively. In commenting on the draft, USDA noted that it concurred with our conclusions and was implementing changes in response to our recommendation. USDA also made a number of comments about the difficulties of beginning a new kind of program, and we have revised our report, where appropriate, to reflect these comments. In particular, USDA concurred with our finding that the time frames for communities to apply for EZ/EC status were short and that should a second round of EZs and ECs be authorized, the Department would expect to be more fully staffed and able to better assist the applicants, both through direct guidance and with a more structured application format. 
USDA also made a number of comments about our analysis of the federal investment in the EZ/EC program and provided additional information about the Department’s future anticipated financial support of rural EZs and ECs. We have included this additional information and revised our estimates in consultation with USDA program officials. In their comments on the draft report, HHS officials stated that they will work to clarify several issues raised in the report, including the applicability of construction standards and federal fiscal standards to the program. HHS officials also suggested a number of changes to the report that would, among other things, clarify local, state, and federal roles in the EZ/EC program. Furthermore, these officials emphasized the need for flexibility in administering the program so that it can achieve its full potential. We incorporated these comments where appropriate. We also sent the detailed information on each EZ and EC we visited (as presented in app. I) to the cognizant local officials for their review and comment. We made several technical changes in response to the comments we received. In addition, we sent the sections of the report describing the problems resulting from the conflict between HHS’ written and verbal guidance on financial standards to the cognizant state government officials for review and comment; these officials concurred in our presentation of the issues discussed in the report. We conducted our review from June 1996 through February 1997 in accordance with generally accepted government auditing standards. Our scope and methodology are discussed in more detail in appendix II. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 15 days from the date of this letter. 
At that time, we will send copies of this report to the House Committee on Agriculture, other interested congressional committees, the Secretaries of Agriculture and Health and Human Services, the Director of the Office of Management and Budget, and other interested parties. We will also make copies available upon request. If you have any questions about this report, please call me at (202) 512-5138. Major contributors to this report are listed in appendix V. The Kentucky Highlands Investment Corporation is the lead entity for the Empowerment Zone (EZ); this Corporation was created in 1968 to foster economic development in the area. A steering committee consisting of representatives from each of the three subzone areas was established; an effort was made to ensure that the committee was balanced in terms of geographical representation, income, and expertise. The steering committee also includes representatives from the Board of Directors of the Kentucky Highlands Investment Corporation, local and state governments, economic development agencies, universities, and local residents. The steering committee has the overall responsibility for implementing the strategic plan and for providing guidance throughout the implementation of the plan. The Empowerment Zone/Enterprise Community Social Services Block Grant (EZ/EC SSBG) funds for the EZ pass through the Kentucky Department of Financial Incentives to the Kentucky Highlands Empowerment Zone for its use. As of December 31, 1996, the EZ had obtained about $7.4 million of its allocated EZ/EC SSBG funds for use in its projects. The EZ’s strategic plan sets forth a four-pronged approach to revitalizing the communities included within the Zone: developing economic opportunity; promoting tourism; building infrastructure; and enhancing the quality of life. The EZ has 24 benchmarks. 
The projects include the following: establishing a development venture capital fund to invest in businesses located within the EZ; starting 150 home-based businesses in each of the three subzone areas, including training the home keyers to perform data entry work and assisting them in purchasing the computer equipment; building and equipping four rural fire stations within the EZ; and expanding a county library and increasing its telecommunications capacity. The Mid-Delta Empowerment Zone Alliance originated in January 1994 as a collaborative arrangement between The Delta Foundation and The Delta Council, representing poor minority and more affluent white interests, respectively. These two organizations came together to develop a strategic plan to benefit all citizens of the area, to apply for the EZ/EC program, and to establish the Alliance. The Mid-Delta Empowerment Zone Alliance Commission was formed in April 1994; it includes representatives from—among other elements—businesses, churches, colleges, community groups, low-income groups, and public schools from all areas of the zone. The Commission reviews and votes on all proposals for projects to address the benchmark projects on the strategic plan. The EZ/EC SSBG funds pass through the Mississippi Department of Human Services to the Mid-Delta Empowerment Zone Alliance for its use. As of December 31, 1996, the EZ had obtained about $221,000 of its allocated EZ/EC SSBG funds for use in its projects. The EZ’s strategic plan focuses on three themes: building community in the Mississippi Delta, increasing economic opportunity in Mississippi Delta communities, and sustaining community and economic development in Mississippi Delta communities. The EZ has 41 benchmarks. 
Specific projects for the EZ include the following: expanding and strengthening businesses and industries by providing assistance in accessing capital, business and technical assistance, and marketing; improving the quality and accessibility of health care by seeking to increase the number of doctors serving the Mid-Delta region; improving race relations by creating a race relations council; and promoting community beautification through the creation of a recycling program. The lead entity is the Rio Grande Valley Empowerment Zone Corporation, which was created to manage the EZ and implement the strategic plan; the corporation is headed by a chief executive officer who manages a small staff. The Corporation reports to a 19-member board of directors, the majority of whom were appointed by the county judges of the four counties. The board members include two directors from each of the four counties’ subzone advisory committees. Each of the four counties appoints a subzone advisory committee for advocating matters within the subzone; two of the four subzones have also allocated funds to administration to employ a full-time professional subzone manager to oversee the day-to-day operations within the subzone. The EZ/EC SSBG funds pass through the Texas Health and Human Services Commission to the Rio Grande Valley Empowerment Zone for its use. As of December 31, 1996, the EZ had obtained about $4.9 million of the EZ/EC SSBG funds for use in its projects. The EZ’s strategic plan focuses on 10 objectives, including (1) improving the quality of life to discourage outmigration from the area, (2) providing programs for literacy and living skills, (3) initiating regional business development, and (4) improving the availability of housing by providing new housing and increasing access to the existing housing. The EZ has 49 benchmarks. 
Specific projects include the following: establishing a small business incubator; developing a historical district for small businesses; implementing a flood control project; and establishing a community elder care/youth recreation center. As of January 1997, the EC was in the process of reincorporating and negotiating revisions to the memorandum of agreement. At the time of our visit, the EC was governed by a board drawn from the 10 Census tracts comprising the EC. In addition, each of the 10 subzone areas had established local EC boards to oversee projects within the subzone. The Central Savannah River Area Regional Development Center was the lead entity for the EC. The EZ/EC SSBG funds pass through the Georgia Department of Community Affairs to the Central Savannah River Area Enterprise Community for its use. As of December 31, 1996, the EC had obtained about $429,000 of the EZ/EC SSBG funds for use in its projects. The EC’s strategic plan emphasizes seven goals: (1) agriculture, business, and economic development; (2) human development, health, and public safety; (3) education; (4) housing; (5) arts, recreation, and cultural tourism; (6) public infrastructure; and (7) community organizing, coalitions, and partnerships. The EC has 12 benchmarks. Specific projects include the following: operating general education degree classes for the EC’s residents; establishing family service centers providing such services as youth recreation and leadership classes and adult literacy classes; and training community outreach organizers to foster community involvement. The Crisp/Dooly Partnership, Inc., is the lead entity and is responsible for implementing the enterprise community’s strategic plan. 
The Crisp/Dooly Partnership reports to a 32-member board consisting of representatives of Crisp and Dooly counties, 16 from each county, who are drawn from such organizations as economic development councils, chambers of commerce, and boards of education as well as from law enforcement groups, churches, and low-income residents. The EZ/EC SSBG funds pass through the Georgia Department of Community Affairs to the Crisp/Dooly Joint Development Authority, which is fiscally responsible for the EZ/EC SSBG funds. The Crisp/Dooly Joint Development Authority consists of a separate board of eight directors who are appointed by the Crisp and Dooly county governments and who, in turn, release the funds to the Crisp/Dooly Partnership, Inc., for the implementation of the strategic plan. As of December 31, 1996, the EC had obtained about $215,000 of its EZ/EC SSBG funds for use in its projects. The EC’s strategic plan emphasizes five goals: (1) social and economic empowerment for developing innovative community services, (2) development of Crisp/Dooly counties’ economic partnership to coordinate economic development initiatives, (3) human and community development through improved community relations, (4) improving education, and (5) improving the quality of life and of the environment. The EC has 37 benchmarks. Specific projects include the following: building a postsecondary vocational-technical center; establishing an adult literacy program; and developing a rural transportation system. The lead entity for the EC is the City of Lock Haven City Council; a full-time Federal Enterprise Coordinator was hired by the city to manage the day-to-day operations of the EC. A Federal Enterprise Committee oversees the EC; several subcommittees, such as the economic development subcommittee and the health and human services subcommittee, report to the EC Committee. 
The EZ/EC SSBG funds pass through the Pennsylvania Department of Public Welfare to the City of Lock Haven Enterprise Community for its use. As of December 31, 1996, the EC had obtained about $456,000 of the EZ/EC SSBG funds for use in its projects. The EC’s strategic plan states that its main goal is to promote economic development through job creation and retention. The EC has 36 benchmarks. Specific projects include the following: establishing a micro-revolving loan fund to assist small and start-up businesses; establishing a partnership among the city, Clinton County, local banks and lending institutions, local housing nonprofit organizations, and housing developers to expand the supply of affordable housing for elderly and low-income residents; and assisting in the renovation of a job training facility and in funding job training workshops in such subjects as computer skills. When the EZ/EC program was announced in 1994, a group of 22 citizens from this very poor area of Mississippi came together to discuss applying for designation as an EZ or EC. With the assistance of an established local Planning and Development District, they applied for and were subsequently designated as an EC. In November 1995, this group was incorporated as the North Delta Mississippi Enterprise Community Development Corporation. A 22-member board, consisting of the citizens who had been involved in the preparation of the strategic plan, was installed and authorized to determine the major personnel, fiscal, and program policies and the overall program plans and priorities and to give final approval to all corporate initiatives. The EZ/EC SSBG funds pass through the Mississippi Department of Human Services to the North Delta Enterprise Community for its use. As of December 31, 1996, the EC had obtained about $25,000 of the EZ/EC SSBG funds for use in its projects. 
The EC’s strategic plan focuses on four areas: (1) economic and community development, (2) empowerment through the ability of the EC to solve its own problems and create its own opportunities, (3) human services and physical development, and (4) natural resources and environmental concerns. The EC has 16 benchmarks. Specific projects include the following: developing parks, recommending state legislation for a tax-incentive program, providing small business training, and increasing the availability of safe and affordable housing. The City Council of Watsonville is the lead entity for the EC; the deputy city manager has the day-to-day responsibilities for the EC. An advisory steering committee represents the residents of the EC; however, the city council has decision-making responsibility. The EZ/EC SSBG funds pass through the California Department of Social Services to the City of Watsonville Enterprise Community for its use. As of December 31, 1996, the EC had obtained about $250,000 of the EZ/EC SSBG funds for use in its projects. The main emphasis of the EC’s strategic plan is youth development. The EC has 15 benchmarks. Specific projects include the following: establishing youth job training, including teaching basic job-seeking skills; expanding and renovating recreation facilities for at-risk youth in several impoverished parts of the city; building and operating a small business retail incubator in conjunction with the new transit center; and improving the downtown area by refurbishing retail businesses’ facades. To estimate the cost of the rural EZ/EC program, we reviewed information on the resources available to the EZ/EC program from the U.S. Department of Agriculture (USDA), the Department of Health and Human Services (HHS), and 13 other federal agencies. We also obtained tax-incentive information from the Internal Revenue Service and spoke with officials in the Office of Tax Assessment. 
To review the status of the EZ/EC program’s implementation and identify the difficulties that communities have encountered, we talked with officials and obtained information at all 3 rural EZs, 5 of the 30 rural ECs, six states, and the two principal agencies, USDA and HHS. We selected ECs that are located in the same state as the EZs and added ECs from three other states to provide geographic distribution. During our visits to these communities, we visited selected projects to discuss the EZ/EC program with the individuals most directly involved at the local level. We also reviewed the strategic plans, benchmarks, status reports, and funding documents for the eight EZ/ECs visited, as well as information maintained by USDA on the remaining 25 rural ECs. We also examined the progress reports available at USDA’s headquarters. To evaluate USDA’s oversight of the EZ/EC program, we reviewed the applicable regulations, discussed the roles of the USDA state coordinators with USDA headquarters officials, examined USDA’s central files on progress reports, and requested reports from USDA’s State coordinators. We also interviewed the USDA state coordinators in the six states we visited. We performed our work in accordance with generally accepted government auditing standards from June 1996 through February 1997. The following are GAO’s comments on the U.S. Department of Agriculture’s letter dated March 10, 1997. 1. We agree with USDA that the EZ/EC program is different from traditional federal programs and that the program challenges communities to engage in activities with which they have little experience. We have modified our report accordingly. 2. While we recognize that this program required a steep learning curve for all participants, we still believe that USDA has had a sufficient amount of time—42 months after the passage of the authorizing legislation—to have put into place systematic oversight of the program. 3. 
We believe the commitment and actions outlined by USDA to address the recommendations in our report go a long way toward correcting the problems we noted concerning the need to upgrade the Office of Community Development’s monitoring system. 4. We have modified our report to include USDA’s concurrence that all parties agreed that the initial time frames permitting communities to apply for EZ or EC status were inadequate. 5. We have deleted the references to supplantation in the report. A discussion of supplantation was not needed to illustrate our concern about the possible conflicts between verbal instructions and written regulations on administering EZ/EC SSBG funds. 6. We have added statements about USDA headquarters staff members’ visits to rural EZs and ECs and about the responsibilities of the state coordinators. 7. We agree with USDA that situations such as the one portrayed in our report will arise. This is precisely the reason why we believe that USDA needs a strong oversight capability, i.e., one that will identify problems early on and help focus the assistance, training, and consultation needed to help the EZs and ECs resolve the problems. 8. We have clarified the language to recognize that the report’s estimates of total EZ/EC funding include funds that would not represent additional appropriations but, rather, a special targeting of appropriations that would have been made in any case in order to support rural communities under the various programs of the participating agencies. We have also added language to the report noting that there may be some offsetting benefits to the government, such as increased tax revenues, resulting from the creation of new jobs. Finally, USDA’s statement that funds are approximately $30 million less than GAO’s estimate was in response to a prior GAO estimate. USDA agrees with the estimates contained in this report. 
The following are GAO’s comments on the Department of Health and Human Services’ letter dated March 17, 1997. 1. We have revised the report to consistently use the term “special consideration.” 2. We have revised the report to clearly note that EZ/EC SSBG funds are to be spent in accordance with the communities’ strategic plans. 3. We have revised the report to highlight the authority that local governance structures were given under the amended SSBG program. 4. We do not believe that the additional information on HHS’ guidance was necessary to understand the report’s findings. Therefore, we did not include it as an appendix. 5. We have added more detail on USDA’s role in the EZ/EC program. 6. We have revised the report to recognize that HHS officials believe that the short application period created planning and organizational difficulties. 7. We have revised the report as suggested to note that the 1993 act provided the authority to use the EZ/EC SSBG funds for a wider variety of purposes than was previously permitted. 8. Our report is not intended to question the ability of local officials to make decisions. Rather, it merely points out that local officials told us that they need additional guidance on what standards to apply to the EZ/EC SSBG funds. 9. We have deleted the references to supplantation in the report. A discussion of supplantation was not needed to illustrate our concern about the possible conflicts between verbal instructions and written regulations on administering EZ/EC SSBG funds. Robert C. Summers, Assistant Director John K. Boyle, Project Leader Clifford J. Diehl Carol Herrnstadt Shulman Patricia A. Yorkman 
Pursuant to a congressional request, GAO reviewed selected aspects of the Department of Agriculture's (USDA) rural Empowerment Zone/Enterprise Community (EZ/EC) Program, focusing on: (1) the federal funding levels of the rural EZ/EC program over the 10-year life of the program; (2) the status of the implementation of the program; (3) the difficulties that the communities have encountered in implementing their plans; and (4) USDA's oversight of the program. GAO noted that: (1) it estimates that federal funding for the rural EZ/EC program will total more than $1 billion over the 10-year life of the program; (2) this amount includes the $208 million in EZ/EC funds from the Social Services Block Grant (SSBG) program and an estimated $428 million from tax incentives; (3) estimates for direct funding from federal, state, and local programs as well as private sources are not generally available; (4) however, one federal agency, USDA, reports that it plans to provide about $246 million to the rural EZs and ECs over the first 4 years alone and that its funding for the 10-year life of the program could reasonably be expected to reach $600 million; (5) the status of the communities' implementation of the EZ/EC program varies; (6) all 33 rural EZs and ECs have established the basic organizational structures and procedures necessary to implement their strategic plans; (7) in terms of implementing the projects contained in these plans, such as day care services, emergency 911 services, and job training, some communities have made considerable progress and some have made very little; (8) the rural EZs and ECs have experienced a number of difficulties that have slowed their initial efforts, continue to impede their progress, or both; (9) these difficulties include the short time frames provided for applying to the program and the initial misinformation provided by officials at USDA headquarters about the program's basic operations; (10) while some of these difficulties have been 
or are in the process of being resolved, two issues continue to be of concern; (11) these issues are a lack of clarity about which federal regulations are applicable to the construction projects funded by EZ/EC Social Services Block Grants, and the conflict between the verbal guidance and the written guidance that the Department of Health and Human Services (HHS) has provided to the states on their responsibilities for ensuring that funds are reasonably and prudently spent; (12) under the EZ/EC program, USDA is responsible for overseeing the progress of the rural EZs and ECs and USDA is to accomplish this oversight through reviews of the periodic reports submitted by the EZs and ECs and by site visits conducted by USDA field personnel, known as EZ/EC state coordinators; (13) however, USDA cannot adequately fulfill its oversight responsibilities because the EZs, the ECs, and the EZ/EC state coordinators do not provide USDA with complete and systematic progress reports; and (14) consequently, USDA lacks the basic management information for identifying problem areas.
Most Navy cardholders properly used their travel cards and paid amounts owed to Bank of America in a timely manner. However, as shown in figure 1, the Navy’s average delinquency rate was nearly identical to the Army’s, which, as we have previously testified, is the highest delinquency rate in the government. The Navy’s quarterly delinquency rate fluctuated from 10 to 18 percent, and on average was about 6 percentage points higher than that of federal civilian agencies. As of March 31, 2002, over 8,400 Navy cardholders had $6 million in delinquent debt. We also found substantial charge-offs of Navy travel card accounts. Since the inception of the travel charge card task order between DOD and Bank of America on November 30, 1998, Bank of America has charged off over 13,800 Navy travel card accounts with $16.6 million of bad debt. Recent task order modifications allow Bank of America to institute a salary offset against DOD military personnel whose travel card accounts were previously charged off or are more than 120 days past due. As a result, as of July 31, 2002, Bank of America had recovered $5.2 million in Navy government travel card bad debts. The high level of delinquencies and charge-offs has also cost the Navy millions of dollars in lost rebates, higher fees, and substantial resources spent pursuing and collecting past due accounts. For example, we estimate that in fiscal year 2001, delinquencies and charge-offs cost the Navy $1.5 million in lost rebates, and will cost about $1.3 million in increased automated teller machine (ATM) fees annually. As shown in figure 2, the travel cardholder’s rank or grade (and associated pay) is a strong predictor of delinquency problems. We found that the Navy’s overall delinquency and charge-off problems are primarily associated with young, low- and mid-level enlisted military personnel with basic pay levels ranging from $12,000 to $27,000. 
According to Navy officials, low- and mid-level enlisted military personnel comprise the bulk of the operational forces and are generally young, often deployed, and have limited financial experience and resources. It is therefore not surprising to see a higher level of outstanding balances and delinquent amounts due for these personnel. Figure 2 also shows that, in contrast, the delinquency rate for civilians employed by the Navy is substantially lower. As of September 30, 2001, the delinquency rate of low- and mid-level enlisted personnel was almost 22 percent, compared to a Navy civilian rate of slightly more than 5 percent. This rate is comparable to the non-DOD civilian delinquency rate of 5 percent. The case study sites we audited exhibited this pattern. For example, at Camp Lejeune, a principal training location for Marine air and ground forces, over one-half of the cardholders are enlisted personnel. Representative of the Navy’s higher delinquency rate, Camp Lejeune’s quarterly delinquency rate for the 18-month period ending March 31, 2002, averaged over 15 percent and was close to 10 percent as of March 31, 2002. In contrast, at Puget Sound Navy Shipyard, where the mission is to repair and modernize Navy ships, civilian personnel earning more than $38,000 a year made up 84 percent of total government travel card holders and accounted for 86 percent of total fiscal year 2001 travel card transactions. This site’s delinquency rate had declined to below 5 percent as of March 31, 2002. In combination with these demographic factors, a weak overall control environment, flawed policies and procedures, and a lack of adherence to valid policies and procedures contributed to the significant delinquencies and charge-offs. Further discussion of these breakdowns is provided later in this testimony. Our work identified numerous instances of potentially fraudulent and abusive activity related to the travel card. 
During fiscal year 2001 and the first 6 months of fiscal year 2002, over 5,100 Navy employees wrote at least one nonsufficient funds (NSF), or “bounced,” check to Bank of America as payment for their travel card bills. Of these, over 250 wrote 3 or more NSF checks, a potentially fraudulent act. Appendix III provides a table summarizing 10 examples, along with more detailed descriptions of selected cases in which cardholders might have committed fraud by writing 3 or more NSF checks to Bank of America. These 10 accounts were subsequently charged off or placed in salary offset or voluntary fixed payment agreements with Bank of America. We also found that the government cards were used for numerous abusive transactions that were clearly not for the purpose of government travel. As discussed further in appendix I, we used data mining tools to identify transactions we believed to be potentially fraudulent or abusive based upon the nature, amount, merchant, and other identifying characteristics of the transaction. Through this procedure, we identified thousands of suspect transactions. Table 1 illustrates a few of the types of abusive transactions and the amounts charged to the government travel card in fiscal year 2001 and the first 6 months of fiscal year 2002 that were not for valid government travel. Government travel cards were used for purchases in categories as diverse as legalized prostitution services, jewelry, gentlemen’s clubs, gambling, cruises, and tickets to sporting and other events. The number of instances and the amounts shown include both cases in which the cardholders paid the bills and instances in which they did not pay the bills. We found that 50 cardholders used their government travel card to purchase over $13,000 in prostitution services from two legalized brothels in Nevada. 
Charges were processed by these establishments’ merchant bank, and authorized by Bank of America, in part because a control afforded by the merchant category code (MCC), which identifies the nature of the transactions and is used by DOD and other agencies to block improper purchases, was circumvented by the establishments. In these cases, the transactions were coded to appear as restaurant and dining or bar charges. For example, the merchant James Fine Dining, which actually operates as a brothel known as Salt Wells Villa, characterizes its services as restaurant charges, which are allowable and not blocked by the MCC control. According to one assistant manager at the establishment, this is done to protect the confidentiality of its customers. Additionally, the account balances for 11 of the 50 cardholders purchasing services from these establishments were later charged off or put into salary offset. For example, one sailor, an E-2 seaman apprentice, charged over $2,200 at this brothel during a 30-day period. The sailor separated from the Navy, and his account balance of more than $3,600 was eventually charged off. We also found instances of abusive travel card activity where Navy cardholders used their cards at establishments such as gentlemen’s clubs, which provide adult entertainment. Further, these clubs were used to convert the travel card to cash by supplying cardholders with actual cash or “club cash” for a 10 percent fee. For example, we found that an E-5 second class petty officer circumvented ATM cash withdrawal limits by charging, in a single transaction, $2,420 to the government travel card and receiving $2,200 in cash. Subsequently, the club received payment from Bank of America for a $2,420 restaurant charge. Another cardholder, an E- 7 chief petty officer, obtained more than $7,000 in cash from these establishments. For fiscal year 2001 and through March 2002, 137 Navy cardholders made charges totaling almost $29,000 at these establishments. 
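The MCC control described above is essentially a lookup against a list of blocked category codes, which is why a miscoded charge defeats it. The following is a minimal sketch of that screening logic; the MCC numbers used (7273 for escort services, 7995 for betting, 5812 for restaurants) are standard industry codes, but the record layout, function names, and the actual DOD/Bank of America block lists are assumptions for illustration:

```python
# Sketch of MCC-based transaction screening (hypothetical data model;
# the actual DOD/Bank of America block lists are not public).
from dataclasses import dataclass

# Hypothetical blocked merchant category codes: 7273 (escort services),
# 7995 (betting/gambling).
BLOCKED_MCCS = {7273, 7995}
RESTAURANT_MCC = 5812  # eating places -- an allowable category

@dataclass
class Transaction:
    merchant: str
    mcc: int
    amount: float

def is_blocked(txn: Transaction) -> bool:
    """Return True if the MCC control would reject this transaction."""
    return txn.mcc in BLOCKED_MCCS

# A charge coded honestly under a blocked MCC is rejected...
honest = Transaction("Salt Wells Villa", 7273, 2200.00)
assert is_blocked(honest)

# ...but the same merchant coded as a restaurant slips through --
# the circumvention described in the testimony.
miscoded = Transaction("James Fine Dining", RESTAURANT_MCC, 2200.00)
assert not is_blocked(miscoded)
```

Because the check keys only on the code the merchant's bank submits, the control is no stronger than the accuracy of merchant coding, which is why GAO's data mining looked beyond MCCs to merchant names and transaction patterns.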
These transactions represented abusive use of the travel cards that were clearly unrelated to official government travel. There should be no misunderstanding by Navy personnel that personal use of the card is not permitted. In fact, the standard government travel card used by most Navy personnel is clearly marked “For Official Government Travel Only” on the face of the card. Additionally, upon receipt of their travel cards, all Navy cardholders are required to sign a statement of understanding that the card is to be used only for authorized official government travel expenses. However, as part of our statistical sampling results at the three sites we audited, we estimated that personal use of the government travel card ranged from almost 7 percent of fiscal year 2001 transactions at one site to over 26 percent at another site. As shown in appendix V, cardholders who abused the card but paid the bill also used the government travel cards for the same transaction types discussed in table 1. Personal use of the card also increases the risk of charge-offs related to abusive purchases, which are costly to the government and the taxpayer. Our work found that charged-off accounts included both those of (1) cardholders who were reimbursed by the Navy for official travel expenses but failed to pay Bank of America for the related charges, thus pocketing the reimbursement, and (2) those who used their travel cards for personal purchases for which they did not pay Bank of America. Appendix IV provides a summary table and supporting narrative describing examples of abusive travel card activity where the account was charged off or placed in salary offset or voluntary fixed payment agreements with Bank of America. Furthermore, as detailed by the 10 examples in appendix V, we also found instances in which cardholders used their travel cards for personal purposes, but paid their travel card bills when they became due. 
For example, an E-5 second class petty officer reservist, whose civilian job was with the U.S. Postal Service, admitted making phony charges of over $7,200 to operate his own limousine service. In these transactions, the sailor used the travel card to pay for bogus services from his own limousine company during the first few days of the card statement cycle. By the second day after the charges were posted, Bank of America would have deposited funds—available for the business’ immediate use—into the limousine business’ bank account. Then, just before the travel card bill became due, the limousine business credited the charge back to the sailor’s government travel card and repaid the funds to Bank of America. This series of transactions had no impact on the travel card balance, yet allowed the business to have an interest-free loan for a period. This pattern was continued over several account cycles. Navy officials were unaware of these transactions until we brought them to their attention and are currently considering what, if any, action should be taken against the cardholder. We did not always find documented evidence of disciplinary actions taken by Navy commanders and supervisors against cardholders who wrote NSF checks or had their accounts charged off or placed in salary offset. Of the 57 cardholders fitting these categories that we selected through data mining, we did not find any documented evidence that 37 had been disciplined. For example, a lieutenant commander (O-4) with the Naval Air Reserve used his travel card for personal purchases in California and frequent personal trips to Mexico. The individual did not pay his account when due and was placed in salary offset in October 2001. 
Although the agency program coordinator (APC) responsible for program oversight had apprised management of this officer’s abuse of the travel card, and had initiated actions to take away the cardholder’s security clearance, management had not taken any administrative action against this cardholder. In addition, of the 10 individuals who abused the card but paid their bills, only 1 was disciplined. Appendixes III, IV, and V provide further details of the extent of disciplinary actions taken against some of the cardholders we examined. In addition, we found that 27 of these same 57 travel cardholders we examined whose accounts were charged off or placed in salary offset as of March 31, 2002, still had active secret or top-secret security clearances in August 2002. Some of the Navy personnel holding security clearances who have had difficulty paying their travel card bills may present security risks to the Navy. DOD rules provide that an individual’s finances are one of the factors to be considered in whether an individual should be entrusted with a security clearance. However, we found that Navy security officials were unaware of these financial issues and consequently could not consider their potential effect on whether these individuals should continue to receive a security clearance. We have referred cases identified from our audit to the U.S. Navy Central Adjudication Facility (commonly referred to as Navy CAF) for its continued investigation. For fiscal year 2001, we identified significant breakdowns in key internal controls over individually billed travel cards. The breakdowns stemmed from a weak overall control environment, a lack of focus on oversight and management of the travel card program, and a lack of adherence to valid policies and procedures. These breakdowns contributed to the significant delinquencies and charge-offs of Navy employee account balances and potentially fraudulent and abusive activity related to the travel card. 
In contrast, one Navy unit we audited with a low average delinquency rate (4 percent) attributed its relative success to constant monitoring of delinquencies and to some monitoring of inappropriate travel card use. We found that in fiscal year 2001, management at the three case study locations we audited focused primarily on reducing delinquencies. In general, management placed little emphasis on controls designed to prevent, or provide for early detection of, travel card misuse. In addition, we identified two key overall control environment weaknesses: (1) the lack of clear, sufficiently detailed Navy travel card policies and procedures and (2) limited internal travel card audit and program oversight. First, the units we audited used DOD’s travel management regulations (DOD Financial Management Regulation, volume 9, chapter 3) as the primary source of policy guidance for management of Navy’s travel card program. In many areas, the existing guidance was not sufficiently detailed to provide clear, consistent travel management procedures to be followed. Second, as recognized in the DOD Inspector General’s March 2002 summary report on the DOD travel card program, “[B]ecause of its dollar magnitude and mandated use, the DOD travel card program requires continued management emphasis, oversight, and improvement by the DOD. Independent internal audits should continue to be an integral component of management controls.” However, no internal review report had been issued since fiscal year 1999 concerning the Navy’s travel card program. We found that this overall weak control environment contributed to design flaws and weaknesses in a number of management control areas needed for an effective travel card program. For example, many problems we identified were the result of ineffective controls over issuance of travel cards. 
Although DOD’s policy allows an exemption from the requirement to use travel cards for certain groups or individuals with poor credit histories, we found that the Navy’s practice was to facilitate Bank of America issuing travel cards—with few credit restrictions—to all applicants regardless of whether they have a history of credit problems. For the cases we reviewed, we found a significant correlation between travel card fraud, abuse, and delinquencies and individuals with substantial credit history problems. The prior and current credit problems we identified for Navy travel card holders included charged-off credit cards, bankruptcies, judgments, accounts in collections, and repeated use of NSF checks. Also, a key element of internal control, which, if effectively implemented, may reduce the risk and occurrence of delinquent accounts, is frequent account monitoring by the APC. However, some APCs, who have the key responsibility for managing and overseeing travel card holders’ activities, were essentially set up to fail in their duties. Some were assigned APC responsibilities as collateral duties and given little time to perform these duties, while other full-time APCs had responsibilities for a large number of cardholders. When an APC is unable to focus on managing travel card usage because of the high number of cardholders or the extent of other duties, the rate of delinquency and potentially abusive and fraudulent transactions is adversely affected. For example, at Camp Lejeune, where the delinquency rate was over 15 percent, the six APCs we interviewed were given the role as “other duty as assigned,” with most spending less than 20 percent of their available time to perform their APC responsibilities. In addition, a lack of management focus and priority on ensuring proper training for APCs resulted in some APCs being unfamiliar with the capabilities of Bank of America’s credit card database that would help them to manage and oversee the travel card program. 
For example, one APC did not know that she could access reports that would help identify credit card misuse and thus enable the responsible supervisors or commanders to counsel cardholders before they became delinquency problems. With the large span of control, minimal time allotted to perform this duty, and lack of adequate training, we found that APCs generally were ineffective in carrying out their key travel card program management and oversight responsibilities. In contrast, a Navy unit we visited—Patuxent River—showed that constant monitoring of delinquency by a knowledgeable APC contributed to a lower delinquency rate. The APC at this unit had responsibility for approximately 1,200 to 1,500 active travelers monthly, but APC duties were her only responsibility. The APC informed us that she constantly monitored the government travel card program. For example, she reviewed delinquency reports several times a month to identify and promptly alert cardholders and supervisors about the status of delinquent accounts. She also told us that less frequently, but still on a monthly basis, she monitored transactions in the Bank of America database for improper and abusive uses of the card and sent out notices to the cardholders and the cardholders’ supervisors if such transactions were identified. She also emphasized the use of the split disbursement payment process (split disbursements) whenever possible. Consequently, the delinquency rate for this unit was consistently lower than the Navy-wide rate and the civilian agency rate. Another area of weakness in internal controls relates to the process for canceling or deactivating cards in cases of death, retirement, or separation from the service. These ineffective controls allowed continued use of the government travel card for personal purposes, which in some instances led to charge-offs, thereby contributing to increased costs to the government. 
For example, in one Navy unit, a cardholder died in October 1999. However, ineffective controls over the notification process resulted in the APC not being aware that this had occurred. Therefore, the APC did not take actions to cancel this individual’s government travel card account. Consequently, in October 2000, when the old card was about to expire, Bank of America mailed a new card to the address of record. When the card was returned with a forwarding address, the bank remailed the card and the personal identification number used to activate the card to the new address without performing other verification procedures. The card was activated in mid-December 2000, and within a month, 81 fraudulent transactions for hotel, food, and gas totaling about $3,600 were charged to the card. In January 2001, in the course of her monthly travel card monitoring, the APC noticed suspicious charges in the vicinity of the cardholder’s post-of-duty. The APC took immediate action to deactivate the card, thus preventing additional charges from occurring. Upon learning of the cardholder’s death from further discussion with the cardholder’s unit, the APC immediately reported the case to a Bank of America fraud investigator. Investigations revealed that a family member of the cardholder might have made these charges. No payment was ever made on this account, and the entire amount was subsequently charged off. We referred this case to the U.S. Secret Service Credit Card Task Force for further investigation and potential prosecution. In another case, a chief warrant officer (W-3) at Naval Air Systems Command Atlantic repeatedly used his travel card after his retirement on December 1, 2000. The cardholder currently works for a private company. Since his retirement, the cardholder has used the government travel card to make charges totaling $44,000 for hotels, car rentals, restaurants, and airline tickets. 
In a number of instances, the cardholder was able to obtain the government rate—which can be substantially lower than the commercial rate—for lodging in San Diego, Philadelphia, and Cincinnati. Because the Navy does not routinely monitor cardholders’ transaction reports for abusive activity and because this particular account was always paid in full, the Navy did not detect the abusive activity. Bank of America data showed that the cardholder’s account was still open in early September 2002 and thus available for further charges. In another instance, a mechanic trainee at the Puget Sound Naval Shipyard received a felony conviction for illegal possession of a firearm in October 2000 and was placed on indefinite suspension by his employer in November 2000. However, neither the security office, which took action against the employee, nor the office where the individual worked notified the APC to cancel or deactivate the cardholder’s government travel card account. Following his suspension, the cardholder used the government travel card to make numerous cash withdrawals and gas purchases totaling almost $4,700. The APC was not aware of these abusive charges until the monthly delinquency review identified the account as being delinquent. The account balance of $1,600 was subsequently charged off in January 2002. Although security officers at the Puget Sound Naval Shipyard referred the case to Navy CAF in October 2000, our work indicated that, as of August 2002, the suspended employee continued to maintain a secret clearance, despite the account charge-off and felony conviction. Table 2 summarizes our statistical tests of four key control activities related to basic travel transaction and voucher processing at three Navy locations. We concluded that the control was effective if the projected failure rate was from 0 to 5 percent. If the projected failure rate was from 6 to 10 percent, we concluded that the control was partially effective. 
We considered controls with projected failure rates greater than 10 percent to be ineffective. Although we found significant failure rates at all three case study sites for the requirement that vouchers be filed within 5 working days of travel completion, this did not have an impact on these units’ delinquency rates. However, we found substantial errors in travel voucher processing that resulted in both overpayment and underpayment of the amount that cardholders should have received for their official travel expenses. At times, these errors were substantial in comparison with the total voucher amounts. For example, we found data entry errors that resulted, in one case, in an overpayment of more than $1,700 to the traveler. In another case, failure to carefully scrutinize supporting documentation resulted in an overpayment to a traveler of more than $1,000 for cell phone calls, for which the traveler did not submit detailed documentation to support what were claimed to be calls made for business purposes. As a result of our work, the Navy unit has taken actions to recover these overpayments. DOD has taken a number of actions focused on reducing delinquencies. For example, the Department of the Navy had established a goal of a delinquency rate of no more than 4 percent. Beginning in November 2001, DOD implemented a system of wage and retirement payment offset for many employees. It also began encouraging the use of split disbursements—a payment process by which cardholders elect to have all or part of their reimbursements sent directly to Bank of America. This payment method is a standard practice of many private sector employers. Although split disbursements have the potential to significantly reduce delinquencies, this payment process is strictly voluntary at DOD. According to Bank of America, split disbursements accounted for 30 percent of total payments made by Navy employees in June 2002. 
This rate represented a large increase over fiscal year 2001, when only 16 percent of Navy payments were made through split disbursements. As a result of these actions, the Navy experienced a significant drop in charged-off accounts in the first half of fiscal year 2002. The Navy has also initiated actions to improve the management of travel card usage. The Navy has a three-pronged approach to address travel card issues: (1) provide clear procedural guidance to APCs and travelers, available on the Internet, (2) provide regular training to APCs, and (3) enforce proper use and oversight of the travel card through data mining to identify problem areas and abuses. Further, to reduce the risk of card misuse, the Navy has also begun to deactivate cards while travelers are not on travel status and close a number of inactive cards, and plans to close inactive cards semi-annually to eliminate credit risk exposure. The Navy is also pursuing the use of “pre-funded” debit or stored value cards for high-risk travelers—funds would be available on the cards when travel orders were issued in an amount authorized on the order. Further, the DOD Comptroller created a DOD Charge Card Task Force to address management issues related to DOD’s purchase and travel card programs. We met with the task force in June 2002 and provided our perspectives on both programs. The task force issued its final report on June 27, 2002. To date, many of the actions that DOD has taken primarily address the symptoms rather than the underlying causes of the problems with the program. Specifically, actions to date have focused on dealing with accounts that are seriously delinquent, which are “back-end” or detective controls rather than preventive controls. To effectively reform the travel program, DOD and the Navy will need to work to prevent potentially fraudulent and abusive activity and severe credit problems with the travel card. 
We are encouraged that the DOD Comptroller recently took action to deactivate the travel cards of all cardholders who have not been on official government travel within the last 6 months. However, additional preventive solutions are necessary if DOD is to effectively address these issues. To that end, we will be issuing a related report in this area with specific recommendations, including a number of preventive actions that, if effectively implemented, should substantially reduce delinquencies and potentially fraudulent and abusive activity related to Navy travel cards. For example, we plan to include recommendations that will address actions needed in the areas of exempting individuals with histories of financial problems from the requirement to use a travel card; providing sufficient infrastructure to effectively manage and provide day-to-day monitoring of travel card activity related to the program; deactivating cards when employees are not on official travel; taking appropriate disciplinary action against employees who commit fraud or abuse of the travel card; ensuring that information on travel card fraud or abuse of cardholders with secret or top-secret security clearances is provided to appropriate security officials for consideration in whether such clearances should be suspended or revoked; and moving towards mandating use of the split disbursement payment process. The defense authorization bill for fiscal year 2003 passed by the Senate reflected a move in this direction. This bill would change the voluntary use of split disbursements by authorizing the Secretary of Defense to require that any part of an employee’s travel allowance be disbursed directly to the employee’s travel card issuer for payment of official travel expenses. The defense authorization bill for fiscal year 2003 passed by the House does not contain comparable authority. As of September 12, 2002, the bill (H.R. 4546) was in conference. Mr. 
Chairman, Members of the Subcommittee, and Senator Grassley, this concludes my prepared statement. I would be pleased to respond to any questions that you may have. For further information regarding this testimony, please contact Gregory D. Kutz at (202) 512-9505 or kutzg@gao.gov or John J. Ryan at (202) 512-9587 or ryanj@gao.gov. We used as our primary criteria applicable laws and regulations, including the Travel and Transportation Reform Act of 1998, the General Services Administration’s Federal Travel Regulation, and the DOD Financial Management Regulations, Volume 9, Travel Policies and Procedures. We also used as criteria our Standards for Internal Control in the Federal Government and our Guide to Evaluating and Testing Controls Over Sensitive Payments. To assess the management control environment, we applied the fundamental concepts and standards in our internal control standards to the practices followed by management at our three case study locations. To assess the magnitude and impact of delinquent and charged-off accounts, we compared the Navy’s delinquency and charge-off rates to those of other DOD services and agencies and federal civilian agencies. We also analyzed the trends in the delinquency and charge-off data from the third quarter of fiscal year 2000 through the first half of fiscal year 2002. In addition, we obtained and analyzed Bank of America data to determine the extent to which Navy travel card holders wrote NSF checks to pay their travel card bills. We also obtained documented evidence of disciplinary action against cardholders with accounts that were in charge-off or salary offset status or had NSF checks written in payment of those accounts. We accepted hard copy file information and verbal confirmation by independent judge advocate general officials as documented evidence of disciplinary action. We also used data mining to identify Navy individually billed travel card transactions for audit. 
Our data mining procedures covered the universe of individually billed Navy travel card activity during fiscal year 2001 and the first 6 months of fiscal year 2002, and identified transactions that we believed were potentially fraudulent or abusive. However, our work was not designed to identify, and we did not determine, the extent of any potentially fraudulent or abusive activity related to the travel card. To assess the overall control environment for the travel card program at the Department of the Navy, we obtained an understanding of the travel process, including travel card management and oversight, by interviewing officials from the Office of the Undersecretary of Defense (Comptroller), Department of the Navy, Defense Finance and Accounting Service (DFAS), Bank of America, and the General Services Administration, and by reviewing applicable policies and procedures and program guidance they provided. We visited three Navy units to “walk through” the travel process, including the management of travel card usage and delinquency, and the preparation, examination, and approval of travel vouchers for payment. We also assessed actions taken to reduce the severity of travel card delinquencies and charge-offs. Further, we contacted one of the three largest U.S. credit bureaus to obtain credit history data and information on how credit scoring models are developed and used by the credit industry for credit reporting. To test the implementation of key controls over individually billed Navy travel card transactions processed through the travel system—including the travel order, travel voucher, and payment processes—we obtained and used the database of fiscal year 2001 Navy travel card transactions to review random samples of transactions at three Navy locations. 
Because our objective was to test controls over travel card expenses, we excluded credits and miscellaneous debits (such as fees) from the population of transactions used to select a random sample of travel card transactions to audit at each of the three Navy case study units. Each sampled transaction was subsequently weighted in the analysis to account statistically for all charged transactions at each of the three units, including those that were not selected. We selected three Navy locations for testing controls over travel card activity based on the relative amount of travel card activity at the three Navy commands and at the units under these commands, the number and percentage of delinquent accounts, and the number and percentage of charged-off accounts. Each of the units within the commands was selected because of the relative size of the unit within the respective command. Table 3 presents the sites selected and the universe of fiscal year 2001 transactions at each location. We performed tests on statistical samples of travel card transactions at each of the three case study sites to assess whether the system of internal control over the transactions was effective, as well as to provide an estimate, by unit, of the percentage of transactions that were not for official government travel. For each transaction in our statistical sample, we assessed whether (1) there was an approved travel order prior to the trip, (2) the travel voucher payment was accurate, (3) the travel voucher was submitted within 5 days of the completion of travel, and (4) the travel voucher was paid within 30 days of the submission of an approved travel voucher. We considered transactions not related to authorized travel to be abuse and incurred for personal purposes. 
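The projection and classification logic used in these tests can be sketched as follows. This is a minimal illustration only: the sample counts are hypothetical, and a simple unweighted proportion stands in for the weighted statistical projection described above; the effectiveness thresholds are those stated earlier (0 to 5 percent effective, 6 to 10 percent partially effective, greater than 10 percent ineffective).

```python
# Sketch of projecting a control-failure rate from a random sample of
# travel card transactions and classifying control effectiveness.
# Sample counts are hypothetical; a real projection would apply the
# sampling weights described in the text.

def projected_failure_rate(failures, sample_size):
    """Failure rate, in percent, projected from the sample proportion."""
    return 100 * failures / sample_size

def classify(rate_pct):
    """Apply the effectiveness thresholds stated in the testimony."""
    if rate_pct <= 5:
        return "effective"
    if rate_pct <= 10:
        return "partially effective"
    return "ineffective"

# Hypothetical result: 9 of 120 sampled transactions failed the control.
rate = projected_failure_rate(9, 120)
print(round(rate, 1), classify(rate))  # 7.5 partially effective
```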
The results of the samples of these control attributes, as well as the estimate for personal use—or abuse—related to travel card activity, can be projected to the population of transactions at the respective test case study site only, not to the population of travel card transactions for all Navy cardholders. Table 4 shows the results of our test of the key control related to the authorization of travel (approved travel orders were prepared prior to dates of travel). Table 5 shows the results of our test for effectiveness of controls in place over the accuracy of travel voucher payments. Table 6 shows the results of our tests of two key controls related to timely processing of claims for reimbursement of expenses related to government travel—timely submission of the travel voucher by the employee and timely approval and payment processing. To determine if cardholders were reimbursed within 30 days, we used payment dates provided by DFAS. We did not independently validate the accuracy of these reported payment dates. We briefed DOD managers; Navy managers, including officials from the Office of the Assistant Secretary of the Navy (Financial Management and Comptroller), unit commanders, and APCs; and Bank of America officials on the details of our audit, including our findings and their implications. We incorporated their comments where appropriate. We did not audit the general or application controls associated with the electronic data processing of Navy travel card transactions. We conducted our audit work from January 2002 through September 2002 in accordance with U.S. generally accepted government auditing standards, and we performed our investigative work in accordance with standards prescribed by the President’s Council on Integrity and Efficiency. Following this testimony, we plan to issue a report, which will include recommendations to DOD and the Navy for improving internal controls over travel card activity. 
Tables 7, 8, and 9 show the grade, rank (where relevant), and the associated basic pay rates for fiscal year 2001 for the Navy’s and Marine Corps’ military personnel and civilian personnel. Table 12 shows cases of travel card use for personal expenses where the cardholder paid the bill.
This testimony discusses the Department of the Navy's internal controls over the government travel card program. The Navy's average delinquency rate of 12 percent over the last 2 years is nearly identical to the Army's, which has the highest delinquency rate in the Department of Defense, and 6 percentage points higher than that of federal civilian agencies. The Navy's overall delinquency and charge-off problems, which have cost the Navy millions in lost rebates and higher fees, are primarily associated with lower-paid, enlisted military personnel. In addition, lack of management emphasis and oversight has resulted in management failure to promptly detect and address instances of potentially fraudulent and abusive activities related to the travel card program. During fiscal year 2001 and the first 6 months of fiscal year 2002, over 250 Navy personnel might have committed bank fraud by writing three or more nonsufficient fund checks to Bank of America, while many others abused the travel card program by failing to pay Bank of America charges or using the card for inappropriate transactions such as prostitution and gambling. However, because Navy management was often not aware of these activities, disciplinary actions were not consistently taken against these cardholders. GAO also found a significant relationship between travel card fraud, abuse, and delinquencies and individuals with substantial credit history problems. Many cardholders whose accounts were charged off or put in salary offset had bankruptcies and accounts placed in collection prior to receiving the card. The Navy's practice of authorizing a travel card to be issued to virtually anyone who asked for it compounded an already existing problem by giving those with a history of bad financial management additional credit. 
Although GAO found that Navy management had taken some corrective actions to address delinquencies and misuse, additional preventive solutions are necessary if Navy is to effectively address these issues.
Ex-Im operates under the authority of the Export-Import Bank Act of 1945, as amended. It is an independent agency of the U.S. government. Ex-Im’s mission is to support jobs in the United States by facilitating the export of U.S. goods and services. In fiscal year 2012, Ex-Im authorized about $35.8 billion, for 3,796 transactions, to support U.S. exports. Ex-Im is part of the U.S. Trade Promotion Coordinating Committee, an interagency committee chaired by Commerce and tasked with coordinating the export promotion and financing activities of federal agencies. Ex-Im is also a key participant in the National Export Initiative, a strategy announced in 2010 to double U.S. exports by 2015 to support U.S. employment. Ex-Im provides four types of financing: direct loans, loan guarantees, working capital guarantees, and export credit insurance. Direct loans: Medium- and long-term fixed-rate loans Ex-Im provides directly to foreign buyers of U.S. goods and services. Loan guarantees: Medium- and long-term loan guarantees to lenders that Ex-Im will pay the lender if the foreign buyer of U.S. goods and services, who received the loan, defaults. Working capital guarantees: Guarantees to lenders for U.S.-based companies to obtain short-term loans that facilitate the export of goods and services. Working capital guarantee loans may be approved for a revolving line of credit that supports multiple export sales or a single loan that supports a specific export contract. Insurance: Short- and medium-term insurance Ex-Im provides to U.S. exporters to protect them against the risk of nonpayment by foreign buyers for commercial or political reasons. To balance the interests of multiple stakeholders and Ex-Im’s mission to support U.S. jobs through export financing, Ex-Im has a domestic content policy regarding the amount of U.S. content directly associated with the goods and services exported from the United States. 
Ex-Im’s short-term transaction content policy requires at least 50 percent U.S. content. For medium- and long-term transactions, there is no minimum U.S. content requirement to receive a portion of financing, but Ex-Im’s support is limited to the lesser of (1) 85 percent of the total value of all eligible goods and services in the U.S. export transaction, or (2) 100 percent of the value of the domestic content in all eligible goods and services in the U.S. export transaction. To be eligible for support, goods must be shipped from the United States. Other industrial countries have their own export credit agencies. For example, the other Group of Seven (G-7) countries all have at least one export credit agency. G-7 agencies differ in the magnitude and types of their activities. All offer medium- and long-term officially supported export credits. Export credit agencies also can provide other products and services that can complicate comparisons among institutions. Ex-Im’s mission of supporting domestic jobs through exports is unique among the G-7 agencies. Ex-Im’s charter states that the bank’s objectives are to contribute to maintaining or increasing the employment of U.S. workers by financing and facilitating exports through loans, guarantees, insurance, and credits. This mission underlies certain Ex-Im policies, such as its economic impact analysis requirement and its domestic content policy. Other export credit agencies’ missions range from promoting and supporting domestic exports to securing natural resources. To estimate the number of U.S. jobs associated with the exports it helps finance, Ex-Im uses a methodology based on the input-output approach. To apply this methodology, Ex-Im uses a BLS data product, known as employment requirements tables (ERT), which are based on the input-output methodology. It is important to understand the ERT because these tables play an essential role in Ex-Im’s jobs calculation process. 
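The medium- and long-term financing cap described in the content policy above reduces to a simple formula: support is the lesser of 85 percent of the total value of all eligible goods and services or 100 percent of the value of the domestic content. A minimal sketch of that calculation follows; the transaction figures are hypothetical, not actual Ex-Im transactions.

```python
# Sketch of the medium- and long-term financing cap in Ex-Im's domestic
# content policy: the lesser of (1) 85 percent of the total value of
# eligible goods and services or (2) 100 percent of the U.S. content
# value. All dollar figures below are hypothetical.

def max_exim_support(total_eligible_value, us_content_value):
    return min(0.85 * total_eligible_value, us_content_value)

# A $10 million transaction with $9 million of U.S. content:
# the 85 percent cap binds.
print(max_exim_support(10_000_000, 9_000_000))  # 8500000.0

# The same transaction with only $6 million of U.S. content:
# the domestic content cap binds.
print(max_exim_support(10_000_000, 6_000_000))  # 6000000
```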
The ERT provide the total number of jobs (on average) supported by production in each industry. These BLS data allow Ex-Im to produce a measure that translates the value of the exports it supports in each industry into an employment estimate for that industry. In order to use the ERT, Ex-Im must rely on data from its own system. Ex-Im’s four-step process estimates the value of all the exports it supports by the industries associated with those exports. By combining data from the ERT and its own system and aggregating across industries, Ex-Im produces an estimate of the total jobs its financing supported. The methodology Ex-Im uses relies on a basic input-output approach. According to Ex-Im and Commerce officials, the basic input-output approach was designated as the standard for U.S. government agencies by the Trade Promotion Coordinating Committee and has the advantage of generating a uniform jobs calculation methodology across the federal government. The logic underlying the input-output modeling approach assumes that the production of goods and services in an economy uses inputs (such as labor) in fixed proportions. Consequently, it is possible to determine the quantity of labor required for a given level of production. To apply this methodology, Ex-Im uses the ERT, data tables created by BLS, to estimate the number of jobs associated with the specific value of exports Ex-Im supports, rather than the value of total U.S. exports. The ERT are derived from a set of data showing the relationship between industries, known as input-output tables. For researchers using an input-output approach, the ERT can be used for analyses that attempt to estimate the employment effects of exports. BLS develops the ERT so that users can analyze the job impact of various types of expenditures, such as exports. 
The ERT contain, for 195 industries, the number of jobs required to produce one million dollars of value in each industry (this report refers to this factor as the “jobs ratio”). Because industries may vary widely in how many jobs they support per million dollars of expenditure, it is important for Ex-Im to correctly identify the industry associated with each export transaction it finances. BLS produces two types of ERT, one that includes the employment effects of both domestic and imported production and another that removes the employment effects of imports so that only domestic production is captured. Ex-Im uses the domestic ERT to estimate the number of U.S. jobs associated with its exports. While annual versions of the ERT are produced, the most current year available as of May 2013 is 2010. Using the ERT, it is possible to obtain either the jobs supported directly in a particular industry, or in a particular industry plus the industries that support its production. For example, construction directly supports jobs in the construction industry but also indirectly supports jobs in industries that supply the material necessary for construction, such as the steel industry. Ex-Im uses the value that also includes employment in supporting industries, which produces a larger jobs ratio. Sometimes this larger estimate is called the “direct plus indirect effect” or “supply chain.” Ex-Im’s process for using the ERT has four steps. First, it determines the industry associated with each transaction. In some cases, there could be multiple industries associated with a transaction, if Ex-Im financed multiple products in the transaction. Second, it determines the total value of exports Ex-Im supports for each industry. Third, it multiplies these export values by BLS’s jobs ratio for each industry to obtain the jobs for that industry. Finally, it aggregates across all industries to produce an overall estimate. Figure 1 depicts each step of the process. 
In step 1, Ex-Im either uses the industry code provided by the applicant (the exporter or the lender) or relies on its engineers (whom Ex-Im considers its in-house industry experts) to identify the appropriate North American Industry Classification System (NAICS) code for the contracts associated with each transaction Ex-Im finances or supports. Ex-Im translates its data on transactions into the same industry groups (i.e., NAICS-based codes) used by BLS. The method by which Ex-Im obtains the NAICS code varies by length of repayment term. For short- and medium-term financing and working capital credit, the applicant (either the exporter or the lender) provides the NAICS code. For long-term financing, Ex-Im engineers work with the exporters and project sponsor to determine the NAICS code. According to Ex-Im officials, to verify and assign NAICS codes in long-term financing, Ex-Im uses both the guidance provided by the codebook for assigning NAICS codes and the experience of the engineer. In step 2, Ex-Im estimates a dollar value of exports it supports, referred to as the export value. It does this for each transaction it finances. However, according to Ex-Im officials, because Ex-Im provides different types of financial products, it uses two different methods to determine the export values. 1. For some financial products, such as direct loans and loan guarantees, Ex-Im determines the export value after authorization— but before disbursement—by using information provided on the exporter’s certificate. Specifically, Ex-Im determines the export value by using the net contract price—the aggregate price of all goods and services to be exported (i.e., U.S. content plus eligible foreign content that does not include local costs). Ex-Im includes the value of the purchase of goods and services that were financed by entities other than Ex-Im. In other words, the export value is the value of exports in purchase orders that were at least partially financed by Ex-Im. 
According to Ex-Im officials, they generally provided approximately 83 percent of the financing for medium and long-term transactions for fiscal year 2010 through fiscal year 2012. 2. For other financial products, such as short-term insurance or working capital, Ex-Im uses the entire value of the credit or the insurance policy as the proxy for the export value. Because the export value is not known at the time of authorization, Ex-Im cannot use the net contract price to determine the export value. These products include revolving lines of credit that may be drawn down multiple times during the available period. Under this type of support, a domestic exporter can access the credit to make purchases and later repay the debt, thereby making additional credit available. According to Ex-Im, this approach may result in an understatement of the total value of the exports, as multiple purchases can occur without ever reaching the limit. However, Ex-Im also confirmed that using the entire value of the credit or insurance policy could result in an overstatement, if all the credit is not used. At the end of step 2, Ex-Im creates a summary table, where each row contains the sum of export value in an industry. In step 3, Ex-Im multiplies the export values for each industry by the appropriate jobs ratio from the ERT. Finally, in step 4 it sums across all of the industries to obtain a single estimate for the number of jobs it supports. Using this process, Ex-Im estimated 255,000 jobs supported in 2012. To illustrate, on average, Ex-Im used the following steps: Ex-Im determined that it supported approximately $40 billion of exports. On average, in fiscal year 2012, every million dollars of exports supported by Ex-Im was associated with 6.5 jobs (based on the industries that used Ex-Im financing, and the ERT). Finally, multiplying approximately 40 billion dollars of exports by 6.5 jobs per million results in approximately 255,000 jobs. 
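The four-step process and the arithmetic in the illustration above can be sketched as follows. The transactions, industry names, and jobs ratios in this sketch are hypothetical, not actual Ex-Im transaction data or BLS ERT values; it shows only the mechanics of steps 1 through 4.

```python
# Sketch of Ex-Im's four-step jobs calculation (illustrative data only;
# real jobs ratios come from BLS employment requirements tables).

# Step 1: each transaction carries an export value and a NAICS-based
# industry (hypothetical industries and dollar amounts).
transactions = [
    {"industry": "aircraft", "export_value": 20_000_000},
    {"industry": "aircraft", "export_value": 5_000_000},
    {"industry": "construction", "export_value": 10_000_000},
]

# Jobs supported per $1 million of output, by industry (hypothetical
# stand-ins for the ERT jobs ratios).
jobs_ratios = {"aircraft": 5.2, "construction": 8.1}

# Step 2: sum export values by industry.
export_by_industry = {}
for t in transactions:
    export_by_industry[t["industry"]] = (
        export_by_industry.get(t["industry"], 0) + t["export_value"]
    )

# Steps 3 and 4: apply each industry's jobs ratio, then sum across
# industries for a single overall estimate.
total_jobs = sum(
    (value / 1_000_000) * jobs_ratios[industry]
    for industry, value in export_by_industry.items()
)
print(round(total_jobs))  # 211
```

The same arithmetic underlies the aggregate illustration in the text: roughly $40 billion of supported exports at an average of 6.5 jobs per million dollars yields an estimate on the order of 255,000 jobs.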
In order to verify our understanding of Ex-Im’s jobs calculation process, we obtained the individual transaction-level data from Ex-Im, including the export values and industry codes for each transaction. We then merged those data with the most recent ERT from BLS and summed across all transactions. Using these data, we were able to obtain close to Ex-Im’s exact value for the total number of jobs supported, thus confirming the process that Ex-Im described to us. For more detail about our analysis, see appendix I. The basic methodology used by Ex-Im has recognized limitations, and Ex-Im also makes certain assumptions about its data. However, in its reports, Ex-Im does not describe limitations or fully detail assumptions that are inherent to the methodology. As a result, stakeholders may not fully understand what the job number represents or how to interpret it in the proper context. Although the input-output approach on which the ERT are based is a commonly used methodology, this approach has several limitations. Some of these limitations are inherent to the ERT. Additional limitations result from assumptions Ex-Im makes about its data on the industry codes and export values for the export transactions it finances. The limitations specific to the ERT are outside of Ex-Im’s direct control. For example, officials from Commerce and Ex-Im said that the data in the ERT cannot be used to distinguish between jobs that were newly created and those that were maintained. The ERT simply show the direct and indirect (also known as supply chain) employment per $1 million of sales of goods to final users for each commodity, not whether these are “jobs created” (employing previously unemployed people or people out of the labor force, such as students), or “jobs maintained” (continuing pre-existing employment). According to BLS officials, it would be challenging to find data that can distinguish between newly created and maintained jobs. 
Obtaining data detailed enough to allow a researcher to make that distinction would require many more resources than are currently available to BLS, according to these officials. They added that this is a general limitation of the input-output methodology, upon which the ERT are based, and which is a standard methodology used to calculate average employment and other inputs needed for a certain level of production. Because of the lack of specificity and limitations, Ex-Im officials report that the jobs are “associated with” or “supported by” Ex-Im financing. Moreover, the documentation accompanying the ERT also describes several limitations and assumptions to those data, including the following: The employment data are a count of jobs, not of persons employed, and treat full-time, part-time, and seasonal jobs equally. Persons who hold multiple jobs show up multiple times in the employment data. The age of the data underlying the ERT is a general limitation of BLS’s employment requirements tables. The ERT incorporate a large amount of data, which takes time to collect and put in the ERT framework, according to BLS officials. Ex-Im is using the latest available ERT, the 2010 ERT; however, the industry relationships that the ERT are based on come from 2002 data from BEA. BLS officials stated that the current economy may be very different from the economy in 2002, and the relationships reflected in the latest available ERT are a decade old. BLS officials acknowledged, however, that these data are the best currently available for Ex-Im to use. Furthermore, the ERT data assume average industry relationships; however, Ex-Im’s clients could be different from the typical firm in the same industry. For example: The ERT that are adjusted to reflect only domestic employment assume that each industry’s share of domestic versus international use of a particular input is constant across industries. 
For example, these ERT assume that the automobile industry uses the same proportion of imported steel as the construction industry. Because of Ex-Im’s domestic content policy, agency officials said that Ex-Im does not consider the exports supported by its financing to contain the same level of imports as the industry averages. Ex-Im officials agreed that this is a limitation but said that using BLS’s adjusted ERT helps ensure that imported content is accounted for to some extent. Ex-Im officials told us they had not assessed the extent to which this limitation affects the overall jobs estimate. In addition, officials from Export Development Canada and Ex-Im and a trade policy researcher said that using input-output methodology to calculate employment estimates for specific transactions is also a limitation, since a particular export may be different than the average for that industry. The ERT also exclude the impact of spending that results from income generated by Ex-Im supported jobs, sometimes called the multiplier effect. For example, an increase in employment in a factory may result in employment at a nearby restaurant. According to BLS, including these additional consumer expenditures would result in a larger employment impact. Some limitations stem from Ex-Im’s process for determining the industry and export value. As discussed previously, during step 1 (as shown in fig. 1), Ex-Im determines the industry associated with each transaction. However, in some cases, Ex-Im has been unable to determine the industry code. In cases where the NAICS code is missing for transactions, Ex-Im has used the average across all of its other industries as the jobs ratio. In almost all of those cases we identified with missing NAICS codes (that had positive export values), the type of support was short-term insurance. 
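The fallback for transactions with missing NAICS codes described above, in which the average jobs ratio across all other industries is applied, can be sketched as follows. The industry ratios here are hypothetical, not actual ERT values.

```python
# Sketch of the missing-NAICS-code fallback: when a transaction's
# industry is unknown, apply the average of the known jobs ratios.
# Ratios below are hypothetical stand-ins for ERT values.
jobs_ratios = {"aircraft": 5.2, "construction": 8.1, "machinery": 6.2}
average_ratio = sum(jobs_ratios.values()) / len(jobs_ratios)

def ratio_for(industry):
    """Return the industry's jobs ratio, or the average if unknown."""
    return jobs_ratios.get(industry, average_ratio)

print(ratio_for("aircraft"))          # 5.2
print(round(ratio_for(None), 2))      # 6.5
```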
According to Ex-Im, in short-term insurance, the lender may not know at the time of authorization which exporter will benefit from the insurance coverage, and this may explain why the NAICS code is not identified. Ex-Im’s jobs calculation methodology is also sensitive to certain assumptions about how it determines the export value based on its financing. For example, as discussed previously in step 2, using the authorized amount as the export value for short-term insurance transactions could overstate or understate the actual export value. In addition, according to Ex-Im officials, the export value includes the value of the purchase of goods and services that were financed by entities other than Ex-Im. Finally, according to government officials and trade policy researchers, the methodology that Ex-Im uses does not answer the question of what would have happened without Ex-Im financing. A Commerce report and trade policy researchers we consulted noted that in a high unemployment economy, additional exports may result in additional jobs. However, in a low unemployment economy, additional exports may result in jobs shifting from one firm to another, without an increase in total employment. Ex-Im reports the number of jobs its financing supports and the methodology it uses but does not describe the limitations or fully detail the assumptions related to its data or methodology. Ex-Im first reported the total number of jobs it supports in its 2010 annual report and started providing an overview of its methodology in its 2011 report. The 2012 report states that the Trade Promotion Coordinating Committee identified this basic methodology as the official U.S. government calculation of jobs supported through exports. 
The report further states that Ex-Im uses the latest available domestic ERT from BLS (which is based on input-output tables from BEA), National Income and Product Accounts data (also from BEA), and BLS industry employment data to calculate the number of jobs associated with Ex-Im-supported exports of goods and services. Ex-Im has also reported the number of jobs it supports in various other documents, including reporting to comply with the Government Performance and Results Act, the Chairman’s statements to Congress, its website, and press releases. Some press releases that announce new transactions also state the number of jobs associated with a specific transaction. Most of the press releases we reviewed provide only a brief statement about how Ex-Im calculates its job estimate. For example, an October 2, 2012, press release announcing $105 million in financing for an aquarium in Brazil states: “The transaction will support approximately 700 American jobs, according to bank estimates derived from Departments of Commerce and Labor data and methodology.” Ex-Im officials told us that the agency uses the results of its jobs calculations for reporting purposes only. According to Ex-Im officials, Ex-Im calculates the number of jobs supported for the transactions reviewed by Ex-Im’s Board of Directors, at the request of one of its board members. Ex-Im board members stated that the purpose of reporting these numbers is to give Congress a sense of the employment effects of Ex-Im activities; they do not use them for decision making. Board members also told us that the chief consideration when making a financing decision is the creditworthiness of the firm. Officials stated that they do not make decisions based on how many jobs would be supported by a particular transaction. However, none of Ex-Im’s reporting discusses limitations or fully details the assumptions in its data or in the methodology it uses.
Most of the limitations and assumptions are not specific to Ex-Im, but are common to the methodology. For example, Ex-Im’s brief discussion of the methodology in its 2012 annual report does not explain that the methodology does not allow it to differentiate between the number of new jobs that were created and the number of jobs maintained as a result of its financing. In addition, Ex-Im does not specify that jobs associated with the multiplier effect are not captured in its jobs estimates. Further, the report does not state that the employment estimate is an overall count of jobs, not full-time equivalents. Thus, the number of jobs that Ex-Im says it supports can include part-time and seasonal jobs. Similarly, its press releases that include the number of jobs associated with a specific transaction also do not state the limitations and assumptions associated with the methodology. Officials said that, in reporting the number of jobs associated with Ex-Im financing, they clearly state that it is an estimate. Because it is a nonfinancial and unaudited number, the caveat of “estimate” seemed sufficient, according to Ex-Im officials. According to GAO’s Standards for Internal Control in the Federal Government, effective communications should occur in a broad sense with information flowing down, across, and up the organization. Management should ensure there are adequate means of communicating with, and obtaining information from, external stakeholders that may have a significant impact on the agency achieving its goals. By not including more information in its report, Ex-Im does not allow readers, including congressional and public stakeholders, to fully understand what the jobs number represents or how to interpret it in the proper context. Although alternative methodologies may address some of the limitations in Ex-Im’s jobs calculation methodology, these alternatives have their own limitations.
Trade policy researchers we spoke to suggested alternative methodologies that Ex-Im could potentially use to calculate the effects of its financing on employment. However, these methodologies have their own limitations, such as not capturing the effects of Ex-Im financing on indirect jobs (the supply chain); they would also require a significant amount of data collection by Ex-Im that would be time consuming, require more technical expertise, and cost more. One trade policy researcher we spoke to suggested that Ex-Im could conduct an assessment of firms that received Ex-Im financing in comparison to firms that did not receive Ex-Im financing. This approach, using firm-specific data, could potentially estimate whether the jobs would have existed without Ex-Im financing. For example, the German Ministry of Economics and Technology commissioned a study by the University of Munich on the employment effects of the export credit guarantees provided by the German export credit agency. This 6-month study used econometric analysis to examine firm-level data while taking into account other potential causes of export success and found that the German export credit agency’s guarantees had increased exports and created jobs. According to the report, the estimate of jobs created using this approach was comparable to estimates derived from an input-output approach. However, Ex-Im officials noted that the type of data used in the German study may not be readily available to Ex-Im in the United States. Another trade policy researcher suggested a different approach using firm-level data from Census or BLS to examine job creation and destruction over time. This approach could potentially shed light on changes in the labor market not captured by a total jobs number, such as whether these are new jobs or whether firms supported by Ex-Im are less likely to reduce employment.
In contrast, the current input-output method used by Ex-Im provides a static look at the number of jobs supported by Ex-Im financing and does not show how the economy has gained or lost jobs over time. While the approach of using firm-level data may yield information about the creation and destruction of jobs, it may not yield a static estimate of the number of jobs supported. In addition, BLS officials stated that such an analysis would only identify whether a firm’s total employment increased or decreased over time, but would not identify a new set of jobs in the firm and would not control for factors other than Ex-Im financing that could cause a change in employment. An Export Development Canada official also stated that such a methodology introduces the potential for selection bias. Furthermore, Commerce officials stated that such an analysis would be too time consuming to conduct every year. Two trade policy researchers and Ex-Im and Export Development Canada officials we spoke with said that these alternative approaches that rely on firm-level data would require more resources for data collection and analysis than does Ex-Im’s current input-output based methodology. In particular, a methodology using firm-level data would require a significant amount of data collection by Ex-Im that would be time consuming, require more technical expertise, and have a monetary cost. Moreover, these alternatives may not capture the indirect (the supply chain) effect of Ex-Im financing. These trade policy researchers said that the input-output approach is appropriate given Ex-Im’s limited resources and how the agency uses the number of jobs supported. Export Development Canada officials said they use an input-output based approach, which also captures the indirect (the supply chain) effect, similar to the methodology used by Ex-Im to calculate the number of jobs supported by its financing.
However, for insurance products, Export Development Canada uses the contracts for the exports it is supporting to calculate the export value. This approach allows this agency to capture export values that differ from authorized amounts since the authorized amount could overstate or understate the actual export value. Ex-Im officials said they lack the staff and resources to adopt Export Development Canada’s method and that Ex-Im faces some limitations with its data systems. Additionally, using the authorized value for the short-term insurance products, Ex-Im officials said, ensures that the value is only counted once in the fiscal year it was authorized and is not counted again in subsequent fiscal years. Prior to the use of the input-output based approach, Ex-Im, as well as Export Development Canada, tried to collect information on the number of jobs associated with their financing directly from the companies that received the financing. Officials from both agencies said that they had problems with the data they received from the companies. An official from Export Development Canada also said that smaller companies found this process burdensome. According to Ex-Im, surveyed firms responded in inconsistent ways, such as claiming all employed workers at a firm were supported by the exports. Ex-Im officials also reported that because financial intermediaries or foreign buyers often submit the applications for Ex-Im financing, Ex-Im does not know the jobs impact for the U.S. exporter or service provider. Moreover, any jobs-impact information from applicants does not account for indirect jobs created in the supply chain, which the input-output approach does include. Ex-Im’s primary mission is to support U.S. jobs through the exports that it finances, and it estimates the number of jobs supported by its financing in order to provide Congress and the public with a broad sense of its impact on U.S. employment.
The jobs number reported by Ex-Im is an estimate, used as an indicator of how the agency is fulfilling its mission to support U.S. employment. Although the methodology Ex-Im uses does not distinguish between jobs that were newly created and jobs that were maintained, its current methodology has certain advantages. For example, it is based on the input-output approach commonly used in economic analysis; it includes indirect jobs in the supply chain; and it can be performed using limited resources. Providing a precise accounting of the jobs supported by Ex-Im’s financing may not be feasible because of the complexity and cost of doing so. While trade policy researchers we consulted identified other methodologies, they also identified limitations of those methodologies. For example, these methodologies would require more resources to conduct, would be difficult to perform on a regular basis, and would not include indirect jobs in the supply chain. Nonetheless, there are important limitations and assumptions that affect Ex-Im’s estimate of the number of jobs supported by its financing. While Ex-Im’s reporting includes a brief overview of its methodology, it has not included a discussion of the limitations or fully detailed the assumptions of the methodology and data. The lack of detailed reporting reduces the ability of congressional and public stakeholders to fully understand what the jobs number represents and the extent to which Ex-Im’s financing may have affected U.S. employment. To ensure better understanding of its jobs calculation methodology, the Chairman of Ex-Im Bank should increase transparency by improving reporting on the assumptions and limitations in the methodology and data used to calculate the number of jobs Ex-Im supports through its financing. We provided a draft of this report to Ex-Im, Commerce, and Labor for comment. We also provided relevant sections to Export Development Canada for technical comment.
In its written comments, which are reproduced in appendix II, Ex-Im stated that it agrees with GAO’s recommendation and will provide greater detail on the assumptions and limitations associated with its jobs calculation methodology. Ex-Im further stated that it will begin implementation of the recommendation this fiscal year with its 2013 annual report, which will include greater information on the assumptions and limitations of its methodology. Ex-Im will provide this information in annual reports and on its website. Commerce stated that it agrees with GAO’s recommendation for improved reporting on how Ex-Im calculates the number of jobs that are supported by exports for which it provides financing. Commerce also recommended that Ex-Im make it clear that its jobs estimate is indicative of jobs supported by Ex-Im financing and is different from the estimate of jobs supported by total U.S. exports that Commerce publishes as the official estimate of the U.S. government. Commerce’s comments are reproduced in appendix III. Export Development Canada stated that it recognizes the deficiencies in the input-output approach, but that it believes that compared with other potential methodologies, this approach provides the best solution. According to Export Development Canada, the input-output approach uses a simple method to capture the indirect impact of the supply chain on domestic employment. In addition, Export Development Canada said that while using firm-level data to estimate the effect of financing might offer other insights, it would also be complex to analyze and could introduce another bias. Further, Export Development Canada said that in its experience, surveying firms directly may not lead to reliable information and could also be burdensome to smaller firms. Ex-Im, Commerce, and Export Development Canada also provided technical comments that were incorporated as appropriate. We received no comments from Labor.
We are sending copies of this report to interested congressional committees, the Chairman of the Export-Import Bank of the United States, and the Secretaries of Commerce and Labor. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. The objectives of this report were to (1) describe the methodology and processes the Export-Import Bank of the United States (Ex-Im) uses to calculate the effects of its financing on employment in the United States, (2) examine the limitations of Ex-Im’s approach and how Ex-Im reports on its methodology, and (3) describe alternative methodologies and their limitations. To describe the methodology and processes Ex-Im uses to calculate the effects of its financing on employment in the United States, we interviewed Ex-Im staff involved with producing the estimate and reviewed descriptions of the estimate in the most recent annual reports and other documentation provided by Ex-Im. Because Ex-Im’s method uses the Bureau of Labor Statistics’ (BLS) employment requirements tables (ERT), we interviewed BLS staff and reviewed technical documentation on the ERT. In addition, we reviewed the Microsoft Excel spreadsheet Ex-Im uses to perform the estimate, examining the formulas used to produce it. Because Ex-Im provided the underlying raw data used by the spreadsheet, we were able to combine its data with the ERT data and replicate Ex-Im’s jobs estimate by the following steps. First, we downloaded the ERT directly from the BLS website. Then, we merged the ERT with the raw data provided by Ex-Im, by industry. Finally, we multiplied the jobs ratio (for the appropriate industry) by the export value in Ex-Im’s data and aggregated across transactions. Following this procedure, we obtained a value close to Ex-Im’s exact value for a jobs estimate.
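The merge-multiply-aggregate procedure described above can be sketched in a few lines. The industry codes, jobs ratios, and transaction values below are hypothetical placeholders for illustration only, not actual BLS or Ex-Im figures.

```python
# Hypothetical ERT jobs ratios (jobs supported per $1 million of exports),
# keyed by industry code -- illustrative values, not actual BLS figures.
ert_jobs_ratios = {
    "336411": 5.2,   # hypothetical ratio for one industry
    "333120": 6.8,   # hypothetical ratio for another industry
}

# Hypothetical transaction data: (industry code, export value in $ millions).
transactions = [
    ("336411", 120.0),
    ("333120", 40.0),
    ("336411", 15.0),
]

# Merge by industry, multiply the jobs ratio by the export value,
# and aggregate across all transactions to get the jobs estimate.
jobs_estimate = sum(
    value * ert_jobs_ratios[industry] for industry, value in transactions
)
print(jobs_estimate)
```

The same three steps (merge, multiply, aggregate) apply whether the work is done in Excel, SAS, or any other tool; only the data sources change.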
Replicating Ex-Im’s estimate helped to verify that Ex-Im followed the process and used the specific ERT that it stated it did, and that all of the raw data were reflected in its jobs estimate. We performed our replication using SAS, a computer program distinct from Excel. Based on our interviews with knowledgeable agency officials, review of relevant documentation, and replication of Ex-Im’s calculation, we determined the data were sufficiently reliable for the purposes of our report. To examine the limitations of Ex-Im’s approach, how Ex-Im reports on its methodology, and alternative methodologies, we reviewed relevant documentation related to Ex-Im, including recent annual reports, descriptions of Ex-Im’s jobs calculation methodology, and press releases that included information on jobs supported by Ex-Im financing. We also reviewed recent GAO reports on Ex-Im and export credit agencies, and literature related to input-output methodology. In addition, we interviewed Ex-Im officials from various divisions of the organization about how they calculate the number of jobs supported by Ex-Im’s financing, how they obtain data about Ex-Im’s transactions, and how the jobs number is used. We also interviewed officials from BLS at the Department of Labor to discuss the employment requirements tables (ERT). In addition, we interviewed officials from the Department of Commerce, specifically from the Bureau of Economic Analysis—which develops the data in the input-output tables that BLS uses in its ERT—and from the International Trade Administration—which also calculates the number of jobs supported by U.S. exports overall. We reviewed relevant documentation from these agencies, such as technical documentation on the ERT.
We also spoke with officials from the export credit agencies of four other countries—Canada, Japan, France, and the United Kingdom—to obtain information on their efforts to determine the number of jobs associated with their financing. We selected these countries’ export credit agencies because GAO had consulted with them on prior engagements based on their similarities to Ex-Im. We obtained information on a study that analyzed the employment effects of Germany’s export credit agency as an example of an alternative methodology. We met with three selected trade policy researchers to obtain their perspectives on Ex-Im’s methodology and discuss potential alternative methodologies to calculate the effect of Ex-Im’s financing on employment. We selected these researchers because GAO had consulted with them on prior engagements related to export credit agencies based on their knowledge of the issues, or they had been recommended to us through interviews with knowledgeable government officials due to their expertise in the area. In addition, we reviewed GAO’s Standards for Internal Control in the Federal Government to assess Ex-Im’s communication regarding its jobs calculation methodology. We conducted this performance audit from August 2012 to May 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the person named above, Jose Alfredo Gomez (Director), Juan Gobel (Assistant Director), Christina Werth, Rachel Girshick, and Benjamin Bolitzer made key contributions to this report.
Also contributing to this report were Karen Deans, Susan Offutt, Martin de Alteriis, Etana Finkler, Robert Alarapon, and Ernie Jackson.
Ex-Im provides loans, guarantees, and insurance to U.S. exporters. One of Ex-Im's primary missions is to support U.S. jobs through exports. In its 2012 annual report, Ex-Im stated that its financing helped support an estimated 255,000 export-related U.S. jobs. In 2012, Congress passed the Export-Import Bank Reauthorization Act of 2012. The act required GAO to report on the process and methodology used by Ex-Im to calculate the effects of export financing on U.S. employment. This report (1) describes the methodology and processes Ex-Im uses to calculate the effects of its financing on U.S. employment and (2) examines the limitations of Ex-Im's approach and how Ex-Im reports on its methodology, and provides additional related information. To address these objectives, GAO reviewed relevant Ex-Im documents, obtained and reviewed the data Ex-Im uses for its calculations, and interviewed agency officials and trade policy researchers. The U.S. Export-Import Bank's (Ex-Im) methodology to calculate the number of U.S. jobs associated with the exports it helps finance has four key steps. First, Ex-Im determines the industry associated with each transaction it finances. Second, Ex-Im calculates the total value of exports it supports for each industry. Ex-Im implements these first two steps using its own data. Third, Ex-Im multiplies the export value for each industry by the Bureau of Labor Statistics (BLS) ratio of jobs needed to support $1 million in exports in that industry--a figure known as the "jobs ratio." Finally, Ex-Im aggregates across all industries to produce an overall estimate. Ex-Im reports the number of jobs its financing supports and the methodology it uses but does not describe limitations of the methodology or fully detail its assumptions. Although the BLS data tables that Ex-Im relies on are based on a commonly used methodology, this methodology has limitations. 
For example, the employment data are a count of jobs that treats full-time, part-time, and seasonal jobs equally. In addition, the data assume average industry relationships, but Ex-Im's clients could be different from the typical firm in the same industry. Further, the underlying approach cannot answer the question of what would have happened without Ex-Im financing. Ex-Im does not report these limitations or fully detail the assumptions related to its data or methodology. GAO's Standards for Internal Control in the Federal Government states that, in addition to internal communication, management should ensure adequate communication with external stakeholders, which could include Congress and the public. Because of a lack of reporting on the assumptions and limitations of its methodology and data, congressional and public stakeholders may not fully understand what the jobs number that Ex-Im reports represents and the extent to which Ex-Im's financing may have affected U.S. employment. To ensure better understanding of its jobs calculation methodology, GAO recommends that Ex-Im improve reporting on the assumptions and limitations in the methodology and data used to calculate the number of jobs Ex-Im supports through its financing. Ex-Im agreed with the recommendation and stated that it would begin reporting more detailed information in its fiscal year 2013 annual report.
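The four-step methodology summarized above, including the fallback described earlier in the report (applying the average jobs ratio across industries when a transaction's NAICS code is missing), can be illustrated with a short sketch. All codes, ratios, and export values here are hypothetical.

```python
# Hypothetical BLS jobs ratios (jobs per $1 million of exports) by NAICS code.
jobs_ratios = {"3361": 5.0, "3254": 4.0, "3341": 3.0}

# Steps 1-2: hypothetical transactions, each tagged with an industry code
# (or None when the industry could not be determined, as can happen with
# short-term insurance) and an export value in $ millions.
transactions = [("3361", 200.0), ("3254", 50.0), (None, 10.0)]

# Fallback for a missing NAICS code: the average ratio across industries.
average_ratio = sum(jobs_ratios.values()) / len(jobs_ratios)

# Steps 3-4: multiply each export value by the industry jobs ratio
# (or the average ratio), then aggregate across all transactions.
total_jobs = sum(
    value * jobs_ratios.get(naics, average_ratio)
    for naics, value in transactions
)
print(total_jobs)  # 200*5.0 + 50*4.0 + 10*4.0 = 1240.0
```

Note that, as the report states, the resulting figure is a count of jobs (full-time, part-time, and seasonal alike), not full-time equivalents.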
ACI’s estimate of planned capital development costs is considerably larger than FAA’s because it reported a broader base of projects. According to FAA’s estimate, which includes only projects that are eligible for AIP grants, the total cost of airport development will be about $41 billion, or about $8.2 billion per year for 2007 through 2011. (See table 1.) ACI estimates total costs of about $78 billion, or about $15.6 billion per year, for the same period. These estimates differ mainly because ACI’s estimate includes all future projects that may or may not have an identified funding source or be eligible for federal funding and also because they are based on different estimating approaches. Projects that are eligible for AIP grants include runways, taxiways, and noise mitigation and reduction efforts; projects that are not eligible for AIP funding include parking garages, hangars, and expansions of commercial space in terminals. Several factors account for the differences between the FAA and ACI estimates of future development costs. The biggest difference stems from ACI’s inclusion of projects that are not eligible for AIP grants, while FAA’s estimate includes only AIP-eligible projects (see table 2). However, even when comparing just the AIP-eligible portions of the respective estimates, ACI’s estimate is 20 percent ($8 billion in total, or $1.6 billion annually) greater. This points to differences in how the two estimates are formed. One difference is the estimating approach. FAA’s estimates cover projects for every airport in the national system, while ACI surveyed the 100 largest airports (mostly large and medium hub airports) and then extrapolated a total based on cost-per-enplanement calculations for small, medium, and large hub airports that did not respond.
Further analysis on a project-by-project level shows variances related to three other factors: Definition—FAA data are based on planned project information taken from airport master plans and state system plans, minus projects that already have an identified funding source, while ACI includes all projects, whether funding has been identified or not. For example, ACI’s estimate for Washington Dulles airport includes $278 million for an automated people mover, but FAA’s estimate does not because it is being funded by a PFC approved in 2006. Measurement—FAA data include only the portion of a project that is eligible for AIP, while ACI estimates the total project cost. For a terminal construction project at Dulles International Airport, ACI estimated total construction costs of $1.6 billion; FAA did not report any amount because under AIP rules only a small portion ($20 million) was eligible for AIP funding and the airport had exhausted the AIP funds that could be used for this type of project. Timing—ACI and FAA estimated planned development costs for the same five-year period, but the estimates were made at different times—the ACI survey was completed in early 2007, while FAA’s estimate is based on information collected in early 2006. Further, the ACI estimate includes projects that FAA does not believe will be commissioned during the next 5 years. At Fort Lauderdale International Airport, for example, ACI reported a $700 million runway project but FAA reports less than $200 million for the same project. According to FAA, the remaining costs are beyond 2011. FAA and ACI estimates do not consider cost increases such as rising construction costs. Going forward, these costs may increase, especially construction costs, which have jumped 26 percent in 30 major U.S. cities over the past three years. Industry experts predict that rising construction costs will continue to increase project costs.
FAA acknowledges that its development estimates may not fully account for construction cost uncertainty and that annual cost increases are not captured. From 2001 to 2005, the 3,364 active airports that make up the national airport system received an average of about $13 billion per year for planned capital development from a variety of funding sources. These funds are used for both AIP-eligible and ineligible projects. The single largest source of these funds was bond proceeds, backed primarily by airport revenues, followed by AIP grants, PFCs, and state and local contributions (see table 3). The amount and source of funding vary with the size of airports. The nation’s 67 larger airports, which handled almost 90 percent of the passenger traffic in 2005, accounted for 72 percent of all funding ($9.4 billion annually), while the 3,297 other smaller commercial and general aviation airports that make up the rest of the national system accounted for the other 28 percent ($3.5 billion annually). As shown in figure 1, airports’ reliance on federal grants is inversely related to their size—federal grants contributed a little over $1.3 billion annually to larger airports (14 percent of their total funding) and $2.3 billion annually to smaller airports (64 percent of their total funding). Based on past funding levels, airports’ funding is about $1 billion per year less than estimated planned capital development costs. If the $13 billion annual average funding continued over the next 5 years and were applied only to AIP-eligible projects, it would cover all of the projects in FAA’s estimate. However, much of the funding available to airports is for AIP-ineligible projects that can attract private bond financing. We could not determine how much of this financing is directed to AIP-eligible versus ineligible projects.
Figure 2 compares the $13 billion average annual funding airports received from 2001 through 2005 (adjusted for inflation to 2006 dollars) with the $14 billion in annual planned development costs for 2007 through 2011. The $14 billion is the sum of FAA’s estimated AIP-eligible costs of $8.2 billion annually and ACI’s estimated ineligible costs of $5.8 billion annually. The overall difference of about $1 billion annually is not an absolute predictor of future funding shortfalls; both funding and planned development may change in the future. The difference between current funding and planned development costs for larger airports is about $600 million if both AIP-eligible and ineligible projects are considered. From 2001 through 2005, larger airports collected an average of about $9.4 billion a year for capital development, as compared to over $10 billion in annual planned development costs. Figure 3 shows the comparison of average annual funding versus planned development costs for larger airports. At $5.7 billion annually, the ineligible portion of costs is 57 percent of the total planned development costs. The difference between past funding and planned development costs for smaller airports is roughly $400 million annually. At smaller airports, average annual funding from 2001 through 2005 was about $3.6 billion a year (expressed in 2006 dollars). Annual planned development costs for smaller airports from 2007 through 2011 are estimated at about $4 billion. Figure 4 compares average annual funding to planned development costs. As the figure shows, the portion of smaller airports’ project costs not eligible for AIP funding is relatively small—about $75 million annually, or about 2 percent of total planned development costs. The financial health of airports is strong and has generally improved since September 11, 2001, especially for larger airports. Passenger traffic has rebounded to 2000 levels and bond ratings have improved.
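The overall funding-gap comparison above reduces to simple arithmetic on the report's rounded figures (in billions of dollars per year):

```python
# Rounded annual figures from the report, in billions of dollars per year.
faa_aip_eligible_costs = 8.2    # FAA estimate, AIP-eligible project costs
aci_aip_ineligible_costs = 5.8  # ACI estimate, AIP-ineligible project costs
average_annual_funding = 13.0   # average funding, 2001-2005, in 2006 dollars

# Total planned costs are the sum of the eligible and ineligible estimates;
# the gap is planned costs minus average past funding.
planned_costs = faa_aip_eligible_costs + aci_aip_ineligible_costs
annual_gap = planned_costs - average_annual_funding
print(f"planned ${round(planned_costs, 1)}B vs funding "
      f"${average_annual_funding}B -> gap about ${round(annual_gap, 1)}B/yr")
```

As the report notes, this roughly $1 billion annual difference is not an absolute predictor of future shortfalls, since both funding and planned development may change.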
Following September 11, many airports cut back on their costs and deferred capital projects. However, credit rating agencies and financial experts now agree that larger airports are generally financially strong and have ready access to capital markets. A good indicator of airports’ financial strength is the number and scale of underlying bond ratings provided by bond rating agencies. More bonds were rated in 2007 than in 2002, and more bonds are rated at the higher end of the rating scale in 2007, meaning that the rating agencies consider them less of a risk today. Furthermore, larger airports tended to have higher ratings than smaller airports. The administration’s reauthorization proposal for AIP would increase funding for larger airports, but its effect on smaller airports is uncertain because of the overall reduction in AIP and the proposed changes in how AIP grants are allocated between larger and smaller airports. The fiscal year 2008 budget proposal would reduce AIP funding from its past level of $3.5 billion in fiscal years 2006 and 2007 to $2.75 billion in 2008. The proposal also would eliminate entitlement, otherwise known as apportionment, grants for larger airports while increasing the PFC ceiling from $4.50 to $6 per passenger. While larger airports, which account for 90 percent of all passengers, would come out ahead, an increased PFC may not compensate smaller airports for the overall reduction in AIP, even with the proposed changes in how AIP is allocated between larger and smaller airports. As a separate issue, the administration’s reauthorization proposal would change the way that AIP and other FAA programs are funded and may not provide enough monies for these programs, even at the reduced levels proposed by the administration. The administration’s 2008 FAA reauthorization proposal would reduce AIP, change how AIP is allocated, and increase the PFC available to commercial airports. (Key changes in the proposal’s many elements are outlined in appendix I.)
Unlike previous reauthorization proposals, which made relatively modest changes in the structure of the AIP program, this proposal contains some fundamental changes in the funding and structure of the AIP program. Notably, following the pattern set by the 2000 FAA reauthorization, which required larger airports to return a certain percentage of their entitlement funding in exchange for an increase in the PFC, the administration proposes eliminating entitlement grants for larger airports altogether and at the same time allowing those airports to charge a higher PFC. The reauthorization proposal would eliminate some set-aside programs and increase the proportion of discretionary grant funds available to FAA at higher AIP funding levels. Table 4 compares AIP funding allocations under the current funding formulas to the proposed reauthorization allocations at both the current $3.5 billion level and at the proposed $2.75 billion level. Another change to the entitlement formulas, removing the funding trigger in current law that doubles the amount of entitlement funds airports receive if the overall AIP funding level is above $3.2 billion, is also intended to make more discretionary funding available. According to FAA officials, their objective is to increase the amount of discretionary funding for airports so that higher priority projects can be funded; however, that is only achieved when total AIP funds are greater than the $2.75 billion budgeted by the administration. For example, at $2.75 billion in AIP, the current law would generate $967 million in discretionary grants versus $866 million under the proposed reauthorization. This reverses at $3.5 billion in AIP funding, at which the proposal generates $1.328 billion in discretionary grants versus $845 million under current law. 
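The counterintuitive effect of the current-law doubling trigger, under which more total AIP can mean less discretionary funding, can be illustrated with a toy calculation. The entitlement base and set-aside amounts below are invented for illustration; only the $3.2 billion trigger threshold and the direction of the effect come from the testimony:

```python
# Toy illustration of the $3.2 billion "doubling trigger" in current law.
# The entitlement base and set-aside figures are hypothetical; only the
# trigger logic is drawn from the testimony.
def discretionary_current_law(total_aip, base_entitlements, set_asides):
    """Discretionary grants are what remains after entitlements and
    set-asides. Under current law, primary-airport entitlements double
    once total AIP funding exceeds $3.2 billion (all figures in billions)."""
    entitlements = base_entitlements * (2 if total_aip > 3.2 else 1)
    return total_aip - entitlements - set_asides

# With a hypothetical $1.2B entitlement base and $0.6B in set-asides:
low = discretionary_current_law(2.75, 1.2, 0.6)    # trigger off
high = discretionary_current_law(3.50, 1.2, 0.6)   # trigger on
print(round(low, 2), round(high, 2))  # 0.95 0.5 -- more AIP, less discretionary
```

This mirrors the pattern in the testimony, where current law yields $967 million in discretionary grants at $2.75 billion in AIP but only $845 million at $3.5 billion.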
The administration’s proposed reauthorization would allow airports to increase their PFC to a maximum of $6 and allow airports to use their collections for any airport projects while forgoing their entitlement funds. A $6 PFC could generate an additional $1.1 billion for larger airports that currently have a PFC in place, far exceeding the $247 million in entitlements that FAA estimates they would forgo under this reauthorization proposal (see table 5). However, the impact on smaller airports is uncertain because they collect far less in PFCs and are more reliant on AIP for funding. A change to a $6 PFC would yield an additional $110 million for small hub airports based on airports that currently have a PFC in place and $132 million if every one of the small hub airports had a $6 PFC. It is uncertain whether the proposed allocation of AIP under the administration’s proposal would shift a greater proportion of funds to smaller airports to compensate for the overall reduction in AIP. The reauthorization proposal would also relax project eligibility criteria to allow airports to use their collections in the same way as they use internally generated revenue, including for off-airport intermodal transportation projects. The application and review process would also be streamlined; as a result, FAA would no longer approve collections but rather ensure compliance with PFC and airport revenue rules. The administration’s proposal would modify the current pilot program on private ownership of airports in two key ways. First, the proposed modifications would expand eligibility beyond the current statutory limit of 5 airports to 15 airports. Restrictions limiting participation in the pilot program to specific airport size categories would also be eliminated. Second, the pilot program would be amended to eliminate the veto power that airlines can exercise under current law to prevent privatization transactions at commercial airports. 
Under current law, the sale of an airport to private interests may only proceed if a super-majority of the airlines at that airport approve of the sale or lease. Additionally, the airline veto power to prevent fee increases higher than inflation rates would be repealed. In place of these veto powers, the airport sponsor would need to demonstrate to the Secretary of Transportation that the airlines using that airport were consulted before the transaction proceeds. Congress established the Airport Privatization Pilot Program in October 1996 to determine if privatization could produce alternative sources of capital for airport development and provide benefits such as improvements in customer service. Congress also hoped to determine if new investment and capital from the private sector could be attracted through innovative financial arrangements. Proponents of privatization believe that privatizing airports can lead to capacity-increasing investment through the commitment of private capital, lower operating costs, and greater efficiency, and that privatization can increase customer satisfaction. Overall, there has been relatively little interest in the current pilot program. Six airports have applied for participation in the program, and three of those airports withdrew their applications in 2001. To date, Stewart International Airport, located in Newburgh, New York, is the only airport accepted into the pilot program. The airport received this exemption in March 2005, but is currently being purchased back by a public owner, the Port Authority of New York and New Jersey. In September 2006, the City of Chicago submitted a preliminary application for Chicago Midway International Airport. FAA completed its review of the Midway preliminary application and determined that it meets the procedural requirements for participation in the pilot program. 
Consequently, the City of Chicago can now proceed to select a private operator, negotiate an agreement, and submit a final application to FAA for exemption. In addition to concerns about the level and allocation of AIP funds, another concern is that the fuel tax revenues that the administration’s reauthorization proposal has designated to largely fund AIP after 2009 may not be as great as anticipated. Currently, AIP and other FAA programs are principally funded by the Airport and Airway Trust Fund (trust fund), which receives revenue from passenger ticket taxes and segment taxes, airline and general aviation fuel taxes, and other taxes. The administration’s reauthorization proposal would fund air traffic control through user fees for commercial aircraft and fuel taxes for general aviation while limiting the sources of revenue for the trust fund and its uses. Under the proposal, beginning in 2009, the trust fund would continue but only to fund three programs—AIP, Research, Engineering and Development (RE&D), and Essential Air Service (EAS)—and would be funded solely by an equal fuel tax on commercial and general aviation fuel purchases and an international arrival and departure tax. FAA officials confirmed for us that in estimating fuel tax revenues they did not take into account possible reductions in fuel purchases due to the increase in the tax rates. Although we do not know by how much such purchases would decline, conventional economic reasoning, supported by the opinions of industry stakeholders, suggests that some decline would take place. Therefore, the tax rate should be set taking into consideration effects on use and the resulting impact on revenue. FAA officials told us that they believe that these effects would be small because the increased tax burden is a small share of aircraft operating costs and therefore there was no need to take its impact into account. 
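The revenue concern described above can be illustrated with a simple constant-elasticity sketch: a static estimate that ignores any demand response to higher fuel taxes will overstate collections. The current tax rate, gallonage, and elasticity value below are hypothetical assumptions; only the 13.6 cents/gallon rate comes from the administration's proposal:

```python
# Illustration of the revenue-overstatement concern: if higher fuel taxes
# reduce fuel purchases, actual revenue falls short of a static estimate.
# The base rate, gallons, and elasticity are hypothetical assumptions;
# 13.6 cents/gallon is the rate in the administration's proposal.
def fuel_tax_revenue(rate, base_gallons, base_rate, elasticity):
    """Revenue with a constant-elasticity response of fuel purchases to the
    tax rate (a rough proxy for the tax-inclusive price change)."""
    gallons = base_gallons * (rate / base_rate) ** elasticity
    return rate * gallons

base_rate = 0.043                  # hypothetical current tax, $/gallon
new_rate = 0.136                   # proposed 13.6 cents/gallon
base_gallons = 1_000_000_000       # hypothetical annual purchases

static_estimate = new_rate * base_gallons        # assumes purchases unchanged
with_response = fuel_tax_revenue(new_rate, base_gallons, base_rate, -0.1)
print(f"static estimate overstates revenue by "
      f"${static_estimate - with_response:,.0f} per year")
```

Even a small elasticity produces a measurable shortfall relative to the static estimate, which is why the tax rate should be set with the demand response in mind.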
Representatives of general aviation, however, have said that the impact could be more substantial. If consumption falls short of projections or Congress appropriates more funds for AIP, RE&D, or EAS than currently proposed, then fuel tax rates and the international arrival and departure tax would correspondingly have to be increased, or additional funding from another source, such as the trust fund’s uncommitted balance or the General Fund, would be needed. In conclusion, Mr. Chairman, airports have rebounded financially from the September 2001 terrorist attacks. We expect the demand for air travel to continue to increase, the system capacity to be stretched, and airports to increase their demand for capital improvements to relieve congestion and improve their services. As Congress moves forward with reauthorizing FAA, it will have to decide on several key issues, including how it wants to fund and distribute grants under the AIP. While some elements of the administration’s proposal are to be commended—for example, simplifying the funding formulas and giving FAA more discretion to fund high priority projects—other parts of the proposal raise concerns. For example, the extent to which the administration’s proposed cuts in AIP funding will affect development at smaller airports is unclear. For further information on this statement, please contact Dr. Gerald Dillingham at (202) 512-2834 or DillinghamG@gao.gov. Individuals making key contributions to this testimony were Paul Aussendorf, Jay Cherlow, Jessica Evans, David Hooper, Nick Nadarski, Edward Laughlin, Minette Richardson, and Stan Stenersen. Current law: The trust fund for all capital programs is funded by an airline ticket tax, a segment tax, international departure and arrival taxes, varying rates of fuel taxes, and other taxes. Funding for AIP is appropriated from the trust fund. 
Proposal: The trust fund is funded by a fuel tax of 13.6 cents/gallon for commercial and general aviation and a reduced international arrival and departure tax. Funding for AIP is appropriated from the trust fund. If AIP is increased, the tax rates would have to be increased, the trust fund’s uncommitted balance would have to be drawn down, or another funding source would have to be found. Current law: Up to 75 percent of entitlements for large and medium hub airports collecting a PFC are turned back to the small airport fund. Proposal: Entitlements for large and medium hub airports are eliminated by 2010. Current law: If AIP is greater than $3.2 billion, primary airport entitlements are doubled. Proposal: The $3.2 billion trigger for doubling entitlements is eliminated except for small and nonhub primary airports. Current law: State apportionment is 20 percent of AIP (18.5 percent if AIP is less than $3.2 billion). Proposal: State apportionment is set at the greater of 10 percent of AIP or $300 million. Current law: Nonprimary airport entitlement of up to $150,000. Proposal: The nonprimary airport minimum entitlement of $150,000 per airport is eliminated and replaced by a tiered system of entitlements ranging from $400,000 for large general aviation airports to $100,000 for smaller general aviation airports; the 750 airports that have fewer than 10 operational and registered based aircraft are guaranteed nothing. Current law: Reliever and military airport set-asides, with minimum discretionary funding set at $148 million. Proposal: The set-aside for reliever and military airports is eliminated. Current law: The small airport fund is funded by large and medium hub airport PFC turnbacks of up to 75 percent of PFC collections; minimum discretionary funding is set at $520 million. Proposal: The small airport fund equals 20 percent of discretionary funds. Current law: Most types of airfield projects, excluding interest costs, nonrevenue producing terminal space and on-airport access project costs; general aviation airports may use their entitlement funds for some revenue producing activities (e.g., hangars). 
Proposal: Eligibility is expanded to include additional revenue producing aeronautical support facilities (e.g., self-service fuel pumps) at general aviation airports. Current law: The government share is set at 95 percent for smaller airports through 2007, and 75 percent for large and medium hub airports (80 percent for noise projects). Proposal: Eliminates the 95 percent government share except for the very smallest airports; the maximum share becomes a flexible amount with a maximum of 90 percent, and the maximum for airfield rehabilitation projects at large and medium hubs is lowered to 50 percent. Current law: The maximum PFC rate is $4.50 per passenger. Proposal: The maximum rate is $6 per passenger. Current law: All applications are subject to FAA review. Proposal: Review and approval are streamlined. Current law: PFCs can be used for all AIP-eligible projects, as well as for interest costs on airport bonds, terminal gates and related areas, and noise mitigation. Proposal: Eligibility is expanded to include almost any airport-related project, including off-airport intermodal projects; up to 10 large and medium hub airports willing to assume the cost of air navigation facilities are allowed a $7 PFC. Current law: Privatization pilot of up to five airports, one of each size, with strict limits on rates and charges, and requiring approval by 65 percent of airlines. Proposal: Up to 15 airports of any size, no limit on rates and charges, and no airline veto, but subject to DOT review and approval. To determine how much planned development would cost over the next 5 years, we obtained planned development data from the Federal Aviation Administration (FAA) and Airports Council International-North America (ACI). To determine how much airports of various sizes are spending on capital development and from which sources, we sought data on airports’ capital funding because comprehensive airport spending data are limited and because, over time, funding and spending should roughly equate. We obtained capital funding data from the FAA, ACI, the National Association of State Aviation Officials (NASAO), and Thomson Financial—a firm that tracks all municipal bonds. 
We screened each of these databases for their accuracy to ensure that airports were correctly classified and compared funding streams across databases where possible. We did not, however, audit how the databases were compiled or test their overall accuracy, except in the case of state grant data from the NASAO and some of the Thomson Financial bond data, which we independently confirmed. We determined the data to be sufficiently reliable for our purposes. We subtotaled each funding stream by year and airport category and added other funding streams to determine the total funding. We met with FAA, bond rating agencies, bond underwriters, airport financial consultants, and airport and airline industry associations and discussed the data and our conclusions to verify their reasonableness and accuracy. To determine whether current funding is sufficient to meet planned development for the 5-year period from 2007 through 2011 for each airport category and overall, we compared total funding to planned development. We correlated each funding stream with each airport’s size, as measured by activity, and with other funding streams to better understand airports’ varying reliance on them and the relationships among sources of finance. We then discussed our findings with FAA, bond rating agencies, bond underwriters, airport financial consultants, and airport and airline industry associations to determine how our findings compared with their knowledge and experiences. To determine some of the potential effects from changes to how airport development is funded under the administration’s proposed FAA reauthorization legislation, we first analyzed the suggested changes to the Airport Improvement Program’s (AIP) funding and allocation. In particular, we analyzed the effect of various funding levels on how the program funds would be allocated. 
Second, we evaluated the effects of raising the passenger facility charge (PFC) ceiling, as the administration proposal suggests, by estimating the potential PFC collections under a $6 PFC on the basis of 2005 enplanements and collection rates assuming all airports imposed a $6 PFC. Third, we determined the status of FAA’s pilot program for airport privatization. Moreover, we discussed the impact of all of the proposed changes (funding/allocation, $6 PFC, and privatization) with FAA, bond rating agencies, bond underwriters, airport financial consultants, and airport and airline industry associations. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
To address the strain on the aviation system, the Federal Aviation Administration (FAA) has proposed transitioning to the Next Generation Air Transportation System (NextGen). To finance this system and to make its costs to users more equitable, the administration has proposed fundamental changes in the way that FAA is financed. As part of the reauthorization, the administration proposes major changes in the way that grants through the Airport Improvement Program (AIP) are funded and allocated to the 3,400 airports in the national airport system. In response, GAO was asked for an update on current funding levels for airport development and the sufficiency of those levels to meet planned development costs. This testimony summarizes capital development estimates made by FAA and Airports Council International (ACI), the chief industry association; analyzes how much airports have received for capital development and whether this is sufficient to meet future planned development; and summarizes the effects of proposed changes in funding for airport development. This testimony is based on ongoing GAO work. Airport funding and planned development data are drawn from the best available sources and have been assessed for their reliability. This testimony does not contain recommendations. ACI's estimate for planned development costs is considerably larger than FAA's, reflecting a broader range of projects included as well as differences in when and how the estimates are made. For 2007 through 2011, FAA estimated annual planned capital development costs at $8.2 billion, while ACI estimated annual costs at $15.6 billion. The estimates differ primarily because FAA's estimate only includes projects that are eligible for AIP grants, while ACI's covers all projects, including $5.8 billion for projects not eligible for federal funding, such as parking garages. From 2001 through 2005, airports received an average of about $13 billion a year for planned capital development. 
This amount covers all types of projects, including those not eligible for federal grants. The primary source of this funding was bonds, which averaged almost $6.5 billion per year, followed by federal grants and passenger facility charges (PFC), which accounted for $3.6 billion and $2.2 billion, respectively (see figure below). If airports continue to attract this level of funding for planned capital development, this amount would annually fall about $1 billion short of the $14 billion in total planned development costs (the sum of FAA's estimated $8.2 billion in eligible costs and the industry's $5.8 billion in ineligible costs). Larger airports foresee a shortfall of about $600 million annually, while smaller airports foresee a shortfall of $400 million annually. FAA's reauthorization proposal would reduce the size of AIP by $750 million but increase the amount that airports can collect from PFCs. However, the benefit from increased PFCs would accrue mostly to larger airports and may not offset a reduced AIP grants program for smaller airports. The proposal would also change the way that AIP and other FAA programs are funded. The new fuel taxes that FAA has proposed may not provide the revenues for AIP that FAA anticipates.
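The PFC arithmetic behind these comparisons is straightforward: incremental collections are roughly enplanements times the increase in the per-passenger charge, scaled by the share of passengers from whom the charge is actually collected. A minimal sketch, with hypothetical enplanement and collection-rate figures rather than the 2005 data GAO used:

```python
# Sketch of the PFC estimation approach: potential collections under a $6
# PFC versus the current $4.50 ceiling. The enplanement and collection-rate
# figures below are hypothetical placeholders, not GAO's 2005 data.
def pfc_revenue(enplanements, rate_per_passenger, collection_rate):
    """Annual PFC collections: passengers boarded, times the per-passenger
    charge, times the share of passengers from whom it is collected."""
    return enplanements * rate_per_passenger * collection_rate

enplanements = 10_000_000     # hypothetical airport: 10M enplaned passengers
collection_rate = 0.85        # hypothetical share of enplanements that pay

current = pfc_revenue(enplanements, 4.50, collection_rate)
proposed = pfc_revenue(enplanements, 6.00, collection_rate)
print(f"additional collections at $6: ${proposed - current:,.0f}")
```

Applied across all airports with a PFC in place, this kind of calculation underlies the estimates that a $6 PFC would generate an additional $1.1 billion for larger airports but only about $110 million for small hubs.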
Community colleges serve almost 40 percent of undergraduate students in the United States. Because most community colleges have a commitment to open access admissions policies—allowing anyone to enroll in classes—their student populations often have varied needs. For example, community colleges have a long history of serving older and part-time students by offering affordable tuition, convenient locations, and flexible course schedules (see table 1). Among their many goals, community colleges aim to prepare students who will transfer to 4-year institutions, provide workforce development and skills training, and offer noncredit programs ranging from English as a second language to skills retraining. Upon enrollment, students typically take a placement test in reading, writing, and math so that community college administrators can assess their skill level. Depending on their performance on the test, students who are not considered college-ready in these subject areas are placed into developmental education courses. Based on their assessed skills, students could be placed in one developmental education course or several. Students placed in these courses will take longer to complete their certificates or degrees, and the courses generally do not qualify for college credit. While developmental education is a category of coursework and not a specific federal program, community colleges use a variety of federal funding sources, such as federal grants, to help fund their programs. Additionally, many community college developmental education students access federal student aid to pay for these and other classes. Generally, a student enrolled in developmental education courses is eligible for federal student aid for up to 1 academic year’s worth of courses in a program leading to a degree, credential, or certificate at an eligible institution. 
Education, the federal agency responsible for overseeing programs authorized under the Higher Education Act of 1965, as amended (HEA), provided approximately $2.2 billion in federal student aid to community college students in the 2007-2008 academic year. In the 2007-2008 school year, 36 percent of community college students who were enrolled in developmental education courses were receiving federal student aid. Education provides national statistics and conducts national research on various outcomes related to post-secondary education. All of the community college and state education officials we interviewed described strategies that they have used related to curriculum, placement, and working with high schools to help improve developmental education outcomes for students. Curriculum changes focused on efforts to shorten the total amount of time students spend in developmental education and to make developmental coursework relevant to a student’s career or academic area of study. Several officials and stakeholders with whom we spoke told us that, based on their experience, the longer students spend in developmental education, the less likely they are to move on to college-level classes. Additionally, these officials and stakeholders stated that they often observed that students who spent multiple semesters in developmental education dropped out, in large part because the students did not see the immediate benefits of the developmental coursework on their academic or career goals. Accelerating developmental coursework could also reduce financial costs for students, since they will potentially finish their coursework in a shorter period of time. 
Reducing the time spent in developmental education is particularly important given some of the recent changes to federal financial aid that shorten the amount of time certain aid is available to students. Lastly, most of the community college officials told us that initiatives related to better placement of students and working with high schools on preparing students can lead to less time in developmental education or perhaps prevent the need for it altogether. The following provides examples of strategies that are being implemented by some states and community colleges we visited: Shortening the time in developmental education: All of the states and community colleges we visited implemented a number of initiatives to shorten the amount of time students spent in developmental education. Nearly all of the community colleges we visited implemented initiatives that broke up developmental education classes that would otherwise last a full term into smaller, shorter component modules. Virginia officials described their statewide developmental math redesign as one that segmented classes into one-credit modules, requiring students to take only those modules that they needed based on the results of an assessment. Prior to the redesign, a single developmental education math course carried a credit load of four or five credits, which could have led to students taking coursework they did not need over a longer period of time. Another initiative used to shorten the amount of time students spend in developmental education involved compressing the developmental curriculum to allow students to complete more than one class in a single term. For example, two community colleges we visited offered fast-track math classes that allowed students to complete two classes in one semester. 
Additionally, officials in two states we visited told us that they had implemented a statewide curriculum combining developmental education reading and writing coursework, thus reducing a two-class requirement to one during a term. Lastly, several community colleges we visited had reexamined the developmental education content needed to prepare students for college-level classes in order to reduce the number of required courses. For example, one community college did this by eliminating the overlap between developmental math classes and the college-level classes, which led them to reduce the number of developmental education math classes in the sequence from five classes to three. Making coursework applicable to academic or career goals: One state and most of the community colleges we visited were making their developmental education coursework more applicable to students’ academic or career goals so that students could see the relevance of the developmental course content immediately, while also earning college credits. All of the community colleges we visited in Washington are integrating developmental education instruction into their college-level classes. Washington’s Integrated Basic Education and Skills Training (I-BEST) program places students directly into career and technical or college-level academic classes with two instructors: one to teach the subject matter and the other to teach developmental education in the context of the class. Another initiative used by a few of the community colleges we visited to make the coursework more relevant to students’ goals involved linking college-level and developmental education classes. For example, in one community college we visited, students can enroll in a college-level history or psychology class while concurrently taking a developmental reading class that integrates the content of the college-level class into its coursework. 
Lastly, several of the community colleges we visited offered alternative pathways for developmental math students because traditional developmental math prepares students for higher levels of college math they may not need for certain fields. In one community college, the developmental math coursework has several pathways for students: students in Science, Technology, Engineering, or Mathematics (STEM) fields could take a path that leads them to the types of math they need for their field, such as calculus, while students in the social sciences or liberal arts could take a path that leads them to different types of statistics courses that may be more relevant to their fields of study. Rethinking Placement: Community colleges we visited are changing how students are placed into developmental courses so that students might spend less time taking such courses. Several of the community colleges we visited are providing preparatory classes or online test preparation software to better prepare students and sharpen their skills for the placement test. A few community college officials told us that students may need only a quick refresher on material they have already mastered but may not have used in some time. With the refresher course, students could place into a higher-level developmental education class or be placed directly into college-level courses. Several officials told us that these refreshers provided by preparatory classes or online test preparation are especially helpful for students who have been out of an academic setting for an extended amount of time. Additionally, several community colleges we spoke with are also considering a student’s high school grades or grade point average when determining placement. 
For example, one community college we visited reviews students’ transcripts and uses students’ grades in specified math classes at local high schools—or the results of their placement test, whichever was higher—to determine their direct placement into a developmental or college-level math class. Preventing the need for developmental education: Most of the community colleges we visited partnered with local K-12 schools to align their curriculums to help ensure that students graduating from the local high schools were ready for college. For example, one Texas community college established vertical teams that brought together high school and community college faculty in science, math, and social studies to discuss students’ academic needs. In another example, Washington state officials told us that, starting in 2015, the state plans to offer a college assessment test in the 11th grade to identify and provide additional instruction to students who may have remediation needs so that when these students graduate, they will be ready for college. Researchers are reviewing some of the initiatives that community colleges are instituting to improve outcomes for developmental education students, but the evidence base is limited. One program that is showing early promise is Washington’s I-BEST. In a study conducted by the Community College Research Center (CCRC), an independent research organization housed at Columbia University’s Teachers College, I-BEST was regarded as an effective model for increasing the rate at which students enter and succeed in postsecondary career education overall. Additionally, a few community college officials told us they are planning to conduct evaluations of their initiatives in the future to understand the outcomes of their activities. 
However, according to a few stakeholders and a community college official we spoke with, there is limited information available on a national basis for community colleges to have confidence in the impacts of their initiatives. Most of the community colleges and other stakeholders with whom we spoke stated that more research is needed to determine if developmental education initiatives work. (See fig. 1.) Some stakeholders told us that additional research is needed to help community college officials understand the context in which community colleges and states are using developmental education models and how they are resolving issues, such as helping students transition into regular credit-bearing courses more quickly. Community college officials also expressed concerns with promoting some strategies without fully understanding the long-term outcomes, particularly on certain populations. For example, a few community college officials worried about the impact of using accelerated developmental education classes. These officials were concerned that the fast-paced nature of an accelerated program would increase a student’s risk of not completing a course or program. For example, part-time students enrolled in an accelerated program may have additional stress when trying to balance personal responsibilities, such as child care or work demands, while enrolled in an accelerated course and may end up dropping out of the college altogether. Additionally, another official told us that knowing how to scale up pilot initiatives was a challenge because initiatives that were successful with one population of students may not be successful with other students. Obtaining faculty support for unproven reforms was also cited by several community college and state officials as a challenge. 
Officials at one community college told us that it was difficult for staff to buy into changes to developmental education at their community college because there was not much training provided and initiatives were unproven. A literature review conducted by a stakeholder organization on acceleration strategies, for example, noted that faculty may resist working on reforms and that there is limited research to help "quell the skepticism." Recent literature also suggests that faculty support is a key factor in bringing effective practices to scale. Officials at one community college explained that new models of learning can be a radical change for some faculty and many find it difficult to change their teaching styles to adapt to the unproven curriculum. Officials at this community college also told us that some faculty members at their college are resistant and skeptical because they may have different philosophical views about how courses should be taught. To address these issues, officials in one state we visited created a task force that included community college and K-12 representatives and sought input from faculty, students, and staff. Additionally, they relied on the limited research available to help guide their discussions with faculty and make decisions about the redesign, all of which helped move the statewide redesign forward with little resistance. The Department of Education is taking steps to address some of the challenges cited by community colleges and states in improving developmental education by funding a new research center on this topic. Education officials confirmed that not enough information was available about successful developmental education strategies. Education officials further explained that most initiatives did not yet have sufficient data—2 years' worth of data or less—to determine what worked. In its Annual Performance Plan for Fiscal Year 2013, Education stated that one of its goals is enhancing the U.S. 
education system's ability to continuously improve through better and more widespread use of data, research and evaluation, transparency, innovation, and technology. In light of this goal and to help further community colleges' understanding of what works in developmental education, in May 2013, Education requested proposals for a National Research Center on Developmental Education Assessment and Instruction. Education plans for this research center to focus exclusively on developmental education assessment and instruction in order to help policy makers and practitioners improve student outcomes. The goals of the research center are (1) to convene policy makers, practitioners, and researchers interested in developmental education reform; (2) to identify promising reforms and support further innovations; (3) to conduct rigorous evaluations on the effectiveness and cost-effectiveness of models that have the potential to be expanded; and (4) to bolster efforts by states, colleges, and universities to bring effective developmental education reforms to scale. An Education official stated that the Department, through the Center's research, will first collect a nationwide inventory on what approaches are being used and then evaluate different approaches to teaching developmental education. The Center could address the research needs cited by community college and state officials to improve developmental education and help administrators with obtaining faculty buy-in. The research center is expected to launch in 2014. Meeting the national goal of increasing the rates of attainment of postsecondary degrees and certificates may be hampered by the significant numbers of students who enter developmental education and fail to move toward that outcome. Community colleges and states are initiating new strategies to address this problem, but the limited research available to them on what strategies work and for whom is proving challenging. 
Education’s research center will serve as a much needed resource for community colleges and states as they continue to experiment with new strategies, but only if it is successful in uncovering what works and helping colleges to put into practice what the Center learns through its research. Otherwise, community college students entering developmental education will continue to face hurdles in reaching their goals. We provided a draft of this report to the Department of Education for review and comment. Education provided technical comments, which we incorporated into the report as appropriate. We are sending copies of this report to appropriate congressional committees, the Secretary of Education, and other interested parties. In addition, this report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (617) 788-0534 or emreyarrasm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. The objectives of this report were to determine (1) what strategies select states and community colleges are using to improve developmental education for community college students and (2) what challenges, if any, community colleges have identified while implementing these developmental education strategies. To address the first objective, we interviewed nonprofit stakeholders with knowledge of community college issues. We conducted site visits to Texas, Virginia, and Washington, which had been identified by experts as doing innovative work in improving developmental education. They also represent regional diversity. While on those site visits, we interviewed officials from 10 community colleges as well as representatives from each state’s education office. 
We also visited a community college in California, which brought the total number of schools to 11. These 11 community colleges were identified by state officials and our own research as colleges that were implementing changes to developmental education. Since school officials were selected based on their school’s participation in developmental education reform efforts, the high incidence of these initiatives among the interviewed schools should not be interpreted as an indicator of the incidence of such programs among community colleges broadly. In addition, we conducted a group interview with community college officials and other knowledgeable stakeholders—who were identified by the conference sponsors as being knowledgeable about developmental education—at a national conference focused on reforming community college student outcomes. (See table 2 for a full list of stakeholders, state offices, and community colleges we interviewed individually and as part of our group interview.) Additionally, we reviewed selected literature on the topic. To address the second objective, in addition to the information gathered in the interviews and literature review addressed above, we interviewed officials at the Department of Education. The officials were from the following offices within the Department of Education: the Office of Vocational and Adult Education; the Office of Federal Student Aid; the National Center for Education Statistics; and the Office of Planning, Evaluation, and Policy Development. Additionally, we reviewed pertinent agency documents, including budget proposals, Requests for Application, and a list of Education’s current initiatives for community colleges, as well as relevant laws, regulations, and guidance. Given that we were examining strategies of a few selected states and schools, we do not intend for the options and challenges identified by the stakeholders, state, or community college officials we interviewed to be an exhaustive list. 
In addition, we did not assess or evaluate the initiatives that were proposed to improve developmental education, nor do we necessarily recommend any such initiatives. We use indefinite quantifiers when describing the number of stakeholders or community colleges whose representatives mentioned the topic referenced in the respective sentence. In using the indefinite quantifiers, we are only including the 11 community colleges we visited directly as part of our site visits and the 11 stakeholder organizations whose representatives we spoke with individually. The community colleges or stakeholders referenced in the indefinite quantifiers were not part of our group interview. The indefinite quantifier categories are listed in table 3. We conducted this performance audit from August 2012 to August 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual above, Janet Mascia (Assistant Director), David Reed, Vernette Shaw, and Anjali Tekchandani made significant contributions to this report. Kirsten Lauber, Jessica Botsford, Deborah Bland, and Holly Dye also contributed to this report.
Education reported that approximately 42 percent of entering community college students were not sufficiently prepared for college-level courses and enrolled in at least one developmental education course. Researchers also estimate that fewer than 25 percent of developmental education students will complete a degree or certificate. Improving developmental education is key to increasing degree and certificate completion. Some community colleges and states are instituting various initiatives to improve the outcomes of students placed into developmental education. GAO was asked to examine current developmental education efforts. This report addresses the following questions: (1) What strategies are selected states and community colleges using to improve developmental education for community college students; and (2) what challenges, if any, have community colleges identified while implementing these developmental education strategies? GAO conducted site visits to community colleges and state education offices in Texas, Virginia, and Washington, which were identified by experts and the literature as states initiating innovative changes in developmental education coursework. GAO interviewed Education officials, as well as stakeholders from non-profit and research organizations focused on community college issues. In addition, GAO reviewed relevant laws, regulations, and guidance. States and community colleges GAO visited have implemented several strategies to improve developmental education--which is remedial coursework in math, reading, or writing for students who are assessed not to be ready for college-level classes. Many initiatives involved shortening the amount of time for developmental education and better targeting material to an individual student's needs. For example, two community colleges have implemented fast track classes that enable students to take two classes in one semester instead of in two semesters. 
One developmental education program in Washington places students directly into college-level classes that also teach developmental education as part of the class. Community colleges are also using tools such as test preparatory classes to help students prepare for placement tests that determine if they will need to take developmental education courses. According to community college officials GAO spoke with, these classes help familiarize students with prior coursework and, in some cases, help them place directly into college-level courses. Additionally, most community colleges GAO visited have worked to align their curriculum with local high schools so that graduating seniors are ready for college. Little research has been published on these developmental education initiatives and whether they are leading to successful outcomes. Most community college officials with whom GAO spoke noted that the limited availability of research in this area is a challenge to implementing strategies to improve developmental education programs. Specifically, they noted that it is difficult to determine whether new programs are working, and to gain faculty support for unproven models of teaching. Department of Education (Education) officials confirmed that research regarding successful developmental education strategies is insufficient. In response, Education has announced the availability of grant funds for a National Research Center on Developmental Education Assessment and Instruction. The Center will focus exclusively on developmental education assessment and instruction to inform policymakers and instructors on improving student outcomes. The Center is expected to launch in 2014. GAO is making no recommendations in this report.
In support of the President’s annual budget request for VA health care services, which includes a request for advance appropriations, VA develops a budget estimate of the resources needed to provide such services for 2 fiscal years. Typically, VHA starts to develop a health care budget estimate approximately 10 months before the President submits the budget request to Congress in February. This is approximately 18 months before the start of the fiscal year to which the request relates and about 30 months prior to the start of the fiscal year to which the advance appropriations request relates. VA’s health care budget estimate includes estimates of the total cost of providing health care services as well as costs associated with management, administration, and maintenance of facilities. VA develops most of its budget estimate for health care services using the Enrollee Health Care Projection Model. VA uses other methods to estimate needed resources for long-term care, other services, and health-care-related initiatives proposed by the Secretary of Veterans Affairs or the President. After determining the amount of VA’s appropriations, Congress provides VA resources for health care through three accounts: Medical Services, which funds health care services provided to eligible veterans and beneficiaries in VA’s medical centers, outpatient clinic facilities, contract hospitals, state homes, and outpatient programs on a fee basis; Medical Facilities, which funds the operation and maintenance of the VA health care system’s capital infrastructure, including costs associated with NRM and non-NRM activities, such as utilities, facility repair, laundry services, and grounds keeping; and Medical Support and Compliance, which funds the management and administration of the VA health care system, including financial management, human resources, and logistics. 
VA allocates most of its health care resources for these three accounts through VERA—a national, formula-driven system—at the beginning of each fiscal year and allocates additional resources throughout the year. VA allocates about 80 percent of the health care appropriations to its 21 health care networks through VERA. VA uses methods other than VERA to allocate the remaining resources to networks and medical centers for such programs as prosthetics, homeless grants, and state nursing homes. VA may also use methods other than VERA to allocate any additional resources it may receive from Congress during the year. The networks in turn allocate resources received through VERA and other methods to their respective medical facilities, as part of their role in overseeing all medical facilities within their networks. In addition to amounts allocated to networks and medical facilities at the beginning of the fiscal year, VA also sets aside resources from each of VA’s three health care appropriations accounts—in what is known as a national reserve—so that resources are available for contingencies that may arise during the year. In general, VA allocates resources from the national reserve to match network spending needs for each appropriations account. Within each appropriations account, VA also has flexibility as to how the resources are used. For example, within the Medical Services account, VA has the authority to use resources for outpatient services instead of hospital services, should the demand for hospital services be lower than expected and demand for outpatient services be higher. In a similar manner, VA has the authority to use resources in the Medical Facilities account for NRM instead of non-NRM activities—such as utilities—should spending for those activities be less than estimated. In June 2012, we reported that VA’s NRM spending has consistently exceeded the estimates reported in VA’s budget justifications from fiscal years 2006 to 2011. 
This pattern continued in fiscal year 2012 when VA spent about $1.5 billion for NRM, which was $622 million more than estimated. (See fig. 1.) To help inform its budget estimates for NRM, VA collects information on facility repair and maintenance needs as part of an ongoing process to evaluate the condition of its medical facilities. VA conducts facility condition assessments (FCA) at each of its medical facilities at least once every 3 years. VA uses contractors to conduct FCAs, and these contractors are responsible for inspecting all major systems (e.g., structural, mechanical, plumbing, and others) and assigning each a grade of A (for a system in like-new condition) through F (for a system in critical condition that requires immediate attention). As part of this assessment, the contractors use an industry cost database to estimate the correction costs for each system graded D or F. According to VA officials, the agency's reported NRM backlog represents the total cost of correcting these FCA-identified deficiencies. Our analysis of data for fiscal years 2006 through 2012 found that in each of these years VA had higher than estimated resources available in its Medical Facilities account, which VA used to increase NRM spending by about $4.9 billion. These resources derived from two sources: (1) lower than estimated non-NRM spending, which made more resources available for NRM, and (2) higher than estimated budget resources, which included annual appropriations, supplemental appropriations, reimbursements, transfers, and unobligated balances. As figure 2 shows, after fiscal year 2008, lower than estimated spending on non-NRM activities accounted for most of VA's spending on NRM that exceeded VA's budget estimates. Lower than estimated non-NRM spending. 
VA spent fewer resources from the Medical Facilities account on non-NRM activities than it estimated, which allowed the agency to spend over $2.5 billion more on NRM than it originally estimated in fiscal years 2009 through 2012. When we asked why VA spent more on NRM projects than estimated, VA officials said one reason was that the agency spent less than it estimated on non-NRM activities and that the most practical use of these unspent resources was to increase spending on NRM because of the large backlog of FCA-identified deficiencies. VA officials further explained that VA spent less for non-NRM activities than anticipated because mild weather patterns during the last four winters decreased the demand for utilities and other weather-dependent non-NRM activities. However, lower spending on these weather-dependent activities only accounted for $460 million—18 percent—of the resources eventually used for NRM. The remaining 82 percent eventually used for NRM came from resources originally intended to be used for various other activities, including administrative functions and rent. VA has consistently overestimated spending for these non-NRM activities, and if the agency continues to determine estimates for such activities in the same way, its future budget estimates of spending for non-NRM activities may not be reliable. Higher than estimated budget resources. VA had more budget resources available in its Medical Facilities account than the agency estimated it would have, and this allowed VA to spend over $2.3 billion more on NRM than it originally estimated. When we asked why VA spent more on NRM projects than estimated, VA officials said that in addition to spending less on non-NRM activities the agency also received higher annual appropriations than requested and unanticipated supplemental appropriations from Congress. 
For example, in fiscal year 2009 VA received $300 million more than it requested in annual appropriations as well as $1 billion in supplemental appropriations included in the American Recovery and Reinvestment Act of 2009 (Recovery Act). VA also received $550 million in supplemental appropriations as part of the U.S. Troop Readiness, Veterans' Care, Katrina Recovery, and Iraq Accountability Appropriations Act, 2007. These supplemental appropriations were specifically for NRM. In addition to higher annual appropriations and supplemental appropriations, we also found that VA used other budget resources to increase NRM spending. The budget resources included transfers from VA's other appropriations accounts, reimbursements for services provided under service agreements with the Department of Defense, and unobligated balances carried over from prior fiscal years. While, according to VA officials, the agency did not track the use of specific resources used to increase NRM spending, data provided by VA suggests that more than $1.8 billion from higher than requested appropriations and about $762 million from other budget resources were available for this spending in fiscal years 2006 through 2012. VA used VERA to perform an initial allocation of resources for NRM at the beginning of each fiscal year for fiscal years 2006 through 2012, allocating a total of about $4.6 billion over this time frame. In addition, for fiscal years 2009 through 2012, VA allocated $2.9 billion in total for NRM from higher than requested appropriations and its reserve account. Figure 3 shows the nearly $7.5 billion allocated for NRM using VERA and other methods, which included network estimated costs to maintain medical facilities in good working condition—that is, for sustainment—and costs to address the NRM backlog of FCA-identified deficiencies for fiscal years 2006 through 2012. 
Over the course of allocating about $4.6 billion for NRM using VERA between fiscal years 2006 and 2012, VA changed VERA's NRM allocation formula from one based primarily on patient workload in the networks to one that primarily considers both sustainment of buildings and the NRM backlog in each network. Prior to fiscal year 2009, VA used VERA to allocate nearly $1.5 billion of NRM resources on the basis of patient workload, adjusted for the cost of construction. Under this formula, networks that treated the largest number of patients received the most resources for NRM, according to VA officials. Beginning in fiscal year 2009, VA used VERA to allocate resources for NRM primarily based on each network's estimated cost for sustainment and the cost for addressing the NRM backlog. VA used VERA to allocate about $2.6 billion from fiscal year 2009 through 2012—which according to VA officials resulted in more resources being allocated to networks with a higher proportion of more expensive building space. In addition, since fiscal year 2009, VA has also used VERA to allocate about $512 million to NRM projects that improve access or provide accommodations for specific health care services, such as research, women's health, and mental health, and certain other NRM projects. In fiscal years 2009 and 2010, VA allocated more than $1.4 billion of higher than requested Medical Facilities appropriations using methods other than VERA. Over the course of both fiscal years, VA allocated $1 billion of supplemental appropriations included in the Recovery Act mostly on the basis of each network's estimated cost of addressing FCA-identified deficiencies, according to VA officials. In both fiscal years, Congress also provided higher appropriations in the Medical Facilities account than the President requested, in part, to fund additional NRM projects. In providing these higher appropriations for NRM, Congress required VA to allocate a specific amount using methods other than VERA. 
In fiscal year 2009, VA allocated $300 million based on each network's estimated cost of addressing the NRM backlog of FCA-identified deficiencies, and in fiscal year 2010 it made a similar allocation on the basis of networks' estimated sustainment costs and their cost to address the NRM backlog. (See Pub. L. No. 110-329, div. E, tit. II, 122 Stat. 3574, 3705 (2008).) VA also allocated resources for NRM from its national reserve; Medical Facilities appropriations in the reserve that are not used for non-NRM purposes are available for NRM. The Under Secretary for Health determines whether funds in the reserve are available and recommends allocations of those funds to the Secretary of Veterans Affairs, who approves the allocations. VA officials explained that allocations of funds from the reserve for NRM were typically based on sustainment costs as well as the cost of addressing FCA-identified deficiencies and other VA NRM priorities, such as VA's energy investment "Green" initiatives. These allocations are also subject to the networks' ability to award the projects and obligate the additional funds prior to their expiration. In anticipation of the availability of such resources, networks typically identify in advance projects that can be implemented if additional funds become available, according to VA officials. Officials explained further that the networks do this to better address the NRM backlog. VA relied on its networks to prioritize all NRM spending until centralizing this process for more costly projects in fiscal year 2012. NRM projects VA funded were generally consistent with VA priorities. For fiscal years 2006 through 2011, VA relied on its networks to prioritize projects for NRM spending. Each fiscal year, networks provided VA headquarters with a list of prioritized NRM projects, known as NRM operating plans. According to officials from headquarters and the two selected networks, NRM operating plans represented all of the NRM projects that a network plans to fund and carry out in a given year. 
VA officials told us that to prioritize NRM projects during this period, the networks used oral guidance communicated during management meetings with VA headquarters that encouraged the networks to prioritize projects addressing critical FCA-identified deficiencies and sustainment. Beginning in fiscal year 2012, VA changed its process for prioritizing more costly NRM projects. Specifically, VA headquarters assumed responsibility for prioritizing these NRM projects as part of VA's newly established Strategic Capital Investment Planning process, known as SCIP. Through SCIP, VA headquarters evaluates these more costly NRM projects and other types of capital investment projects using a set of weighted criteria in order to develop a list of prioritized projects to guide the agency's capital planning decisions. For fiscal year 2012, the threshold for including NRM projects in this centralized prioritization process was $1 million. VA used this process to identify 190 projects as the agency's highest NRM priorities for fiscal year 2012. Under SCIP, VA prioritizes NRM projects based on the extent to which they meet the following six criteria: 1. improve the safety and security of VA facilities by mitigating potential damage to buildings facing the risk of damage from natural disaster, improving compliance with safety and security laws and regulations, and ensuring that VA can provide service in the wake of a catastrophic event; 2. address selected key major initiatives and supporting initiatives identified in VA's strategic plan; 3. address existing deficiencies in its facilities that negatively affect the delivery of services and benefits to veterans; 4. reduce the time and distance a veteran has to travel to receive services and benefits, increase the number of veterans utilizing VA's services, and improve the services provided; 5. right-size VA's inventory by building new space, converting underutilized space, or reducing excess space; and 6. 
ensure cost-effectiveness and the reduction of operating costs for new capital investments. While VA uses SCIP to prioritize more costly NRM projects, the networks remain responsible for prioritizing all other or "below-threshold" NRM projects. However, VA has not provided its networks with written policies on how to prioritize these projects. According to a VA official, in fiscal year 2012, below-threshold projects accounted for over $625 million or 42 percent of VA's NRM spending. Instead of providing written guidance, VA officials have orally encouraged the networks to apply the same criteria included in SCIP when prioritizing below-threshold NRM projects. VA's lack of written policies for prioritizing below-threshold NRM projects is inconsistent with federal internal control standards, which specify that agency policies should be documented and that all documentation should be properly managed and maintained. Without written policies that clearly document VA's guidance to networks for prioritizing these less costly NRM projects, there is an increased risk that networks may not apply, or may inconsistently apply, the criteria included in SCIP. Our review of VA data shows that for fiscal years 2006 through 2011 the majority of the NRM projects that were funded by the networks were projects that the networks had prioritized in their operating plans. Specifically, in each year during this period, at least 85 percent of the NRM projects the networks funded were listed in the networks' operating plans. For example, of the 2,905 NRM projects that networks funded in fiscal year 2011, over 2,400 projects (85 percent) were listed on the operating plans. When asked about funded projects that were not listed on networks' operating plans, VA officials told us that networks may fund NRM projects in response to emerging needs during the course of the year. 
For fiscal year 2012, our analysis of VA data also shows that NRM projects funded that year were generally consistent with projects prioritized using SCIP and those prioritized by the networks in their operating plans. Specifically, in fiscal year 2012, 189 NRM projects that were prioritized through the SCIP process received funding. Moreover, as figure 4 shows, of the 1,909 NRM projects that were funded by the networks outside of the SCIP process in fiscal year 2012, 1,668 (87 percent) were listed on the networks' 2012 operating plans. This consistency notwithstanding, because VA has not provided its networks with written policies for prioritizing below-threshold projects, the agency faces an ongoing risk that NRM projects could be funded in a manner inconsistent with the SCIP criteria. Officials at VA headquarters have taken several steps in recent years to better monitor NRM spending to ensure that funded projects were consistent with the agency's priorities. In fiscal years 2009 and 2010, in compliance with congressional requirements, VA tracked and reported spending on NRM projects that used funding provided through the Recovery Act. Recognizing the value of such monitoring, VA headquarters officials decided to expand these efforts by tracking NRM spending by project on a monthly basis. Since fiscal year 2011, VA has used what it calls its capital assets database to manage and monitor NRM spending on a monthly basis. As part of these efforts, VA has instructed its project managers to update the information on each project on a monthly basis and review tracking reports to ensure that spending for each project is within its estimated cost. VA officials told us that there are new efforts under way to improve the data reliability of the capital assets database and to incorporate its tracking reports into the SCIP process. 
Our work shows that VA has consistently spent more on NRM than estimated because of the availability of higher than estimated resources in its Medical Facilities account. These additional resources derived from lower than estimated spending for non-NRM activities and higher than requested appropriations. Further, our work shows that spending for administrative functions, utilities, and rent accounted for most of the lower than estimated non-NRM spending in recent years. Thus, given the overestimates for these activities, VA’s future budget estimates for non-NRM activities in its budget justification may not be reliable if the agency continues to determine its estimates in the same way.

VA has taken important steps in establishing a centralized process for prioritizing more costly NRM projects through SCIP, and during the period we reviewed, VA’s funded NRM projects were generally consistent with agency priorities. However, VA does not have reasonable assurance that spending on NRM will be consistent with criteria included in SCIP. Our work shows that while networks remain responsible for prioritizing below-threshold NRM projects, VA has not provided its networks with written policies for prioritizing these less costly NRM projects. Spending on these projects is not insignificant: in fiscal year 2012, spending on projects below the threshold was over $625 million, or 42 percent of VA’s spending on NRM. Without written policies that clearly document VA’s guidance to networks for prioritizing below-threshold NRM projects, VA faces a continued risk that its networks may not apply, or may inconsistently apply, the criteria included in SCIP when funding these projects. 
We recommend that the Secretary of Veterans Affairs take the following actions:

- To improve the reliability of information presented in VA’s congressional budget justifications that support the President’s budget request for VA health care, determine why recent justifications have overestimated spending for non-NRM activities and incorporate the results to improve future budget estimates for such activities.
- To provide reasonable assurance that VA’s networks prioritize NRM spending consistent with VA’s overall NRM priorities, establish written policies for its networks for applying SCIP criteria when prioritizing the funding of NRM projects that are below the threshold for inclusion in VA’s centralized prioritization process.

We provided a draft of this report to the Secretary of Veterans Affairs for comment. In the agency’s comments—reprinted in appendix I—VA concurred with both of our recommendations. In concurring with our first recommendation regarding improvements needed in its estimates for non-NRM activities, VA noted that the budget formulation process has been modified to include a better synchronization of events that play a significant role in the overestimated spending for non-NRM activities. VA stated that this modification has been incorporated in the fiscal year 2014 President’s budget. In concurring with our second recommendation regarding written guidance on the application of SCIP criteria to prioritization of below-threshold NRM projects, VA noted that the NRM handbook and related guidance will be updated to direct facilities and networks to apply SCIP criteria when prioritizing below-threshold NRM projects. In addition, networks’ Capital Asset Managers, who are responsible for monitoring and evaluating each network’s NRM program, will be required to review below-threshold NRM projects included in a network’s operating plan. 
VHA’s Office of Capital Asset Management Engineering and Support will also review networks’ operating plans to ensure compliance.

We are sending copies of this report to the Secretary of Veterans Affairs and appropriate congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or williamsonr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs are on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. In addition to the contact named above, James C. Musselwhite, Assistant Director; Krister Friday; Aaron Holling; Lisa Motley; and Said Sariolghalam made key contributions to this report.

Veterans’ Health Care Budget: Better Labeling of Services and More Detailed Information Could Improve the Congressional Budget Justification. GAO-12-908. Washington, D.C.: September 18, 2012.

Veterans’ Health Care Budget: Transparency and Reliability of Some Estimates Supporting President’s Request Could Be Improved. GAO-12-689. Washington, D.C.: June 11, 2012.

VA Health Care: Estimates of Available Resources Compared with Actual Amounts. GAO-12-383R. Washington, D.C.: March 20, 2012.

Veterans Affairs: Issues Related to Real Property Realignment and Future Health Care Costs. GAO-11-877T. Washington, D.C.: July 27, 2011.

Veterans’ Health Care Budget: Changes Were Made in Developing the President’s Budget Request for Fiscal Years 2012 and 2013. GAO-11-622. Washington, D.C.: June 14, 2011.

VA Health Care: Need for More Transparency in New Resource Allocation Process and for Written Policies on Monitoring Resources. GAO-11-426. Washington, D.C.: April 29, 2011.

VA Real Property: Realignment Progressing, but Greater Transparency about Future Priorities Is Needed. GAO-11-521T. Washington, D.C.: April 5, 2011.

VA Real Property: Realignment Progressing, but Greater Transparency about Future Priorities Is Needed. GAO-11-197. Washington, D.C.: January 31, 2011.

Veterans’ Health Care: VA Uses a Projection Model to Develop Most of Its Budget Estimate to Inform President’s Budget Request. GAO-11-205. Washington, D.C.: January 28, 2011.

VA Health Care: Overview of VA’s Capital Asset Management. GAO-09-686T. Washington, D.C.: June 9, 2009.

VA Health Care: Challenges in Budget Formulation and Issues Surrounding the Proposal for Advance Appropriations. GAO-09-664T. Washington, D.C.: April 29, 2009.

VA Health Care: Challenges in Budget Formulation and Execution. GAO-09-459T. Washington, D.C.: March 12, 2009.

VA Health Care: Long-Term Care Strategic Planning and Budgeting Need Improvement. GAO-09-145. Washington, D.C.: January 23, 2009.

VA Health Care: VA Should Better Monitor Implementation and Impact of Capital Asset Alignment Decisions. GAO-07-408. Washington, D.C.: March 21, 2007.

VA Health Care: Budget Formulation and Reporting on Budget Execution Need Improvement. GAO-06-958. Washington, D.C.: September 20, 2006.

VA Health Care: Preliminary Findings on the Department of Veterans Affairs Health Care Budget Formulation for Fiscal Years 2005 and 2006. GAO-06-430R. Washington, D.C.: February 6, 2006.
VA operates about 1,000 medical facilities--such as hospitals and outpatient clinics--that provide services to more than 6 million patients annually. The operation and maintenance of its facilities, including NRM, is funded from VA's Medical Facilities appropriations account, one of three accounts through which Congress provides resources for VA health care services. In prior work, GAO found that VA's spending on NRM has consistently exceeded its estimates. GAO recommended that VA ensure that its NRM estimates fully account for this long-standing pattern, and VA agreed to implement this recommendation. GAO was asked to conduct additional work on NRM spending. In this report, GAO examines, for fiscal years 2006 through 2012, (1) what accounted for the pattern of NRM spending exceeding VA's budget estimates; (2) VA's allocation of resources for NRM to its health care networks; and (3) VA's process for prioritizing NRM spending and the extent to which NRM spending was consistent with these priorities. GAO reviewed VA's budget justifications and VA data and interviewed officials from headquarters and selected networks. During fiscal years 2006 through 2012, the Department of Veterans Affairs (VA) had higher than estimated resources available for facility maintenance and improvement--referred to as non-recurring maintenance (NRM); these resources accounted for the $4.9 billion in VA's NRM spending that exceeded budget estimates. The additional resources came from two sources. First, VA spent less than it estimated on non-NRM, facility-related activities such as administrative functions, utilities, and rent, which allowed VA to spend over $2.5 billion more than originally estimated. Lower spending for administrative functions, utilities, and rent accounted for most of the resources estimated but not spent on non-NRM activities. 
Given that VA has consistently overestimated the costs of such activities in recent years, VA's budget estimates for its non-NRM activities may not be reliable. Second, more than $2.3 billion of the higher than estimated spending on NRM can be attributed to VA having higher than estimated budget resources available. In some years VA received higher appropriations from Congress than requested and supplemental appropriations for NRM--such as those included in the American Recovery and Reinvestment Act of 2009. The additional budget resources VA used for NRM also included transfers of funds from the agency's appropriations account that funds health care services. VA allocated about $7.5 billion in resources for NRM to its 21 health care networks from fiscal year 2006 through fiscal year 2012. VA allocated about $4.6 billion of these resources at the beginning of each fiscal year through the Veterans Equitable Resource Allocation--its national, formula-driven system. In addition, VA allocated $2.9 billion during this period from higher than requested annual appropriations and its national reserve account, which is maintained to address contingencies that may develop each fiscal year. In anticipation of such resources, networks typically identify projects that can be implemented if additional funds become available. VA officials told us that they do this to better address the backlog of identified building deficiencies most recently estimated to cost over $9 billion. To prioritize NRM spending more centrally, VA established a new process for projects above a minimum threshold, and from fiscal years 2006 through 2012 spending on NRM was generally consistent with VA priorities. Prior to fiscal year 2012, VA provided oral guidance to networks for prioritizing NRM spending and relied on its 21 health care networks to prioritize NRM projects to maintain medical facilities in good working condition and address deficiencies. 
Beginning in fiscal year 2012, as part of VA's Strategic Capital Investment Planning (SCIP) process, VA headquarters assumed responsibility for prioritizing more costly NRM projects using a set of weighted criteria. For fiscal year 2012, the threshold for NRM projects to be included in this centralized process was $1 million, while networks remain responsible for prioritizing "below-threshold" NRM projects. NRM spending during fiscal years 2006 through 2012 was generally consistent with VA priorities: at least 85 percent of the projects funded in each year were identified by networks as priorities. However, VA has not provided written policies for networks on how to apply SCIP criteria to below-threshold projects, which represented over 40 percent of VA's fiscal year 2012 NRM spending. Without such written policies, VA does not have reasonable assurance that network spending for below-threshold NRM projects will be consistent with SCIP criteria. GAO recommends that VA determine why it has overestimated spending for non-NRM and use the results to improve future, non-NRM budget estimates. GAO also recommends that VA provide networks with written guidance for prioritizing below-threshold NRM projects. VA concurred with GAO's recommendations.
A 401(k) plan provides eligible plan participants the opportunity to choose to contribute a portion of their earnings, commonly called elective contributions, to their own individual account in a retirement plan. These contributions may be taken out of an employee’s salary before taxes. Some employees affirmatively enroll in their plan and elect how much of their pay they want to contribute. These employee contributions can be a set dollar amount or a percentage of pay, within annual contribution limits set by the IRS. Some employees are automatically enrolled in employers’ plans, with their contributions set at a default rate, though they can opt to adjust the contribution level later. Under federal law, an employee’s own contributions and any returns on those contributions always belong to the employee and are not forfeitable to the plan if they leave their employer. An employer may also contribute to a participant’s account, though not every plan includes an employer contribution. Generally, employers’ contributions to participants’ 401(k) accounts are voluntary, though once incorporated into plan documents contributions must be made as described. Unless the plan is a “Safe Harbor 401(k),” “SIMPLE 401(k),” or “SIMPLE IRA” plan, plan sponsors have flexibility to decide whether to provide employer contributions and how quickly these contributions become vested, to the extent permitted by federal law. Formulas used to calculate employers’ contributions vary across plans. One form of employer contribution is called a match, which is based on the amount that an employee contributes to the plan. Alternatively, employers may provide non-matching contributions based on employer profits. Employer contributions may be made solely at the employer’s discretion or may be required by plan documents. Unlike an employee’s own contributions, employer contributions can be forfeited when an employee separates from their job if the employee is not vested in the plan. 
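A matching formula of this kind can be made concrete with a short sketch. The 50 percent match on contributions up to 6 percent of pay used here is a hypothetical illustration; as noted above, actual formulas vary across plans.

```python
def employer_match(pay, employee_contribution, match_rate=0.50, match_cap=0.06):
    """Compute a hypothetical employer matching contribution.

    match_rate: fraction of the employee's contribution that is matched.
    match_cap: only contributions up to this fraction of pay are matched.
    Both defaults are illustrative; real plan formulas vary.
    """
    matched_portion = min(employee_contribution, pay * match_cap)
    return matched_portion * match_rate

# A worker earning $50,000 who contributes 8% of pay ($4,000):
# only the first 6% of pay ($3,000) is matched, at 50 cents per dollar.
print(employer_match(50_000, 4_000))  # 1500.0
```

Under this formula, any employee contribution beyond the cap still belongs to the employee but attracts no additional match.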
The Employee Retirement Income Security Act of 1974 (ERISA) provides the legal framework for eligibility policies used by workplace retirement plans, including minimum-age and minimum-service policies. The rules were designed, in part, to help sponsors provide profit sharing contributions, but after 401(k) plans were introduced, employer matching contributions became common. Under ERISA, the maximum age that plans may require as a condition of plan eligibility is 21. The maximum period of service—or length of tenure with an employer—a 401(k) plan may require for plan eligibility is 1 year. (See table 1.) In addition to using eligibility policies, plans can extend the waiting time by taking up to 6 months or until the end of the plan year, whichever comes first, to enroll a newly eligible worker. A 401(k) plan’s vesting policy can require participants to work for a certain period of time with an employer before they can keep all or some of their employer’s contributions to their account and investment returns on that money when they leave their job. Federal laws require that a minimum percentage of employer contributions are vested after a certain period of time; however, plans can choose to allow employer contributions to vest faster or even immediately. The shorter the period of service required for 100 percent vesting, the “faster” the vesting. The minimum percentage that must be vested at a given time depends on the type of vesting policy used by the plan, either cliff or graduated (see table 2). Employer contributions that are vested are considered “nonforfeitable,” which means a participant has an unconditional and legally enforceable right to keep that portion of their account if they separate from their job. When a vesting policy’s required period of service is not fully met, some or all of an employee’s account balance that is attributable to employer contributions is forfeited to the plan. (See figure 1.) 
Forfeited money can be used by the plan to offset plan expenses and to offset employer contributions. Once the full vesting period is complete, all employer contributions made both before and after that point are fully vested and nonforfeitable. All of a participant’s own contributions, rollovers, and earnings on those contributions are always immediately vested and nonforfeitable. Further, in addition to a minimum-service policy required to receive employer contributions, plans may use a “last day policy,” which can require up to an additional year of employment to earn the employer contribution for that year. When used, a last day policy applies to the employer contributions made each plan year, year after year (see fig. 2). According to recent industry data, plans offer matching contributions more commonly than non-matching contributions, which include profit sharing contributions. ERISA also governs the timing of employer contributions. Employers may delay making contributions until their tax return due date for a given year, including extensions. Under federal law, plan sponsors must provide participants with a basic description of the plan, called a summary plan description (SPD), that explains participants’ rights and responsibilities under the plan as well as the plan’s key features, including eligibility and vesting policies. Table 3 provides some key features of an SPD and other required plan documents. Treasury and IRS, an agency within Treasury, share responsibility for overseeing provisions of ERISA applicable to eligibility and vesting in 401(k) plans. IRS has primary responsibility for overseeing eligibility and vesting policies and has promulgated regulations in these areas. 
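The cliff and graduated schedules discussed above can be sketched as follows. The schedules coded here are the statutory minimums for employer contributions in defined contribution plans (100 percent vesting after 3 years of service for cliff; 20 percent after 2 years, rising to 100 percent at 6 years, for graduated); as noted above, a plan may choose to vest faster.

```python
def vested_percent(years_of_service, schedule="graduated"):
    """Vested percentage of employer contributions under the minimum schedules:
      cliff:     0% before 3 years of service, then 100%;
      graduated: 20% after 2 years, +20% per additional year, 100% at 6 years.
    Plans may vest faster than these minimums."""
    if schedule == "cliff":
        return 100 if years_of_service >= 3 else 0
    if years_of_service < 2:
        return 0
    return min(100, 20 * (years_of_service - 1))

def forfeited(employer_balance, years_of_service, schedule="graduated"):
    """Employer-contribution balance forfeited to the plan at separation.
    A participant's own contributions are always fully vested and are
    not part of this calculation."""
    return employer_balance * (100 - vested_percent(years_of_service, schedule)) / 100

# Separating after 4 years with $10,000 of employer contributions:
print(vested_percent(4))              # 60
print(forfeited(10_000, 4))           # 4000.0 under the graduated minimum
print(forfeited(10_000, 4, "cliff"))  # 0.0 (3-year cliff already satisfied)
```

The example shows why schedule choice matters: the same participant keeps everything under a 3-year cliff but forfeits $4,000 under the graduated minimum.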
For example, IRS regulations state that plans risk losing their tax-qualified status if they impose policies that 1) do not specifically refer to service but have the effect of requiring service as a condition for plan participation, and 2) ultimately result in employees being excluded from participating in the plan for a period of time that exceeds the plan’s stated minimum-service policy. IRS has also promulgated regulations for rules relating to “year of service.” Additionally, IRS regulations and federal law address the timing of employer contributions, that is, when they must be deposited into participants’ accounts in a defined contribution plan. Treasury is responsible for developing proposals for legislative changes, which could include changes regarding eligibility and vesting requirements. In fulfilling this duty, Treasury prepares the “General Explanations of the Administration’s Revenue Proposals,” referred to as the “Greenbook,” which accompanies the President’s annual budget submission and outlines the President’s tax-related legislative proposals. DOL is responsible for prescribing regulations governing retirement plans in a variety of areas, including reporting, disclosure, and fiduciary requirements. According to DOL officials, DOL could issue guidance on best practices to plan sponsors to help them better communicate plan policies in the summary plan description (SPD). With regard to eligibility, vesting, and related policies, DOL regulations impose certain requirements, such as the requirements that plans:

- Transmit employees’ own contributions into their account no later than the 15th day of the month following the month in which the money comes out of their pay.
- Report and disclose certain information to participants regarding eligibility and vesting policies. 
ERISA specifically grants DOL the authority to prescribe the format and content of the SPD and other statements or documents which are required for plan participants and beneficiaries receiving benefits under a plan. In addition, DOL may prescribe regulations covering the format of these disclosures. U.S. workers are likely to have multiple jobs throughout their careers. Each time an employee begins a new job where a 401(k) plan is offered, the plan’s eligibility policy may affect the employee’s ability to participate. Likewise, every time a plan participant leaves a job, the plan’s vesting policy may affect the participant’s ability to retain employer contributions to their account. According to workforce data collected by the federal government, from 1978 to 2012, the average number of total jobs held by men and women workers from age 18 to 48 was more than 11 (see table 4). The mobility of the U.S. workforce is also reflected by the median tenure, which was 4.1 years for private sector workers in January 2014, according to Bureau of Labor Statistics (BLS) data. Since these data indicate that many U.S. workers switch jobs multiple times during their career, eligibility and vesting policies may affect their accumulated retirement savings multiple times as well. In a package of proposals aimed at increasing workers’ access to retirement plans and increasing the portability of their retirement savings and benefits, the President’s Fiscal Year 2017 Budget Proposal encourages efforts to better ensure that workers’ job changes do not harm their retirement savings. The proposal tasks DOL with evaluating existing portable benefits models and examining whether changes are needed. 
Information from our non-generalizable survey of 80 plan sponsors ranging in size from fewer than 100 participants to more than 5,000, and our review of industry data, show that many 401(k) plans have minimum-age policies that do not allow workers to save in plans until they reach age 21 instead of immediately upon employment. Our survey data found that 43 of 80 plans surveyed have minimum-age policies for plan eligibility, with 21 being the most frequently used minimum age, used by 33 plans. Industry data from the Plan Sponsor Council of America (PSCA), which primarily cover large plans, also show that 376 out of 613 plans have minimum-age policies, with 21 the most frequently used minimum age. According to our analysis of 2008 Survey of Income and Program Participation (SIPP) data from the Census Bureau, an estimated 405,000 workers whose employers offer 401(k)-type plans said they were ineligible to participate in the plan because of a minimum-age policy. This is equal to about 2 percent of workers nationwide who were not participating in 401(k)-type plans offered by their employers that year. Minimum-service eligibility policies require employees to work for an employer for a certain period of time before they can enroll and participate in a 401(k) plan. Fifty of the 80 plans we surveyed reported they have minimum-service eligibility policies. Twenty plans use a 1-year minimum. Industry data from Vanguard also show that about 40 percent of its plans have minimum-service eligibility policies, with a 1-year requirement the most prevalent policy among these plans. The Census Bureau’s SIPP data show that an estimated 16 percent of U.S. workers who do not participate in 401(k)-type plans offered by their employers said it is because they have not worked long enough. Projected to the total workforce nationwide, this would amount to about 3 million workers who are not able to save in their employers’ 401(k) plans because of a minimum-service eligibility policy. 
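The statutory maximums described above (age 21, 1 year of service, and up to 1,000 hours for a year of service to count) can be combined into a simple eligibility check. The function and its default thresholds are an illustrative sketch of a plan that uses all three maximums; many plans use lower thresholds or none at all.

```python
def eligible_to_participate(age, months_of_service, hours_in_last_year,
                            min_age=21, min_service_months=12,
                            hours_for_year_of_service=1000):
    """Check 401(k) eligibility against hypothetical thresholds set to the
    statutory maximums: a plan may require age 21 and 1 year of service,
    and may require up to 1,000 hours of work for that year to count."""
    old_enough = age >= min_age
    served_long_enough = (months_of_service >= min_service_months
                          and hours_in_last_year >= hours_for_year_of_service)
    return old_enough and served_long_enough

# A 20-year-old with 14 months of service and 1,200 hours last year
# is still excluded by the age-21 policy:
print(eligible_to_participate(20, 14, 1200))  # False
print(eligible_to_participate(22, 14, 1200))  # True
```

The hours test shows how a part-time worker with long tenure can remain ineligible indefinitely under a 1,000-hour year-of-service policy.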
Certain types of 401(k) plans may be more likely than others to have policies that require employees to complete a minimum period of service before they can save in the plan. Vanguard found, among its plan clients, that small plans are more likely than large plans to use minimum-service eligibility policies. PSCA also found that small plans are more likely to use minimum-service policies, as their data show that the percentage of plans with a minimum-service eligibility policy decreases as plan size increases. Under federal law, plans have discretion to require a period of minimum service for plan eligibility, and some discretion on how long that period will be and how service will be measured. Our survey found that of 80 plans, 20 required employees to work a certain number of hours per year to be credited with a year of service for the purpose of meeting a plan’s minimum-service eligibility policy. Of those 20, 12 required employees to work 1,000 hours during the year, which is the maximum number of hours that may be required under current law. PSCA data covering a larger number of plans show that the 1,000-hour minimum is a common service requirement, used by about 30 percent of plans. According to the most recently available SIPP data, an estimated 24 percent of U.S. workers who reported that they do not participate in their employer’s plan said they do not do so because they do not work enough hours during the year. Minimum-service policies that govern the receipt of employer contributions can prevent plan participants from receiving them. In our survey of 80 plans, 34 reported minimum-service policies for employees to receive employer contributions. Vanguard data show that about 50 percent of plans have such policies, with a 1-year minimum-service policy being the most frequent. 
Employers can use these policies in addition to minimum-service policies for 401(k) plan eligibility—which determine a participant’s ability to save his or her own earnings in a plan—but ERISA caps the length of service needed to receive employer contributions at 2 years. Therefore, after a 1-year minimum-service policy for plan eligibility, a plan could require only 1 additional year of service prior to a participant receiving employer contributions. The receipt of employer contributions can also be limited by “last day policies,” which can, in some cases, prevent a participant’s receipt of employer contributions. These policies require workers to be employed on the last day of the plan year to be eligible to receive an employer contribution for that year. Plans can use these policies alone or in combination with minimum-service policies. Whether a worker leaves a job voluntarily or is laid off or fired, the worker can be affected by a last day policy even after satisfying up to a 2-year service policy for employer contributions. Our survey of plans found that 19 of 80 plans have last day policies. Delaying employer contributions to employees is another policy that can affect the receipt of employer contributions. Twenty-four plans we surveyed provide contributions, including all types of employer contributions, on an annual basis instead of quarterly or per pay period. With regard to matching contributions, PSCA’s survey found that 18 percent of plans make those contributions on an annual basis rather than on a more frequent schedule (see fig. 3). Our survey data also show that plans often use an annual schedule for contributing to participant accounts in concert with a last day policy. In total, 12 of the 24 plans we surveyed that provide contributions on an annual basis said they also have last day policies. 
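The interaction of an annual contribution schedule with a last day policy can be sketched as a simple check. The function below is a hypothetical illustration of the rule described above, not a model of any particular plan.

```python
from datetime import date

def receives_annual_match(separation_date, plan_year_end, last_day_policy=True):
    """Whether a participant receives the employer contribution for a plan
    year under a hypothetical plan that contributes annually. With a last
    day policy, only participants still employed on the last day of the
    plan year receive that year's contribution; separation_date is None
    for a participant who is still employed."""
    if not last_day_policy:
        return True
    return separation_date is None or separation_date >= plan_year_end

# A participant who leaves on December 15 of a calendar-year plan
# loses the entire year's employer contribution under a last day policy:
print(receives_annual_match(date(2016, 12, 15), date(2016, 12, 31)))  # False
print(receives_annual_match(date(2016, 12, 31), date(2016, 12, 31)))  # True
```

This makes concrete why a last day policy can cost a worker a full year of employer contributions regardless of whether the separation was voluntary.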
According to Treasury officials, delaying employer contributions so that they are non-concurrent with pay periods, such as making annual contributions, is a long-standing practice put into place when employers with defined contribution plans made only profit sharing contributions. Under federal law, plans may make matching contributions as late as the date the plan files its tax return for the previous year, including any extensions, which means that a participant could receive matching contributions for the entire previous year as late as September of the following year. (See fig. 4.) Based on our survey of 80 plans, we found that 57 have vesting policies that require employees to work for a certain period of time before the employer contributions in their accounts are vested. Among survey respondents, a 6-year graduated vesting schedule was used most frequently, meaning a worker must complete 6 years of employment before all of the employer contributions to their account are retained if they leave their job. Industry data from Vanguard show that more than 55 percent of its plans also have vesting policies for matching contributions. A 5-year graduated vesting policy was the most frequent. We also found that plans do not often change their vesting policies. For example, about 70 of the 80 plans we surveyed had not changed their vesting policies over the past 5 years. A retirement professional we interviewed concurred that vesting policies have remained stable in recent years. Based on our survey of plan sponsors and interviews with academic researchers, retirement professionals, and government officials, we found that plans cited lowering costs and reducing employee turnover as the most important reasons for using eligibility, vesting, and other related policies. Our survey asked respondents to identify the importance of various reasons for having eligibility and vesting policies (see table 5). 
According to retirement professionals, plans use policies that restrict plan eligibility to reduce administrative costs—which include maintaining accounts for separated, short-tenured workers—although these costs are often borne by participants and not by plans. A retirement professional we interviewed explained that after a worker separates from an employer, the plan sponsor must still keep track of the former worker’s account if the worker remains a participant in the plan, and provide the worker with plan communications and account statements, all of which are additional administrative costs. Several retirement professionals we interviewed noted that plan sponsors pay the administrative costs of maintaining small, dormant accounts after short-tenure workers leave their employer. Because young workers have shorter average tenure than older workers, according to federal workforce data, some plans may use a minimum-age policy to exclude the youngest workers and reduce the administrative costs of dealing with short-tenured employees joining the plan and then leaving. However, the extent to which additional plan participants increase plans’ administrative costs is unclear. Our prior work found that participants generally paid part or all of administrative fees, and that regardless of who incurs added administrative costs for accounts left behind by workers, plans are not obliged to maintain those accounts. Plans can require account holders to transfer their savings out of the plan if their balances fall below a threshold set by federal law. Reducing the direct costs of employer contributions was also cited by plans that we surveyed as an important reason for using eligibility policies, although savings for some plans may be minimal. Minimum-age policies reduce costs because employers do not need to make contributions to workers who would otherwise be eligible. 
If plans do not use minimum-age policies, they can incur some added direct costs in the form of employer contributions to additional workers. However, our projections suggest that the direct costs associated with including workers under age 21 in existing plans may be small because younger workers earn less, on average, than older workers, so the same percentage match from the employer costs less for younger workers. According to a retirement professional, plans may use a minimum-service policy for employer contributions to reduce costs for sponsors because the sponsor avoids making contributions to a participant account if that worker leaves their job before satisfying the policy. Eligibility policies and policies that delay employers’ contributions until the end of the year are also used by some plans for administrative convenience, according to government officials we interviewed and plans we surveyed, though delayed enrollment is possible without delayed eligibility and today’s employer contribution formulas often make delayed employer contributions unnecessary. First, government officials said that some plans may use eligibility policies to delay workers’ enrollment because it takes administrators time to determine eligibility and to process workers’ enrollment in the plan. However, plans may delay a worker’s enrollment up to 6 months after eligibility, which can help plans to mitigate administrative concerns. According to PSCA’s survey, about 25 percent of 401(k)-type plans delay the enrollment of newly eligible workers and the rest enroll workers “anytime” after they are eligible. Moreover, a retirement professional told us that, for medium and large-size plans, the payroll systems used to administer plan benefits can support the provision of employer contributions during the year as opposed to just at the end of the year. 
Second, some plans that we surveyed indicated they also delay employer contributions until the end of the year for administrative convenience. For example, one respondent reported that it is less work for the plan to make employer contributions once, at year’s end. However, a government official told us that the reason most plans make employer contributions each pay period is that doing otherwise would have adverse effects on the plan sponsor by requiring a large outflow of cash at the end of the plan year. By making employer contributions more frequently, a company can spread that cost over the year. A retirement professional told us a likely reason that some plans make delayed employer contributions is employer inertia in keeping a policy that is a remnant of the past, when employer contributions were based on the employers’ year-end profits. However, today, most plans are 401(k) plans and often provide matching employer contributions based on participants’ own contributions, which are typically made each pay period. Vesting policies can also reduce costs for plan sponsors by resulting in forfeited employer contributions when participants separate without fully satisfying the vesting policy. Those forfeitures can offset employer expenses and contributions. According to retirement professionals we interviewed and plans we surveyed, vesting policies reduce the direct cost of employer contributions for shorter-tenure employees who do not stay employed long enough to satisfy the vesting policy and keep employers’ contributions. However, tax benefits to employers from making employer contributions can partially offset direct costs to employers of making contributions to participant accounts, because they reduce an employer’s taxable corporate income in proportion to the amount spent on contributions. Finally, both eligibility and vesting policies were also used to reduce employee turnover by plans we surveyed. 
Retirement professionals and an academic researcher we interviewed explained that some plans see delayed eligibility in the 401(k) plan as an incentive that may convince new employees to stay longer in their job. Sponsors also use vesting policies to reduce turnover, according to government officials, retirement professionals, and an academic researcher we interviewed. For example, a retirement professional said that companies with a generous matching contribution and high employee turnover prefer to use a vesting schedule because it incentivizes employees to stay with the employer. However, the extent to which vesting policies are effective in changing behavior and reducing turnover depends in some measure on whether participants actually understand vesting policies. For example, if a participant incorrectly believes he or she is fully vested but is actually only partially vested, the vesting policy will not effectively incentivize the worker to extend tenure to fully vest. Plan sponsors use eligibility and vesting policies for a number of reasons. However, based on our projections, we found that although policies affecting plan eligibility, eligibility to receive employer contributions and the timing of those contributions, and vesting of employer contributions can initially have minimal effects on a worker's retirement savings, their cumulative effects can potentially result in significantly lower retirement savings, depending on the policies used (see table 6). (Also see Appendix II for detailed information about the assumptions used in these hypothetical projections.) Currently, the law permits 401(k) plans to require a minimum age of 21 and at least 1,000 hours of service over 1 year for a worker to be eligible to join an employer's 401(k) plan. As discussed earlier, we found that plan sponsors may use these policies to reduce the costs and challenges incurred when short-term workers enroll in the plan and leave behind small accounts.
While the law gives plan sponsors flexibility in establishing plan eligibility policies that meet their needs, federal caps on minimum age and minimum service policies serve to balance plan sponsors' needs with workers' interests in accessing and saving for retirement in workplace plans. In addition, because of the potential for compound interest to grow savings over time, it is a widely accepted best practice for workers to start saving for retirement as early as possible, even if the amounts they save seem small. Our projections suggest that these eligibility policies can potentially reduce workers' retirement savings. Our estimates, which project hypothetical retirement savings, suggest that minimum-age policies can potentially reduce young workers' future retirement savings. Minimum-age eligibility policies can lower workers' potential retirement savings through the loss of compound interest as well as employer contributions, and the policies disproportionately affect some groups. Because minimum-age policies can prevent young workers from saving for retirement in their workplace 401(k) plan early in their careers, they miss the opportunity to accrue compound interest and grow their initial contributions over the remaining decades of their working life. For example, an 18-year-old worker earning $15,822 per year who is ineligible to enroll in an employer's plan until age 21 may forgo $85,857 in savings at retirement ($23,258 in 2016 dollars) that he or she could have accumulated by saving and investing 5.3 percent (the average contribution level for non-highly compensated employees reported by plans in PSCA's 57th Annual Survey) of salary over those 3 years. Because of the effects of compound interest, saving for retirement at a young age is a one-time opportunity to optimize retirement savings despite making what is typically a low salary relative to lifetime earnings.
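The compound-growth mechanism behind these projections can be illustrated with a short calculation. The 6.5 percent annual return and 3 percent wage growth below are illustrative assumptions, not the values used in the report's Appendix II, so the resulting dollar amounts will differ from the report's figures:

```python
# Hypothetical sketch of how early-career contributions compound by
# retirement. All parameters here are illustrative assumptions.

def future_value_of_contributions(salary, save_rate, start_age, stop_age,
                                  retire_age, annual_return=0.065,
                                  wage_growth=0.03):
    """Project the value at retire_age of contributions made each year
    from start_age up to (but not including) stop_age."""
    total = 0.0
    pay = salary
    for age in range(start_age, stop_age):
        contribution = pay * save_rate
        # Each year's contribution compounds until retirement.
        total += contribution * (1 + annual_return) ** (retire_age - age)
        pay *= 1 + wage_growth
    return total

# An 18-year-old saving 5.3 percent of a $15,822 salary from 18 to 20:
early = future_value_of_contributions(15822, 0.053, 18, 21, 67)
# The same 3 years of saving begun at age 30 instead:
late = future_value_of_contributions(15822, 0.053, 30, 33, 67)
```

Because the early contributions compound for roughly a decade longer, `early` substantially exceeds `late`, which is why savings missed from age 18 to 20 are difficult to make up later in a career.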
An older worker has to save more to make the same gains over time, because the returns on savings made later have fewer years over which to compound. (See fig. 5.) (Also see Appendix II for detailed information about the assumptions used in these hypothetical projections.) Minimum-age eligibility policies may have disproportionate effects for some groups. Based on our analysis of data and estimates, we found that certain groups can be disproportionately affected by a minimum-age eligibility policy for 401(k) plans. For example, many young people graduating from high school will not enroll in college but will enter the workforce, only to be potentially excluded from an employer’s 401(k) plan by a minimum age policy. Data from BLS’s Current Population Survey (CPS) show that of 3 million workers who graduated from high school in 2015, about 30 percent were not enrolled in college by October of that year. Recent high school graduates not enrolled in college are about twice as likely to be employed or looking for work as those who are enrolled. Young workers who do not attend college can expect less wage growth over their career, making early savings all the more important. Policies that prevent potential savings by young workers may also disproportionately affect women, who often earn lower wages and may benefit most from maximizing early savings. We have previously reported that women have less retirement income on average than men, partly because women are more likely than men to spend time outside the workforce when they are older. This is a time when income may be higher than earlier in their career and when they might otherwise be able to take advantage of “catch-up” savings opportunities in their workplace plan. A minimum-age policy may prevent some women from saving for retirement when they are fully participating in the workforce, before they may reduce work hours or leave the workforce to provide caregiving support to family members. 
A minimum-age policy also has implications for low-wage workers. Our projections suggest that the foregone savings of low-wage workers from age 18 to 20 can be an even larger percentage of their retirement savings than for higher earners because low-wage earners may realize less growth in their salary over time, so later contributions do less to make up for savings missed at a younger age. For example, based on our analysis, compared to the loss of 5 percent of retirement savings for a medium-level earner not saving from age 18 to 20, a lower-level earner with reduced wage growth over his or her career could lose 11.5 percent of retirement savings from not saving from age 18 to 20—more than twice the percentage lost by the medium-level earner with average wage growth. Minimum-age eligibility policies may mean foregone contributions from an employer. The amount of foregone retirement savings due to minimum-age policies can be higher when matching employer contributions are considered. For example, based on analysis of our hypothetical scenario, the savings of an otherwise eligible 18-year-old earning $15,822 per year could have been $134,456 at retirement, or $36,422 in 2016 dollars, if that individual had also received an employer match of their contribution up to 3 percent of salary from age 18 to 20. An employer's match of employee contributions is typically, in effect, a 50 to 100 percent return on the employee's contributions; participants who receive an employer match can double the value of their contribution—a 100 percent return—if the employer makes a dollar-for-dollar match. ERISA provides plan sponsors some flexibility to design plan policies that can restrict workers' eligibility to enroll and save in plans. ERISA specifically permits sponsors to limit enrollment in 401(k) plans to workers age 21 and older. Over 8 million workers under the age of 21 are potentially subject to this policy.
In passing ERISA, Congress supported the policy goal of increasing access to plans by allowing workers to save for retirement as early as possible. Increasing access to plans was also a policy goal supported by Congress more recently in passing the Pension Protection Act of 2006. The current minimum-age policy does not further that goal. Extending eligibility to workers at an age earlier than 21 would also give young workers an opportunity to build their private sector savings at the same time they are earning credits toward future Social Security retirement benefits. In addition, increasing workers’ access to workplace retirement plans is a current federal policy goal, reiterated by DOL guidance and recently in DOL’s fiscal year 2017 Budget Justification. Employers may bear costs from enrolling additional workers in plans and administering their accounts after they leave their employ, but our prior work has shown that participants often bear the costs of administering their accounts and plans can use forced-transfers to eliminate small accounts left behind by separated employees. IRS officials told us that the minimum-age policy is unnecessary because the minimum-service eligibility policy permits plans to exclude short-term employees. By extending eligibility to workers at an age earlier than 21, private retirement plan coverage could expand for young workers who research shows lack access to 401(k) plans. We recently reported that when given the opportunity, young, low-income workers participate in workplace plans at high rates. Allowing young people to contribute at the beginning of their careers would also help to mitigate the risk that potential unexpected events could reduce the length of their careers and the period to save for retirement and, thus, their retirement savings. For example, research shows that many workers retire sooner than they expect, due to physical limitations or the need to care for family members. 
Such individuals will not save for retirement for as long as they had planned or during years in which their contributions may have been highest. Extending eligibility to workers at an age earlier than 21 could help a significant number of workers to save at an earlier age and those who experience unforeseen absences from the workforce or premature retirement will be better positioned to maximize their retirement savings during their working years. Minimum-service eligibility policies of up to 1 year delay access to workplace 401(k) plans and can reduce potential retirement savings for workers of any age, according to our projections of retirement savings. For example, our projections suggest that for a 30-year-old worker earning a salary of $71,841 ($52,152 in 2016 dollars), a 1-year delay in plan eligibility could mean $51,758 less in savings at retirement ($14,021 less in 2016 dollars). That amounts to 3 percent of the worker’s total projected retirement savings from their own savings alone. Including the employer match of 3 percent not received during the 1-year period of ineligibility, the worker could have $81,055 less at retirement ($21,957 in 2016 dollars). A minimum-service eligibility policy also means that any workers excluded from participating in a plan also will not receive any employer contribution to which they might otherwise be eligible. Additionally, the more often a worker changes jobs the larger the potential effect of a minimum-service eligibility policy on the worker’s retirement savings (see fig. 6). (See Appendix II, Table 5 for a similar table with values adjusted for inflation.) A longitudinal study by the BLS found that the average number of jobs for individuals born in the latter years of the baby boom was 11.7 jobs. The same study found that about half those jobs were held between ages 18 and 24. 
Being ineligible to save in a new employer’s plan for 1 year on 11 occasions, especially occurring more frequently early in a worker’s career, may result in $411,439 less retirement savings ($111,454 in 2016 dollars), based on our projections. Lastly, under the current definition of a “year of service,” some types of workers are likely to remain ineligible to participate in their workplace 401(k) plan indefinitely. For example, long-term part-time workers can be excluded from their employers’ plans regardless of tenure if they work fewer than 1,000 hours during the year, or about 19 hours per week. According to March 2016 data from the CPS, 14.3 million workers said that they usually worked 20 or fewer hours per week over the previous month. Those data also show that more women than men worked 20 or fewer hours per week (making women more likely than men to be ineligible for their workplace plan as a result of the 1,000 hour rule). Even employees working more than 19 hours per week could be subject to ineligibility due to the 1,000-hour rule, if they work multiple part-time jobs. Moreover, under the current definition of a year of service, workers who remain employed on a part-time basis year after year may not be eligible to participate in their workplace savings plans. In first establishing the rules for a minimum-service policy for plan eligibility, ERISA capped such policies at 1 year of service, defined as 1,000 or more hours worked over 1 year. While plans can require fewer hours or no hours of service for plan eligibility, they cannot require more than 1,000 hours. However, millions of part-time workers may never qualify for their employer’s plan with a 1,000-hour requirement. Some members of Congress and the current administration have proposed amending the law to ensure that long-term part-time workers can become eligible to save in their employer’s workplace plan. 
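The cumulative cost of a 1-year eligibility wait at each new job, as in the repeated-job-change projection above, can be sketched as follows. The wage path, job-change ages, savings rate, and 6.5 percent return are illustrative assumptions, not the report's Appendix II values:

```python
# Hypothetical sketch: value at retirement of the first-year
# contributions missed at each new job due to a 1-year wait.

def missed_first_year_value(salary_at_age, save_rate, job_start_ages,
                            retire_age=67, annual_return=0.065):
    """Sum the retirement-age value of contributions missed in the
    first (ineligible) year at each job a worker starts."""
    total = 0.0
    for age in job_start_ages:
        missed = salary_at_age(age) * save_rate
        total += missed * (1 + annual_return) ** (retire_age - age)
    return total

def wage(age):
    """Assumed wage path: $25,000 at age 18, growing 3% per year."""
    return 25000 * 1.03 ** (age - 18)

# Five job changes, clustered early in the career as BLS tenure data
# suggest is typical:
loss = missed_first_year_value(wage, 0.053, [18, 22, 25, 30, 40])
```

Adding more job changes, especially early ones, increases the projected loss, consistent with the report's finding that more frequent job changes magnify the effect of minimum-service policies.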
As part of its fiscal year 2017 budget submission, the current administration proposed amending federal law to require plans to expand eligibility to workers who have worked for their employer at least 500 hours per year for 3 consecutive years, allowing workers to contribute their own earnings, but not necessarily to receive employer contributions to their account. A 2015 Senate Finance Committee bipartisan working group endorsed this proposal, and legislation was introduced that incorporated it. Prior to the current administration's proposal, legislation was introduced that also would have required plans to cover long-term part-time employees. Given today's workforce, determining whether the current definition of "year of service" is consistent with the goal of expanding access to workplace retirement savings plans would be beneficial to workers. Without revising the definition of "year of service," minimum-service eligibility policies may continue to reduce potential retirement savings for millions of workers who will remain ineligible to participate in a plan because their annual hours of service may fall below their plans' requirement. Some retirement professionals we interviewed said that operating without a 1,000-hour rule could result in more small accounts left behind by short-term workers, but as we previously noted, forced transfers of small balances can help plans manage any associated burden or cost, and often those costs are borne by participants themselves. Current law permits 401(k) plan sponsors to require participants to be employed on the last day of the plan year to be eligible to receive employer contributions to their account. Current law also permits 401(k) plan sponsors to delay the accrual of employer matching contributions until the end of the year. Information from our survey of 80 plan sponsors and plan professionals showed that plan sponsors often use these two requirements together.
Based on our review of relevant statutes and interviews with retirement professionals and government officials, we found that these provisions were created decades ago when 401(k) plans did not exist and when profit sharing contributions were the norm. According to Treasury officials, the provisions met plan sponsors’ need to wait until the end of the year to identify what their profits were for the year and thus what the employer’s contribution would be for the year. However, today most employer-based plans are 401(k) plans, not traditional profit sharing plans, and based on our analysis of industry survey data, most 401(k) plans make matching contributions, which are based on participants’ contributions throughout the year. Although plan sponsors may have previously found these two policies to be beneficial, our projections suggest they may also potentially reduce workers’ retirement savings. The law permits plans to apply policies that limit a participant’s ability to receive employer contributions without additional service each year—last day policies—which can reduce participants’ potential retirement savings. Although last day policies can provide financial benefits to plan sponsors and may also ease plan administration, these policies can reduce potential retirement savings for workers. Given a relatively mobile workforce, the requirement to be employed on the last day of the plan year to receive an employer’s contributions for that year puts workers who separate from their job at risk of losing some of their potential retirement savings. For example, our projections suggest that for a 30-year-old earning $71,841 ($52,152 in 2016 dollars), the employer contribution not received due to an unmet “last day” policy is $2,155 in that year ($1,443 in 2016 dollars). 
But it could be worth $29,297 by retirement ($7,936 in 2016 dollars), which is 3 percent of the worker's total projected savings from employer contributions of $969,674 ($262,673 in 2016 dollars) at retirement. Our projections also suggest that a last day policy can reduce potential retirement savings to the same extent as a 1-year minimum-service policy for employer contributions, except the potential savings are lost in the last year rather than in the first. Last day policies can reduce potential retirement savings for even long-tenure, full-time workers who separate from their employers before the official last day of the year. For example, a 67-year-old employee who has worked for the same employer during his or her entire career and retires when eligible for full Social Security benefits could lose the employer's $6,606 match for that last year ($1,837 in 2016 dollars) if the retirement date falls before the last day of the plan year. Moreover, the last day policy can affect workers repeatedly throughout their careers. If our hypothetical worker leaves three jobs without satisfying a last day requirement, at ages 20, 30, and 40, the worker could lose a total of $6,542, which could be worth $69,583 at retirement ($18,849 in 2016 dollars). ERISA caps at 2 years the length of service that a plan sponsor can require before a participant is eligible to receive employer contributions. Given the mobility of the workforce, this provision ensures that workers who change jobs are eligible, just like long-tenured workers, to benefit from employer contributions for which employers receive a tax benefit. Given that job turnover is greater among younger workers, ERISA's cap on a service policy that delays employer contributions also helps to ensure that workers' savings in these tax-advantaged accounts are not excessively diminished at a time when, our projections suggest, contributions have the greatest potential to improve retirement savings through compound earnings over time.
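The arithmetic behind a single forfeited match under a last day policy can be sketched as follows. The 6.5 percent annual return is an illustrative assumption, not the report's Appendix II value, so the result only approximates the report's figures:

```python
# Hypothetical sketch: value at retirement of one employer match
# forfeited because a worker left before the plan year's last day.

def forfeited_match_at_retirement(match_amount, separation_age,
                                  retire_age=67, annual_return=0.065):
    """Future value at retirement of an employer contribution lost
    to an unmet last day requirement, compounded annually."""
    years = retire_age - separation_age
    return match_amount * (1 + annual_return) ** years

# A $2,155 match lost at age 30 compounds over 37 years:
value = forfeited_match_at_retirement(2155, 30)
```

Under these assumptions the single lost match grows roughly tenfold by age 67; the same dollar amount forfeited later in a career grows far less, which is why early-career forfeitures are the most costly in the projections.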
However, because a last day policy requires an additional year of service to receive any employer contributions, year after year, a worker who has already satisfied a 2-year minimum-service policy would have to wait up to another year to receive an employer contribution. (See Appendix II Table 8 for examples of how a last day policy could potentially reduce retirement savings.) IRS oversees provisions of federal law applicable to eligibility and vesting and Treasury is responsible for developing proposals for legislative change in these areas. However, according to an IRS official, the law permits 401(k) plans to use a last day policy and, as long as the law's non-discrimination and coverage rules are satisfied, plans are generally free to structure their plans as they choose. In addition, according to that IRS official, the flexibility provided to plans by ERISA prevents IRS from prohibiting plans' use of a last day policy. Considering whether an adjustment to the law's provisions regarding plans' use of a last day policy is needed could help to ensure that 401(k) plan policies reflect the current mobility and characteristics of today's workforce. Plan policies that delay employer contributions until the end of the year, rather than paying them in tandem with employee contributions throughout the year, reduce participants' opportunities to earn investment returns on the employer's contributions to their accounts over the course of the year. Our projections suggest such policies may also reduce participants' potential retirement savings. In contrast, regular employer contributions, such as those made bi-weekly, allow participants to potentially profit from the investment of that money and the reinvestment of those profits. Delayed employer contributions may seem negligible at first, but, if left to compound over time, our projections suggest that the return on that employer contribution can amount to significant savings.
Moreover, while the immediate value of savings lost in a single year can be relatively small, the potential value of that lost opportunity compounds year after year. For example, our projections suggest that for a worker who remains with one employer throughout his or her career, the delay of the employer’s contribution until the end of the year, each year, could mean $35,636 less in total savings at retirement ($9,653 in 2016 dollars) than if the employer’s contribution was made on a per pay-period basis (bi-weekly), in concert with the worker’s own contributions. That is about 3.7 percent of our hypothetical worker’s total 401(k) retirement savings based on employer contributions alone. ERISA establishes rules for the accrual of retirement benefits in workplace retirement plans. Those rules permit flexibility as to when an employer’s contributions go into—or accrue to—an individual’s retirement account. ERISA permits plan sponsors to delay making the employer contribution to participants’ accounts until as late as the end of the year or the date when the plan’s tax return is filed, including extensions. However, the law permitting delayed employer contributions is from a time when profit sharing plans were the norm and plans typically waited until the end of the year to calculate and distribute employer contributions. Those plans predate 401(k) plans and the matching contributions that are now commonplace. Treasury officials told us that, because plan sponsors today generally make contributions that match a participant’s own deferrals, delayed employer contributions are no longer necessary in most defined contribution plans. In addition, delayed employer contributions are inconsistent with the best practice of saving for retirement as early as possible. 
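The intra-year mechanics behind the per-pay-period versus year-end difference can be sketched as follows. The $2,155 annual match, 26 bi-weekly pay periods, and 6.5 percent annual return are illustrative assumptions rather than the values used in Appendix II:

```python
# Hypothetical sketch comparing an employer match paid bi-weekly with
# the same total match paid once at the end of the year.

def value_at_year_end(annual_match, periods_per_year, annual_return,
                      per_period=True):
    """Account value at year end attributable to the employer match,
    paid either each pay period or as a single year-end deposit."""
    if not per_period:
        return annual_match  # deposited at year end; no growth yet
    # Convert the annual return to an equivalent per-period rate.
    rate = (1 + annual_return) ** (1 / periods_per_year) - 1
    per_deposit = annual_match / periods_per_year
    value = 0.0
    for _ in range(periods_per_year):
        value = (value + per_deposit) * (1 + rate)
    return value

biweekly = value_at_year_end(2155, 26, 0.065, per_period=True)
year_end = value_at_year_end(2155, 26, 0.065, per_period=False)
shortfall = biweekly - year_end  # intra-year growth lost by delaying
```

The single-year `shortfall` is small, but when that gap recurs every year of a career and itself compounds to retirement, the cumulative difference grows, which is the mechanism behind the report's career-long projection.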
While IRS has primary responsibility for overseeing eligibility and vesting policies and Treasury is responsible for developing proposals for legislative change in these areas, the provision permitting delayed employer contributions can only be changed by statute and not by either agency. Considering whether the law’s provisions regarding the timing of matching employer contributions should be adjusted could be an opportunity to help ensure the provisions reflect the current mobility and characteristics of today’s workforce. Our projections suggest that vesting policies can also reduce retirement savings when participants leave their job and the vesting policy is not satisfied in full. (See fig. 7.) For example, our projections suggest that for a worker who twice separates from employment (at age 20 and 40) after 2 years without satisfying a 3-year cliff vesting policy, forfeiting the employer contributions already in their account, the lost savings could have grown to $81,743 by retirement ($22,143 in 2016 dollars). Caps for vesting policies are set by federal law and, over time, have been shortened to provide for faster vesting to make it easier for workers in a mobile labor force to keep employers’ contributions and to more easily build their savings for retirement. As noted earlier, Treasury is the federal agency that would be responsible for developing proposals for legislative change with regard to vesting. However, a Treasury official told us that the agency has not recently proposed any changes to the vesting rules and has not conducted an assessment to determine what vesting policies are appropriate today. Based on our survey of plan sponsors and plan professionals, vesting policies are often used to reduce employee turnover. 
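The forfeiture mechanics of a cliff vesting policy, as in the 3-year cliff projection above, can be sketched as follows. The match amounts and the 6.5 percent return are illustrative assumptions, not the report's Appendix II values:

```python
# Hypothetical sketch: retirement-age value of employer contributions
# forfeited under a cliff vesting policy.

def cliff_forfeiture_value(matches, cliff_years, tenure_years,
                           separation_age, retire_age=67,
                           annual_return=0.065):
    """If tenure falls short of the cliff, all employer contributions
    (listed in chronological order, ending at separation) are
    forfeited; return their compounded value at retirement."""
    if tenure_years >= cliff_years:
        return 0.0  # fully vested; nothing is forfeited
    total = 0.0
    for years_back, match in enumerate(reversed(matches)):
        age_made = separation_age - years_back
        total += match * (1 + annual_return) ** (retire_age - age_made)
    return total

# Two $1,500 matches forfeited on separation at age 20 after 2 years
# of service under a 3-year cliff:
lost = cliff_forfeiture_value([1500, 1500], 3, 2, 20)
# Staying one more year would vest the full balance:
kept = cliff_forfeiture_value([1500, 1500], 3, 3, 20)
```

Because the forfeited matches would have compounded for decades, an early-career forfeiture of a few thousand dollars grows to a much larger loss by retirement, consistent with the report's projection.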
In addition, according to retirement professionals we interviewed, vesting policies reduce the direct cost of employer contributions for shorter-tenure employees who do not stay long enough to satisfy the vesting policy and keep employer contributions. Nevertheless, current federal policies seek to improve retirement security for today’s mobile workforce, including increasing the portability of workplace retirement plan savings. However, our projections suggest that current vesting policies can potentially reduce a participant’s retirement savings when vesting requirements are not met in full. An evaluation of the effects of current vesting policies on participants’ retirement savings may help to identify if those policies remain appropriate for a mobile workforce increasingly dependent on their employer-based retirement accounts, and help determine how vesting policies affect the portability of retirement savings. To examine plan participants’ understanding of eligibility, employer contribution, and vesting policies, we analyzed 46 responses to an online survey administered to participants in four 401(k) plans in which they were asked about their own plans. Responses to the survey show that participants’ knowledge varied. (See table 7.) The accuracy of responses was highest regarding the frequency of the employer’s contribution, participants’ initial eligibility to join the plan and the types of employer contributions made. However, based on our evaluation, some participants surveyed lacked knowledge of their plans’ eligibility, employer contribution, and vesting policies that, as our projections discussed earlier suggest, can have important effects on retirement savings. Our survey results may reflect some degree of unwarranted confidence by plan participants regarding their understanding of plan policies. According to a behavioral finance expert, about one-quarter of people generally overestimate their understanding of finance-related terminology. 
An individual’s lack of understanding of their employer’s policies can result in suboptimal choices, such as choosing a job with a minimum-service eligibility policy over one offering immediate eligibility, when wages, working conditions, and other benefits are comparable. Approximately two-thirds of respondents gave answers consistent with the eligibility rules in place at their plan when they started their job, which may reflect a fairly high level of education and long tenure among workers we surveyed, but about a third of the participants we tested were incorrect about their eligibility to join their employer’s 401(k) plan. Among those not immediately eligible to join the plan and asked to identify the reason for their initial ineligibility, more than half provided incorrect answers. That result is generally consistent with a national survey on self-reported financial literacy, which asked about 5,000 current defined contribution plan participants to assess their own knowledge of “eligibility requirements,” among other topics. That survey found that while 85 percent felt they had a working knowledge of and understood the term “eligibility requirements,” nearly half (46 percent) were not confident enough in their understanding to teach others (see fig. 8). Our survey results show that respondents had a good understanding of their employer’s contribution. The high number of correct answers to the question about how frequently employer contributions were made was consistent with information we heard from a retirement expert we interviewed who said that workers generally have a good understanding of their employer’s matching contribution to their 401(k) plan account. Workers who do not understand the significance of the timing of employer contributions may not be well prepared to weigh such policies before choosing to join or leave an employer. 
While more than half of participants we surveyed were correct about whether they are required to be employed on a specific day of the year to receive the employer’s contribution, which could indicate a last day policy, more than a third (18) did not know or did not answer the question. Understanding an employer’s last day policy is important because, as our projections suggest, the policy could potentially reduce retirement savings. Lastly, understanding of vesting requirements was mixed. More than half of participants gave accurate answers regarding their vesting status, given their plans’ policies. Clearly written plan documents may also have helped those participants who understood their vesting status. Federal law requires that a plan’s summary plan description (SPD) be written in a clear manner and uses a table to describe the maximum vesting schedule for 2- to 6-year vesting, which some plans use as a model in their SPD. At one of the companies where the majority of participants gave correct responses, the plan documents clearly stated the plan’s 5-year graduated vesting policy and the calculation of vesting status based on different lengths of service by using a table (see fig. 9). The national financial literacy survey found that respondents’ self- reported understanding was also high regarding the definition of a “vesting period.” The survey found that about three-quarters of individuals surveyed thought they had either a working knowledge of the term “vesting period” or understood it well. However, not all participants we surveyed understood their vesting status. Some respondents did not know if their employer contribution was fully vested, did not attempt to answer this question, or incorrectly believed that their employer contribution was already fully vested, when it was not. Describing the vesting policy that they had to meet or will have to meet to become vested was also difficult for some participants. 
For example, one participant we surveyed said that he was required to work for 5 years to be fully vested even though the plan had a 6-year vesting schedule. We found evidence that some eligibility and vesting policies in summary plan descriptions (SPD) can be unclear. Our review of five SPDs found that some eligibility and vesting policies were written using complex technical language, which may make them difficult for the average plan participant to understand. (See text box.) Participants in two discussion groups composed of plan sponsors and plan advisors, which we convened in March 2015, also said that plan participants probably do not understand the eligibility and vesting requirements of their plan because the SPDs use complex legal terms. Furthermore, one retirement professional we interviewed said that employers’ own difficulty understanding their plans’ eligibility and vesting policies contributes to employees’ misunderstanding of these policies. According to a retirement professional, errors in interpretation can occur, making the clarity of plan documents more important.

Example of Complex Language in a 401(k) Summary Plan Description
You will become a Participant eligible to make Elective Deferral Contributions and receive Safe Harbor Non-Elective Contributions and Profit Sharing Contributions on the a) first day of the first month of the Plan Year or b) first day of the seventh month of the Plan Year, coincident with or next following the date you attain age 21 and you complete one (1) Year of Eligibility Service, provided that you are an Eligible Employee on that date.

You have worked for your Employer four (4) years and have received Employer Contributions of $1,000. You terminate employment and request a distribution of your Employer’s Contributions. Because you have four (4) years of vesting service, you will receive 60% or $600.
While this example illustrates what happens as contributions become vested, it still cites what will be received rather than what could be lost. Rather than telling participants only what they will receive if they are not fully vested when they leave their job, employers could state clearly that employees will lose a percentage of their account balance if they leave before becoming fully vested, which may help employees better understand the financial consequences of the vesting policy. However, according to a retirement professional we interviewed, employers seeking to attract employees have an incentive to make the plan sound generous. Plans can also include information in their SPDs about provisions that are not used by the plan, a practice which is not explicitly prohibited by ERISA, making it difficult for participants to know which contribution is currently being offered and which eligibility and vesting policies apply. One plan sponsor explained that this is a common practice because plans want to avoid the time and expense of revising the plan documents later if they decide, for example, to change the type of employer contribution. One SPD we reviewed contained information about six employer contributions when only one contribution was offered (see fig. 10). This practice leaves participants with extraneous information and no clear way to tell what policies apply to them without additional information from their employer. Treasury officials said that it may be necessary for employers to describe contributions that they are not currently offering to employees, should they need to make these contributions at a later date in order to pass the Internal Revenue Code nondiscrimination tests. However, those officials agreed that the description of multiple employer contributions in SPDs can be confusing to participants as they may not be able to determine which contribution currently applies to them.
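The arithmetic behind the SPD’s vesting example follows a 6-year graded schedule, in which a participant vests 20 percent per year of service beginning with the second year. The sketch below is illustrative only — the function names are ours, and the $1,000 contribution and 4 years of service simply mirror the SPD example — but it shows both framings: the amount received and the amount forfeited.

```python
# Sketch of a 6-year graded vesting schedule (20% per year of service
# beginning with year 2). Function names are illustrative, not from any plan.

def vested_percent(years_of_service: int) -> int:
    """Vested share of employer contributions, as a whole percentage."""
    if years_of_service < 2:
        return 0
    return min(100, 20 * (years_of_service - 1))

def split_balance(employer_contributions: float, years_of_service: int):
    """Return (amount kept, amount forfeited) if the participant
    leaves after the given years of service."""
    kept = employer_contributions * vested_percent(years_of_service) / 100
    return kept, employer_contributions - kept

# The SPD example: $1,000 in employer contributions, 4 years of service.
kept, forfeited = split_balance(1_000, 4)
print(kept, forfeited)  # 600.0 kept, 400.0 forfeited
```

Framing the result as “$400 forfeited” rather than “$600 received” is the kind of loss-oriented disclosure described above.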
Employers can do more than the minimum required communication and look for innovative and effective ways to improve participant understanding of plan policies. Our survey of 80 plan sponsors found that the highest number reported using new employee orientation and a welcome packet (63 and 62, respectively) to communicate eligibility and vesting policies to employees. One participant advocate we interviewed suggested that discussing policies one-on-one is the best way to communicate rules to employees, but said that approach is costly. Another suggestion was to tailor plan information to participants. For example, that participant advocate suggested that a plan notice communicating eligibility policies could read: “as of (specific date), you are eligible to participate in the plan (or to get an employer match).” The advocate also suggested that communications could increase in frequency ahead of the eligibility event, like a countdown, treating eligibility as a prize to build excitement. Specifically, messages such as “congratulations, you are now eligible…” can be effective in triggering behavior, like enrollment in the plan. Several experts we interviewed generally agreed that employers should simplify communications about plan eligibility and vesting policies to increase employee understanding, which could mean presenting information in a more concise, manageable format. Figure 11 shows an example of a short summary of plan highlights, which was provided by one plan service provider we interviewed. The first page of the 2.5-page document summarizes the plan’s basic rules for eligibility and vesting using short sentences and does not require the reader to refer to other sections of the document to fully understand the rules.
ERISA requires that summary plan descriptions (SPD) be written in a manner that can be understood by the average plan participant and be sufficiently comprehensive to inform participants of their rights and obligations under the plan. In addition, SPDs must explain a plan’s provisions with respect to eligibility and vesting, but plans have discretion in how they present these provisions. Under ERISA, DOL is responsible for enforcing requirements pertaining to the disclosure of companies’ retirement plan policies, including that plan policies should be communicated clearly. To do this, DOL issues regulations, can make judgments about the clarity of plan documents, and can issue guidance to plans on how to best comply with the intent of the laws and regulations. While its regulations restate the requirements in ERISA and list specific policies that should be included in the plan description, the agency is not more specific about what constitutes clear communication of policies and what does not. Because ERISA and DOL’s regulations pertaining to SPDs are not prescriptive about how plan sponsors can explain their plans’ eligibility and vesting policies clearly, plan sponsors may provide information in a way that meets the necessary requirements rather than in a manner most likely to be clear and helpful to participants. DOL also has a policy that it will not make determinations about the clarity of plan documents. Agency officials told us that they have this policy because determining whether specific wording is clear is highly subjective. Currently, DOL has not issued guidance that identifies best practices for communicating information on eligibility and vesting, which could assist plan sponsors with improving the clarity of those policies in SPDs. 
By providing guidance with best practices to help plan sponsors clearly and accurately communicate eligibility and vesting requirements, DOL can help ensure that workers better understand the information necessary to make informed choices regarding their employment and savings behavior. In addition, such guidance could include encouraging plans to provide information only on contributions actually made by employers—a best practice which could help participants better understand the plan policies that affect them. 401(k) plans were created four years after ERISA was passed in 1974. Since that time, 401(k) plans have become the dominant employer- sponsored plan relied on by employees for retirement savings. Understanding the eligibility and vesting policies used by these plans has become increasingly important for workers, employers, and regulators. However, some of these policies were created to address issues when plans were more of a supplemental source of retirement income, in addition to a traditional pension, and reliant only on non-matching contributions from an employer rather than on a match of employee contributions. As 401(k) plans become the primary, and often sole, retirement savings vehicles for a large segment of the mobile workforce, there is a growing need to consider the effects of eligibility and vesting policies on workers, particularly those who are younger, less-educated, and with lower incomes. Moreover, one of the reported advantages of account-based plans like 401(k) plans is their enhanced portability over traditional pensions. Yet current rules and plan sponsor practices suggest some limitations on that portability, which can have potentially significant effects on retirement security. 
Workers who must meet minimum-age eligibility policies to begin participating in a workplace plan miss out not only on contributing to 401(k) accounts, but also on the opportunity to receive employer matching contributions, which can significantly increase the amount contributed to their accounts. Saving early is particularly important for those who join the workforce out of high school and may never pursue higher education, with its associated higher wages. Our projections show that the inability to save in a 401(k) plan from ages 18 to 21 can result in tens of thousands of dollars in foregone retirement savings. While turnover among young workers can create administrative costs for an employer, these costs often are borne by participants in the form of fees. Extending plan eligibility to allow otherwise eligible workers to at least save their own contributions in their employers’ 401(k) plans at an age earlier than 21 could help young workers improve their retirement security by saving through their workplace plan at a time when those savings have the most to gain. Minimum-service eligibility policies can also affect employees’ ability to save for retirement. Re-examining the legal definition of “year of service” used in minimum-service eligibility policies can help to ensure that the rules are consistent with today’s mobile workforce and use of 401(k) plans. Opening 401(k) plans to more workers could result in additional small accounts, which are sometimes abandoned by their owners. But that challenge can be mitigated in ways that do not reduce savings. At the same time, opening 401(k) plans to more workers could help many who now lack coverage to access a workplace retirement plan and make tax-deferred contributions toward their retirement security. Last day policies can also affect employees’ retirement savings. Given the mobility of today’s workforce, all workers are potentially affected by these policies.
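The compounding logic behind such a projection can be sketched in a few lines. The parameters below — salary, deferral rate, match rate, and return — are arbitrary placeholders chosen for illustration, not the assumptions used in our projections (those are detailed in appendix II).

```python
# Simplified sketch of foregone savings from delayed plan eligibility.
# All parameters (salary, rates, return) are illustrative placeholders,
# not the assumptions used in the report's projections.

def foregone_at_retirement(salary, deferral_rate, match_rate,
                           annual_return, years_missed,
                           years_to_retirement):
    """Value at retirement of the employee deferrals and employer match
    that could not be made during the ineligible years, with each
    missed year's contribution compounding until retirement."""
    total = 0.0
    for y in range(years_missed):
        contribution = salary * (deferral_rate + match_rate)
        total += contribution * (1 + annual_return) ** (years_to_retirement - y)
    return total

# A worker ineligible from age 18 through 21, retiring at age 67.
fv = foregone_at_retirement(salary=25_000, deferral_rate=0.06,
                            match_rate=0.03, annual_return=0.05,
                            years_missed=4, years_to_retirement=49)
print(round(fv))  # on the order of tens of thousands of dollars
```

Because the missed contributions come at the start of a career, they have the longest horizon over which to compound, which is why the foregone amount at retirement far exceeds the contributions themselves.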
Further, policies that allow employers to delay making matching contributions to participants until the end of the year can also result in foregone savings. This type of policy can affect not only employees who separate from a job after a year or two, but also those who spend their entire career with the same employer. The policy is a remnant of the profit sharing plan era, when employers typically calculated benefits at the end of the year. Few employers rely on profit sharing contributions alone these days, so delayed employer contributions may be unnecessary. Considering whether these provisions should be adjusted could help ensure they reflect today’s mobile workforce and use of matching contributions in 401(k) plans. Vesting policies also present missed opportunities to improve savings for workers who are mobile. Given that the median length of stay with a private sector employer is currently about 4 years, the rule permitting a 6-year vesting policy may be outdated. Employees who forfeit employer contributions to their account when they leave a job prior to the end of the plan’s vesting period lose the opportunity to have those funds grow in the plan or to transfer those contributions into their new employer’s plan, reducing their retirement savings. A re-examination by Treasury of the appropriateness of current maximum vesting policies could help determine whether they unduly reduce the retirement savings of workers who change jobs. Finally, having clear and concise information about their retirement plan’s eligibility and vesting policies helps employees make informed decisions affecting their retirement savings. Guidance from DOL can help plan sponsors better inform participants about the plan policies that they must understand to make optimal decisions.
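The within-year growth foregone under the delayed-match policy discussed above can be sketched by comparing the same total match deposited monthly versus once at year-end. The match amount and monthly return below are illustrative assumptions, not figures from this report.

```python
# Illustrative comparison of employer match timing: the same $1,200
# annual match deposited monthly versus once at year-end. The match
# amount and monthly return are assumptions, not figures from the report.

def year_end_value_of_monthly_deposits(monthly_amount, monthly_return,
                                       months=12):
    """Balance at year-end when equal deposits are made each month
    and each earns the given monthly return until year-end."""
    balance = 0.0
    for _ in range(months):
        balance = balance * (1 + monthly_return) + monthly_amount
    return balance

annual_match = 1_200.0
monthly_return = 0.004  # roughly 5 percent per year

spread_out = year_end_value_of_monthly_deposits(annual_match / 12,
                                                monthly_return)
lump_sum = annual_match  # deposited on the last day; no growth that year
print(round(spread_out - lump_sum, 2))  # growth foregone by delaying
```

The per-year shortfall looks small, but under a year-end policy it recurs every year of a career and itself compounds, which is how delayed matches can meaningfully reduce savings even for employees who never change jobs.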
To help increase plan participation and individuals’ retirement savings, Congress should consider updating ERISA’s 401(k) plan eligibility provisions to:
- extend plan eligibility to otherwise eligible workers at an age earlier than 21; and
- amend the definition of “year of service,” given the prevalence of part-time workers in today’s workforce.
In addition, Congress may wish to consider whether ERISA’s provisions related to last day policies and the timing of employer matching contributions need to be adjusted to reflect today’s mobile workforce and workplace plans, which are predominantly 401(k) plans offering matching employer contributions. To ensure that current vesting policies appropriately balance plans’ needs and interests with the needs of workers to have employment mobility while also saving for retirement, Treasury should evaluate the appropriateness of existing maximum vesting policies for account-based plans, considering today’s mobile labor force, and seek legislative action to revise vesting schedules, if deemed necessary. The Department of Labor could provide assistance with such an evaluation. To help participants better understand eligibility and vesting policies, DOL should develop guidance for plan sponsors that identifies best practices for communicating information about eligibility and vesting policies in a clear manner in summary plan descriptions. For example, DOL could discourage plans from including in documents information about employer contributions or other provisions that are not actually being used by the plan sponsor. We provided a draft of this report to the Departments of the Treasury and Labor, and the Internal Revenue Service. Treasury provided technical comments, including those of IRS, which we have incorporated where appropriate, and oral comments, as discussed below. DOL provided written comments, which are summarized below and reproduced in Appendix III.
With respect to our recommendation that Treasury evaluate existing maximum vesting policies, Treasury had no formal comment. As we detail in our report on pages 44-46, given the effect that vesting policies can have on the retirement savings of mobile workers, we believe that it would be beneficial for Treasury to evaluate current vesting policies. Treasury may be able to incorporate an evaluation of these policies into the analysis it conducts in preparing the annual “Greenbook,” highlighted in our report on page 12, and which accompanies the President’s annual budget submission and outlines the President’s tax-related legislative proposals. DOL, in its written comments, stated that substantive provisions of Title I of ERISA governing eligibility and vesting provisions in 401(k) plans are under the interpretive and regulatory jurisdiction of the Secretary of the Treasury. DOL also stated that Treasury and IRS generally consult with DOL on subjects of joint interest and it expects they will do so regarding our report. With regard to our recommendation to develop guidance for plan sponsors that identifies best practices for communicating information about eligibility and vesting policies in a clear manner in summary plan descriptions, DOL agreed that disclosures explaining these policies are important to participants’ ability to make informed choices about retirement savings. DOL also stated that under current law and regulations, this information must be written in a manner that can be understood by the average participant. DOL described planned actions that GAO believes are consistent with the intent of our recommendation. For example, DOL noted that an evaluation of best practices regarding eligibility and vesting should consider other disclosures provided to plan participants. DOL highlighted a long-term project on its current regulatory agenda relating to individual benefit statements, another type of disclosure provided to participants. 
DOL said that additional input from a broader range of plan sponsors and plan fiduciaries, possibly obtained through a Request for Information published in the Federal Register, could supplement the information we highlight in our report on pages 52-58 and contribute to an informed development of best practices guidance. DOL stated that it did not agree that implementing the recommendation allows the best use of its limited resources. DOL stated it would not be appropriate at this time to reallocate resources away from its existing priority projects to a new best practices project focused on our recommendation, especially given the regulatory requirements that currently apply to summary plan description disclosures on eligibility and vesting. However, DOL stated it would review its existing outreach materials on plan administration and compliance for opportunities to highlight the issues we raised in our report, as well as consider our recommendation in the ongoing development and prioritization of its agenda for regulations and sub-regulatory guidance. We agree with DOL that the efforts it plans to take in response to our recommendation, if fully implemented, will meet the intent of the recommendation and help plan sponsors more clearly communicate eligibility and vesting policies to plan participants, even without developing guidance. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretaries of Labor and the Treasury, the Commissioner of Internal Revenue, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or jeszeckc@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. This report examines: (1) what is known about the prevalence of 401(k) plans’ eligibility and vesting policies and why plans use them, (2) the potential effects of eligibility and vesting policies on workers’ retirement savings, and (3) participants’ understanding of these policies. We did not examine the use of these policies by defined benefit plans or by public retirement plans. We reviewed survey and industry data and conducted interviews to determine what is known about the prevalence of eligibility and vesting policies and why plans use them. To identify what is known about the prevalence of eligibility and vesting policies, we conducted a non-generalizable survey of plan sponsors and plan professionals with plans ranging in size from less than 100 participants to more than 5,000. We received 80 responses to this survey. We also reviewed non-generalizable data from Plan Sponsor Council of America’s (PSCA) annual survey of defined contribution plans, which covered 613 plans with 8 million participants and $832 billion in plan assets. PSCA’s dataset represents plans of varying sizes and industries, but includes a greater proportion of large plans based on participant numbers and assets and disproportionately represents the financial, insurance, and real estate industries when compared to the total population of plans, as measured by 2013 Form 5500 filings. In addition, we reviewed non-generalizable data from Vanguard’s 2015 report on the 1,900 qualified plans for which it serves as record keeper. These two were the only datasets we identified with detailed information on eligibility and vesting policies. We evaluated another large dataset and determined that it was not reliable enough for our purposes.
We also analyzed generalizable Census Bureau survey data to identify which eligibility policies affect workers, as reported by the workers themselves. To assess the reliability of these data, we interviewed staff from Vanguard and the Census Bureau who are responsible for producing the data. We also reviewed documentation related to these data sources, such as descriptions of their methodology. We also performed various tests on the data. For the PSCA data, we compared selected data from PSCA’s 2013 survey—the survey we used—to data from PSCA’s 2012 survey to assess the consistency of the data. Specifically, we reviewed basic survey respondent demographic data (respondents by plan type, amount of plan assets, and industry) and key data on eligibility and vesting (the percentage of plans with service restrictions for plan eligibility, the percentage of plans with age restrictions for plan eligibility, and the percentage of plans that have vesting policies that require a service period for the vesting of matching employer contributions). We also evaluated the makeup of the plans included in PSCA’s survey by comparing selected demographic data from PSCA’s survey to Department of Labor Form 5500 plan demographic data. For the Vanguard data, we asked a Vanguard official who is responsible for the data questions about the scope of the data, how the data are collected and maintained, and whether they have any limitations. We also compared the eligibility and vesting data to the data from PSCA’s survey and our own survey of plan sponsors. For the Census Bureau data, we performed electronic testing for missing data, outliers, and obvious errors. Based on these steps, we determined that the PSCA, Vanguard, and Census Bureau data were sufficiently reliable for our purposes. 
We also interviewed government officials from the Securities and Exchange Commission, the Internal Revenue Service, and the Department of Labor, as well as retirement professionals, to determine whether they were aware of any data sources regarding the prevalence of eligibility and vesting policies and to obtain their perspectives on the policies’ prevalence. See “Selection and Categorization of Interviewees” later in this section for more information on our interviews. To determine why plans use eligibility and vesting policies, we surveyed plan sponsors and plan professionals by including a link to our survey in four industry publications (80 responded). The questions asked respondents to identify and rank the importance of multiple factors in their decision to use specific eligibility and vesting policies. To supplement our survey, we interviewed government officials, retirement professionals, and academic researchers to obtain their perspectives on why plans use eligibility and vesting policies. For additional context and perspectives, we also held two structured group interviews at a regional defined contribution plan conference with an open invitation to plan sponsors and plan service providers to discuss the reasons why plans use these policies. To examine the policies’ potential effects on retirement savings over time, we developed hypothetical scenarios to illustrate what the effect could be for an individual based on a number of assumptions. For the hypothetical projections, we made a number of assumptions regarding salary levels, employee deferrals, employer matches, and investment returns drawn from federal and industry data sources, including Bureau of Labor Statistics data, PSCA’s annual defined contribution plan sponsor survey on plan policies used, and the Social Security Trustees’ report (2015 Trustees Report, long range projections, intermediate assumptions). 
(See Appendix II for a detailed explanation of the assumptions used and additional tables providing more projections for comparison as well as inflation-adjusted values.) We also interviewed officials from the Department of Labor, Treasury, the Internal Revenue Service, and the Securities and Exchange Commission, as well as a total of 21 retirement professionals and academic researchers to discuss what is known about these policies’ effects on savings over time. To assess participants’ understanding of eligibility and vesting policies, we used data from the National Association of Retirement Plan Participants’ (NARPP) Participant Survey: Study of Financial Empowerment Literacy and Trust (FELT survey) to identify participants’ understanding of financial terms, and generalizable data from a Defined Contribution Plan 2015 Study of Participant Satisfaction and Loyalty (DCP) to identify the decisions that participants would make about their employment based on their understanding of their companies’ 401(k) policies. NARPP’s FELT survey covers approximately 5,000 current 401(k) and 403(b) participants yearly and was administered in April 2015. To assist us with our work, NARPP agreed to add the terms “eligibility requirement” and “vesting period” to the list of terms already included in their financial literacy survey. The DCP surveys an internet panel of over 5,000 respondents and was administered during the month of April 2015. NARPP provided the surveys’ summary responses to us for our analysis. To assess the reliability of the data from the DCP and FELT surveys, we interviewed an official from Boston Research Technologies who is responsible for managing the two surveys. We also independently conducted manual tests of the DCP survey data and compared selected survey responses to similar data from other samples of defined contribution plan participants. We found the data to be sufficiently reliable for our purposes.
We did not perform separate tests on the FELT survey, which uses the same sampling frame and procedures as the DCP survey. We further assessed participants’ understanding of eligibility and vesting policies by administering a questionnaire to individuals testing their knowledge of the eligibility and vesting policies used by their 401(k) plans and comparing their answers to the actual policies in their plans’ summary plan descriptions (SPD), summaries of material modification, and exchanges with plan administrators. This allowed us to determine the accuracy of their knowledge rather than rely on their self-reported knowledge. We analyzed the accuracy of the 46 responses by reviewing plan documents provided by the sponsors or their service providers and emailed the sponsors directly to confirm that we understood the plan policies correctly, as necessary. See Survey Data Sources below for details on the survey design and implementation. To further assess the clarity of plans’ eligibility and vesting policies, we reviewed the SPDs of five companies, including those we surveyed regarding participant understanding. One SPD was provided by a third-party plan administrator whom we interviewed. Additionally, we conducted online searches for relevant literature and asked interviewees to advise us regarding relevant studies and papers. We then assessed the relevance of the studies to our research questions to include those findings, as appropriate. Also, to understand what sources of information on eligibility and vesting policies participants typically have, we held two discussion groups with plan sponsors and plan advisors and asked them to identify the methods plans use to communicate their policies. Results of our review of the SPDs and discussion groups are not generalizable, but provide additional context and perspective.
Finally, we interviewed retirement professionals and academic researchers, as discussed below, to get their perspectives on participants’ understanding of eligibility and vesting policies and to invite their suggestions on how this information can be more effectively presented to improve participant understanding. For example, we spoke to several financial literacy experts who made observations about financial decision making more generally and what that could tell us about eligibility and vesting policies specifically. Design and implementation. We developed a web-based questionnaire for plan sponsors and plan professionals to collect information on eligibility and vesting policies. The questionnaire included questions on the types of eligibility and vesting policies plans use and the reasons they use these policies. In addition, the questionnaire included questions on the timing of employer contributions, including whether plan participants have to be employed on the last day of the year to receive employer contributions and the schedule for making employer contributions. Throughout the questionnaire, our questions focused on employer contributions and did not distinguish between matching and non-matching contributions. To inform our assessment of participants’ understanding of eligibility and vesting policies, the questionnaire also included questions regarding respondents’ views on the degree to which participants understand these policies and how they are informed of the policies. To minimize errors arising from differences in how questions might be interpreted and to reduce variability in responses that should be qualitatively the same, we conducted pretests with three individuals external to GAO. The individuals included: an academic researcher, a retirement professional, and a benefits coordinator for a manufacturing company.
Further, two GAO staff with expertise in the retirement area and two staff with expertise in survey design reviewed the survey for content and consistency. Based on feedback from these pretests, we revised the questionnaire in order to improve question clarity. For instance, in response to the benefits coordinator’s suggestion to clarify the language used in a question focused on whether certain types of employees are excluded from the plan, we modified the question to clarify that we were asking whether employees in certain job classifications are excluded from the plan. After completing the pretests, we administered the survey. Starting in May 2015, we asked three industry groups to announce, through their publications, an invitation for plan sponsors to complete our survey. These groups included a link to our survey in their publications. Those publications were: an email newsletter published by PLANSPONSOR, Plan Sponsor Council of America’s annual survey of plans, and Pensions & Investments’ Plan Sponsor Digest and Pensions & Investments Daily, which are two publications directed at plan sponsors. We also included a link to the survey in an online forum for American Society of Pension Professionals and Actuaries (ASPPA) members who are owners or senior managers of plan administration firms. We received responses through August 31, 2015. We received 80 completed surveys. We cannot report a response rate as it is possible that respondents submitted multiple surveys or individuals responded who were not plan sponsors. Sponsors and plan professionals could respond anonymously, and some respondents did not provide contact information. Analysis of responses and data quality. We used standard descriptive statistics to analyze responses to the questionnaire. Because this was not a sample survey, there are no sampling errors.
To minimize other types of errors, commonly referred to as non-sampling errors, and to enhance data quality, we employed recognized survey design practices in the development of the questionnaire and in the collection, processing, and analysis of the survey data. For instance, as previously mentioned, we pretested and reviewed the questionnaire with individuals internal and external to GAO to enhance the clarity of our questions, which minimizes the likelihood of errors arising from differences in how questions might be interpreted and helps to reduce the variability in responses that should be qualitatively the same. To help reduce nonresponse, another source of non-sampling error, we asked the industry groups that publicized the survey to re-publicize it to further encourage respondents to complete the survey. In reviewing the survey data, we performed automated checks to identify inappropriate answers. We further reviewed the data for missing or ambiguous responses and followed up with respondents when necessary to clarify their responses. On the basis of our application of recognized survey design practices and follow-up procedures, we determined that the data were of sufficient quality for our purposes. Design and implementation. We developed a web-based questionnaire directed at plan participants to collect information on their understanding of their companies’ 401(k) eligibility and vesting policies. The questionnaire included questions about the individual’s own eligibility and vesting status. In addition, the questionnaire included questions on the timing of employer contributions, including whether plan participants have to be employed on the last day of the year to receive employer contributions. The participants’ responses and our analysis of their accuracy are not generalizable. 
To minimize errors arising from differences in how questions might be interpreted and to reduce variability in responses that should be qualitatively the same, we conducted pretests with two subject matter experts and with three plan participants from external companies. We also reviewed the survey with internal survey experts. Based on feedback from these pretests, we revised the questionnaire to improve question clarity. After completing the pretests, we administered the survey using an online platform. To identify participants, we invited every plan sponsor who had provided contact information when completing our plan sponsor survey to distribute the participant questionnaire to their plan participants. Four plans agreed to do so. We sent a link to the survey to the participating sponsors and asked them to disseminate it and invite their plan participants to complete the questionnaire. Starting on December 8, 2015, we made the survey available for the participants to complete. While we asked the plans to send periodic reminders, we did not have direct access to the participant groups and we were not included on those communications. We did not select the participants in any way or have input into who responded to the survey. The survey closed on January 27, 2016. We received 50 completed questionnaires and analyzed 46 of them. In the process of comparing the survey responses to the plan descriptions, we manually reviewed each survey response for outliers or errors in skip patterns. We excluded two surveys because the respondents said they were eligible but had no 401(k) account, so they were not asked any further substantive questions. Two others were excluded because they began employment before the earliest date for which we had a record of plan policies, so we could not analyze the accuracy of their responses.
Some respondents did not click the “completed” button at the end of the survey and were not included in our analysis. We cannot report a response rate as it is possible that respondents submitted multiple surveys. In addition, most respondents did not provide contact information. Evaluating the accuracy of participant responses was the purpose of this survey, so our findings in that regard are an assessment of data quality. Analysis of participant responses and data quality. To describe what is known about participants’ understanding of their companies’ 401(k) plan eligibility and vesting requirements, we reviewed participant responses at the four companies we surveyed. We compared the employee responses regarding the eligibility and vesting requirements that applied to them when they began working to information in the respective summary plan description (SPD), and obtained clarification from plan officials, as needed. Plans may issue new plan documents and change policies over time, so to test individuals’ knowledge of the eligibility policies that affected them when they were hired, we used the policies in place at that time rather than the eligibility policies in place most recently, when they differed. For example, to determine the employee’s eligibility to join the plan, we identified the employee’s start date and determined if there were any factors that would have prevented them from immediately enrolling in the plan. In cases where we did not have the plan document for the year the participant was hired, we reviewed the plan document published before and after their hire date and reviewed summaries of material modifications. If the information about the plan policies in the two plan documents from before and after their date of hire were the same, we used it to determine the accuracy of the participant’s answer. 
However, when we could not determine the plan policy for the relevant time period, we excluded the participant response from our analysis of that question. For policy questions regarding current plan policies (such as the frequency of employer contributions and whether the participant is required to work on a particular day to receive employer contributions), we used the current plan policy to determine the accuracy of responses. We reviewed the plan details to identify information about the vesting schedule. Three of the four companies we surveyed currently offer automatic enrollment to eligible workers. Automatic enrollment is a plan feature by which eligible workers are enrolled in the plan by default and can opt out if they do not wish to participate in the plan. Administered by the Census Bureau, the Survey of Income and Program Participation (SIPP) is a household-based survey designed as a continuous series of national panels. The Census Bureau uses a two-stage stratified design to produce a nationally representative panel of respondents who are interviewed over a period of approximately 3 to 4 years. Within a SIPP panel, the entire sample is interviewed at various intervals called waves (from 1983 through 2013, generally 4-month intervals). In addition to income and public program participation, the SIPP includes data on other factors of economic well-being, demographics, and household characteristics. We used data from the most recent relevant data set, the 2008 SIPP. PSCA's survey, which includes data for plan year 2013, covers a total of 613 profit sharing, 401(k), and combination 401(k)/profit sharing plans. Only 2 percent of the plans included in the survey are profit sharing plans; we therefore determined that the data are sufficiently representative of the 401(k) plan experience. PSCA, established in 1947, is a national, non-profit trade association of 1,200 companies and over six million plan participants. PSCA has conducted an annual survey of plans for nearly 60 years.
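The document-fallback logic described above for determining which plan policy applied at a participant's hire date (use the plan document for the hire year; otherwise use the documents from before and after the hire date if they agree; otherwise exclude the response) can be sketched as follows. This is only an illustrative sketch; the function and data structure are our own, not GAO's actual analysis procedure:

```python
def policy_at_hire(plan_docs, hire_year):
    """Determine the plan policy in effect at a participant's hire date.

    plan_docs maps a plan-document year to the policy it states.
    Returns the applicable policy, or None when the policy cannot be
    determined (the participant's response would then be excluded).
    """
    # Preferred case: a plan document exists for the hire year itself.
    if hire_year in plan_docs:
        return plan_docs[hire_year]
    # Fallback: compare the nearest documents before and after hire.
    before = [y for y in plan_docs if y < hire_year]
    after = [y for y in plan_docs if y > hire_year]
    if before and after:
        prior, later = plan_docs[max(before)], plan_docs[min(after)]
        if prior == later:  # policy unchanged across the hire date
            return prior
    return None  # policy for the relevant period cannot be determined

# Example: documents from 2010 and 2014 both show a 3-year vesting
# schedule, so a 2012 hire can be evaluated against that policy.
print(policy_at_hire({2010: "3-year cliff", 2014: "3-year cliff"}, 2012))
```

The same function returns None when the surrounding documents disagree, matching the methodology's rule of excluding responses whose applicable policy is unknown.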
As part of our approach for obtaining information on why plans use eligibility and vesting policies and participant understanding of these policies, we interviewed government officials and a total of 21 retirement professionals and academic researchers. We identified interviewees based on our prior work examining 401(k) plans and recommendations from initial interviewees. We selected interviewees who reflect a range of perspectives, from those with a focus on plan participants to those with a focus on plan sponsors. We selected federal government officials with a role in overseeing eligibility and vesting policies: officials of the Department of Labor, the Department of the Treasury, and the Internal Revenue Service. We also interviewed an official from the Securities and Exchange Commission, an agency that has a role in regulating the investment options into which plan participants direct their contributions. We categorized interviewees as retirement professionals if they provide retirement plan-related services (such as those who serve as consultants to plans), represent the interests of retirement plans or plan participants, or otherwise perform work in the retirement area. We categorized interviewees as academic researchers if they teach at an institution of higher education and focus on conducting scholarly research relevant to the retirement area. The views of those interviewed are not generalizable. Employee salary and retirement age. We assume that the worker starts working at age 18 in 2016, and is continuously employed through retirement at age 67 in 2065. We modeled lifetime earnings using medium scaled earnings factors developed by the Social Security Administration’s Office of the Chief Actuary (OCACT) (see table 8). These factors express hypothetical earnings at each age as a percent of the Social Security Administration’s national average wage index. 
The scaled factors are based on average work and earnings of actual insured workers over their careers. This approach has the advantage of reflecting actual earnings histories, with steeper wage growth in early- and mid-career years and flatter wage growth in late-career years. Under these factors, nominal wages increase an average of 5.6 percent per year. However, this approach does not reflect the possibility that less-skilled workers and lower earners may have flatter wage growth over their lifetimes than higher-skilled workers. For this reason, we ran an alternative scenario for low earners featuring a constant nominal wage growth of 4.6 percent (see table 9). This alternative scenario demonstrates that the loss of early savings has a bigger effect on total savings at retirement for workers with flatter earnings growth than for workers with steeper earnings growth. We assume retirement in 2065 at age 67 because that is the Social Security full retirement age for workers starting their careers in 2016. Inflation. To report adjusted salaries and other figures, we indexed to 2016 dollars using the Consumer Price Index for Urban Wage Earners and Clerical Workers (CPI-W) projected under intermediate assumptions from the 2015 Old-Age, Survivors, and Disability Insurance (OASDI) Trustees' Report. Employee deferrals. We assume that the individual in our hypothetical projections makes contributions throughout their continuous employment from age 18 to 66, except where contributions are suspended to illustrate the effect of an eligibility policy. PSCA survey data from the 2013 plan year show that the average pre-tax salary deferral by participants was 5.3 percent for non-highly compensated employees. We referred to data from the 2013 plan year in PSCA's report summarizing results of its annual survey of 401(k) and profit sharing plans. Plans reported on policies for the 2013 plan year.
The survey includes 613 defined contribution plans with 8 million participants representing a wide range of industries and plan sizes. The plans surveyed are 252 401(k) plans, 13 profit sharing plans, and 348 combination 401(k)/profit sharing plans. As noted in Appendix I, the PSCA population for 2013 includes a greater proportion of large plans based on participant numbers and assets and disproportionately represents the financial, insurance, and real estate industries when compared to the total population of plans, as measured by 2013 Form 5500 filings. The finance/insurance/real estate industry was the predominant industry reflected in the PSCA data (36 percent of plans were in this industry), while over half of the plans included in the Form 5500 filings (53 percent) represented the services industry. Employer matching contributions. We assume that the individual in our hypothetical projections receives employer contributions through their continuous employment from age 18 to 66, except where employer contributions are delayed or forfeited to illustrate the effect of an eligibility or vesting policy. We again referred to the PSCA survey data from the 2013 plan year. The average employer contribution among 401(k) plans is 2.90 percent, which we rounded to 3 percent for our hypothetical calculations. The most commonly used match formula is 50 percent of employee deferrals up to 6 percent of salary (a maximum match of 3 percent of salary), used by 26.2 percent of plans. The next most common levels of employer contribution are a 100 percent match up to 4 percent of employee pay and a 100 percent match up to 5 percent, used by 10.9 percent and 9.9 percent of plans, respectively. The hypothetical projected amounts of retirement savings foregone because of delayed eligibility or forfeited because of unmet vesting requirements could be even larger if one assumed a higher, though not uncommon, level of employer matching contributions.
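The match formulas above imply a simple capped calculation. As an illustrative sketch (the function name, default parameters, and example salary are our own assumptions, not from the PSCA survey), the most common formula of 50 percent of deferrals up to 6 percent of salary works like this:

```python
def employer_match(salary, deferral_rate, match_rate=0.50, match_cap=0.06):
    """Employer matching contribution: match_rate applied to employee
    deferrals, counting deferrals only up to match_cap of salary
    (e.g., 50% of deferrals up to 6% of pay)."""
    matched_deferral = min(deferral_rate, match_cap) * salary
    return match_rate * matched_deferral

# A worker earning $50,000 who defers 8% of pay is matched on only the
# first 6%: 0.50 * 0.06 * 50,000 = $1,500, i.e., 3% of salary.
print(employer_match(50_000, 0.08))  # 1500.0
```

Changing the parameters expresses the other common formulas, such as a 100 percent match up to 4 percent of pay (`match_rate=1.0, match_cap=0.04`).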
We also assume that employer contributions are made on a per-pay-period basis, unless calculated otherwise for comparison. Returns. To set an annual return for each of the years of the worker's career for our hypothetical scenarios, we formulated a composite return based on Social Security Trustees' projections. For the fixed income portion of the return, we used the Social Security Trustees' projected annual trust fund real interest rate of 2.90 percent and the projected Consumer Price Index for Urban Wage Earners and Clerical Workers (CPI-W) of 2.70 percent, both published in the intermediate long-range economic assumptions of the 2015 OASDI Trustees' Report, for a nominal interest rate of 5.6 percent. For the equity portion of the return, we added an estimated long-term equity risk premium of 3.5 percentage points to the annual trust fund nominal interest rate, for a nominal rate of return on stocks of 9.1 percent. We changed the equity-to-bond ratio slightly each year to reduce equities exposure and investment risk as the worker approaches retirement. The ratio corresponds to a "100 minus age" rule, which means that the percentage of assets invested in equities is set at 100 minus the worker's current age. For example, when the worker is 50 years old, we assume that just half of their assets are invested in equities and assign the nominal return on stocks to just 50 percent of their retirement savings, while the remaining portion is calculated to earn the nominal interest rate on bonds. The "100 minus age" rule for portfolio diversification is relatively conservative, thus the projected value at retirement that we report is less than it would be if we had assumed a rule using a higher equities allocation. For example, another rule used for portfolio diversification is "120 minus age," which would mean a higher equities allocation and higher projected returns, because over time equities tend to produce a higher average annual return than corporate bonds.
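The composite return under the "100 minus age" rule can be sketched as a weighted average of the two nominal rates stated above (9.1 percent for equities, 5.6 percent for bonds). The function below is our own illustrative sketch of that blending, not GAO's projection code:

```python
STOCK_RETURN = 0.091  # nominal equity return: 5.6% bond rate + 3.5-point premium
BOND_RETURN = 0.056   # nominal trust fund interest rate

def blended_return(age):
    """Annual portfolio return under the '100 minus age' rule:
    (100 - age)% of assets in equities, the remainder in bonds."""
    equity_share = (100 - age) / 100
    return equity_share * STOCK_RETURN + (1 - equity_share) * BOND_RETURN

# At age 50, half the portfolio is in equities:
# 0.5 * 0.091 + 0.5 * 0.056 = 0.0735 (7.35% nominal)
print(round(blended_return(50), 4))  # 0.0735
```

Substituting `120 - age` for `100 - age` in `equity_share` would reproduce the higher-allocation alternative the report mentions.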
A lower return assumption would result in lower projected savings lost or forfeited from the policies discussed in this report. Leakage and fees. We assumed no leakage from the worker's account over time and did not apply any plan fees to the account balance. Both of these factors could decrease the rate of the account balance's growth over time. In addition to the contact named above, Tamara Cross (Assistant Director), Angie Jacobs (Analyst in Charge), Sherwin Chapman, Katherine D. Morris, Rhiannon Patterson, and Stacy Spence made significant contributions to this report. Additional support was provided by Jessica Artis, Deborah Bland, Julianne Cutts, Laura Hoffrey, Saida Hussain, Gene Kuehneman, Jill Lacey, Sheila McCoy, Mimi Nguyen, Dae Park, Joe Silvestri, Frank Todisco, Walter Vance, Kate Van Gelder, Adam Wendel, Jill Yost, and Chris Zbrozek.
Retirement Security: Federal Action Could Help State Efforts to Expand Private Sector Coverage. GAO-15-556. Washington, D.C.: September 10, 2015.
Retirement Security: Most Households Approaching Retirement Have Low Savings. GAO-15-419. Washington, D.C.: May 12, 2015.
401(k) Plans: Greater Protections Needed for Forced Transfers and Inactive Accounts. GAO-15-73. Washington, D.C.: November 21, 2014.
Private Pensions: Pension Tax Incentives Update. GAO-14-334R. Washington, D.C.: March 20, 2014.
Retirement Security: Women Still Face Challenges. GAO-12-699. Washington, D.C.: July 19, 2012.
401(K) Plans: Increased Educational Outreach and Broader Oversight May Help Reduce Plan Fees. GAO-12-325. Washington, D.C.: April 24, 2012.
Private Pensions: Some Key Features Lead to an Uneven Distribution of Benefits. GAO-11-333. Washington, D.C.: March 30, 2011.
Private Pensions: Low Defined Contribution Plan Savings May Pose Challenges to Retirement Security, Especially for Many Low-Income Workers. GAO-08-8. Washington, D.C.: November 29, 2007.
Private Pensions: Changes Needed to Provide 401(k) Plan Participants and the Department of Labor Better Information on Fees. GAO-07-21. Washington, D.C.: November 16, 2006.
The Employee Retirement Income Security Act of 1974 (ERISA) allows sponsors to opt to set up 401(k) plans—which are the predominant type of plan offered by many employers to promote workers' retirement savings—and to set eligibility and vesting policies for the plans. GAO was asked to examine 401(k) plans' use of these policies. Among other objectives, this report examines 1) what is known about the prevalence of these policies and why plans use them, and 2) the potential effects of these policies on workers' retirement savings. GAO conducted a nongeneralizable survey of 80 plan sponsors and plan professionals regarding plans' use of eligibility and vesting policies and the reasons for using them; reviewed industry data on plans' use of eligibility and vesting policies; and projected potential effects on retirement savings based on hypothetical scenarios. GAO also interviewed federal officials and 21 retirement professionals and academic researchers. GAO's nongeneralizable survey of 80 401(k) plans ranging in size from fewer than 100 participants to more than 5,000 and its review of industry data found that many plans have policies that affect workers' ability to (1) save in plans (eligibility policies), (2) receive employer contributions, and (3) keep those employer contributions if they leave their job (vesting policies). Thirty-three of 80 plans surveyed had policies that did not allow workers younger than age 21 to participate in the plan. In addition, 19 plans required participants to be employed on the last day of the year to receive any employer contribution for that year. Fifty-seven plans had vesting policies requiring employees to work for a certain period of time before employer contributions to their accounts are vested. Plan sponsors and plan professionals GAO surveyed identified lowering costs and reducing employee turnover as the primary reasons that plans use these policies. ERISA allows plan sponsors to set eligibility and vesting policies.
Specifically, federal law permits 401(k) plan sponsors to require that workers be at least age 21 to be eligible to join the plan. The law also permits plans to use rules affecting 401(k) plan participants' receipt of employer contributions and the vesting of contributions already received. However, over time workers have come to rely less on traditional pensions and more on their 401(k) plan savings for retirement security. Further, while the rules were designed, in part, to help sponsors provide profit sharing contributions, today 401(k) plan sponsors are more likely to provide matching contributions and today's workers may be more likely to change jobs frequently. GAO's projections for hypothetical scenarios suggest that these policies could potentially reduce workers' retirement savings. For example, assuming a minimum age policy of 21, GAO projections estimate that a medium-level earner who does not save in a plan or receive a 3 percent employer matching contribution from age 18 to 20 could have $134,456 less savings by their retirement at age 67 ($36,422 in 2016 dollars). Saving early for retirement is consistent with Department of Labor guidance as well as previous legislation and allows workers to benefit from compound interest, which can grow their savings over decades. In addition, the law permits plans to require that participants be employed on the last day of the year to receive employer contributions each year, which could reduce savings for today's mobile workforce. For example, GAO's projections suggest that if a medium-level earner did not meet a last day policy when leaving a job at age 30, the employer's 3 percent matching contribution not received for that year could have been worth $29,297 by the worker's retirement at age 67 ($8,150 in 2016 dollars). GAO's projections suggest that vesting policies may also potentially reduce retirement savings.
For example, if a worker leaves two jobs after 2 years, at ages 20 and 40, where the plan requires 3 years for full vesting, the employer contributions forfeited could be worth $81,743 at retirement ($22,143 in 2016 dollars). The Department of the Treasury (Treasury) is responsible for evaluating and developing proposals for legislative changes for 401(k) plan policies, but has not recently done so for vesting policies. Vesting caps for employer matching contributions in 401(k) plans are 15 years old. A reevaluation of these caps would help to assess whether they unduly reduce the retirement savings of today's mobile workers. GAO suggests Congress consider a number of changes to ERISA, including changes to the minimum age for plan eligibility and plans' use of a last-day policy. GAO is also making two recommendations, including that Treasury reevaluate existing vesting policies to assess if current policies are appropriate for today's mobile workforce. Treasury had no comment on the recommendation. GAO believes that such an evaluation would be beneficial, given the potential for vesting policies to reduce retirement savings.
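The mechanics behind projections like these can be sketched with a compound-growth calculation. The sketch below is a simplified illustration only: it assumes a flat 7 percent nominal return and round numbers rather than GAO's actual salary path, scaled earnings factors, and age-varying blended returns, so it does not reproduce the dollar figures above:

```python
def future_value(contribution, annual_return, years):
    """Value at retirement of a single year's contribution left to
    compound at a flat annual return for the given number of years."""
    return contribution * (1 + annual_return) ** years

# A forfeited $1,500 employer contribution at age 30 has 37 years to
# compound before retirement at 67 (assumed flat 7% nominal return):
missed = future_value(1_500, 0.07, 67 - 30)
print(round(missed, 2))
```

The same arithmetic explains why contributions missed at younger ages cost the most: each additional year of compounding multiplies the eventual loss.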
A credit rating is an assessment of the creditworthiness of an obligor as an entity or in relation to specific securities or money market instruments. SEC first used the term “Nationally Recognized Statistical Rating Organization” in 1975 to describe those rating agencies whose ratings could be relied upon to determine capital charges for different types of debt securities (securities) broker-dealers held. Since then, SEC has used the NRSRO designation in a number of regulations, and the term has been embedded in numerous federal and state laws and regulations, investment guidelines, and private contracts. As will be discussed, SEC has issued a series of proposals regarding the removal of references to credit ratings in its regulations in accordance with the Dodd-Frank Act. NRSRO credit ratings are intended to measure the likelihood of default for an issue or issuer, although some also measure variables such as the expected value of dollar losses given a default. The NRSROs describe ratings as being intended only to reflect credit risk, not other valuation factors such as liquidity or price risk. To determine an appropriate rating, analysts at rating agencies use publicly available information and market and economic data, and may hold discussions and obtain nonpublic information from the issuer. Issuers seek credit ratings for reasons such as improving the marketability or pricing of their securities or satisfying investors, lenders, or counterparties. Institutional investors, such as mutual funds, pension funds, and insurance companies, are among the largest owners of debt securities in the United States and are substantial users of credit ratings. Institutional investors may use credit ratings as one of several inputs to their internal credit assessments and investment analyses, or to identify pricing discrepancies for their trading operations. 
Broker-dealers also use ratings to recommend and sell securities to their clients or determine acceptable counterparties and collateral levels for outstanding credit exposures. CRARA established SEC oversight of credit rating agencies registered as NRSROs. Specifically, CRARA added section 15E to the Securities Exchange Act of 1934 to provide SEC with examination authority and establish a registration program for credit rating agencies seeking NRSRO designation. SEC adopted final rules for a formal registration and oversight program for NRSROs in June 2007. SEC amended several of these rules in February and December 2009 with the goal of further increasing transparency of NRSRO rating methodologies, strengthening the disclosures of ratings performance, prohibiting NRSROs from engaging in certain practices, and enhancing NRSRO record keeping. Since the implementation of CRARA, SEC has registered 10 credit rating agencies as NRSROs. One of these credit rating agencies has recently withdrawn from registration as an NRSRO. Most of the registered NRSROs operate primarily under the issuer-pays compensation model, in which the rated issuer pays the NRSRO for the rating, and three operate primarily under the subscriber-pays compensation model, in which users pay a subscription fee to the NRSRO for access to its ratings. Despite the growth in the number of NRSROs and the availability of credit ratings from NRSROs operating under a subscriber-pays model, the market remains highly concentrated. In 2011, SEC reported that the three largest NRSROs (Standard & Poor's, Moody's Investors Service, and Fitch Ratings) issued approximately 97 percent of all outstanding ratings. Furthermore, NRSROs operating under the issuer-pays model issued approximately 99 percent of the total currently outstanding NRSRO credit ratings. Economists note that the credit rating industry has exhibited a high level of concentration throughout much of its history.
SEC and others have cited the regulatory use of ratings, economies of scale, high fixed costs, and network effects (the value or utility of products or services increasing with the number of users) as factors that have created barriers to entry and led to concentration in the credit rating industry. In our 2010 report, we identified five proposed models—random selection, investor-owned credit rating agency, stand-alone, designation, and user-pays—and summarized key features of these proposed models. When we conducted our 2010 study, the level of development for each model varied and none had been implemented. Our current study identified two additional proposed models—the alternative user-pays model and the issuer and investor-pays model—and found that little additional work had been completed on the previously identified models that would provide further details about how each would function. According to some of the authors of these models, there is little incentive to continue developing these models, as the issue of alternative compensation models for NRSROs and their possible implementation appears unlikely to receive much, if any, attention from regulators or legislators. For example, these authors told us that SEC had not reached out to them to further discuss these models as part of the 939F study. However, SEC did solicit public comments about the models through a public notice in conducting its 939F study. Furthermore, SEC staff said that they held follow-up discussions with the authors of some of the models after the 2009 SEC roundtable. While these models generally are intended to address the conflict of interest in the issuer-pays model, some comment letters to SEC for its section 939F study described a number of perceived disadvantages of these models. None of the models had been implemented as of January 2012, by which time we had identified seven proposed alternative compensation models.
The following summarizes the key features of each of these proposed models. In this proposed model, a ratings clearinghouse would randomly select NRSROs to rate new issuances. The clearinghouse could be a nonprofit, a governmental agency such as SEC, or a private-public partnership that would design the criteria by which new entrants could qualify as a credit rating agency. All issuers or sponsors that wanted ratings for their issuances would request them from the clearinghouse, which would use a random number generator to assign an NRSRO registered in the relevant asset class to produce the rating. The clearinghouse would notify the NRSRO of the opportunity to rate the issuance and provide basic information on the type of issuance but not the issuer's name. Not until the NRSRO agreed to complete the rating would the clearinghouse identify the issuer and details of the issuance. If the selected NRSRO agreed to rate the issuance, the issuer would pay a fee to the clearinghouse. The issuer would also pay clearinghouse administrative costs on top of the fees required to rate the security. Upon completion of the initial and maintenance ratings, the clearinghouse would distribute the fees to the NRSRO. The clearinghouse would set the ratings fees for the NRSRO depending on the type of security issued, but the letter rating would be free of charge to the public. The proposed model incorporates a peer comparison review to create an incentive for NRSROs to produce quality ratings. As part of this review, the clearinghouse would evaluate the performance of all NRSROs on the basis of two empirical tests. For instance, if the default percentage of debt instruments rated by a given NRSRO differed from the default percentage of its peers by a set parameter, then the NRSRO would be subject to sanctions such as losing a percentage of business or rating fees. A second test would compare annual yields of identically rated debt securities from different asset classes.
Securities in different asset classes that are rated similarly should have the same yield. An NRSRO would be subject to sanctions if the yields of identically rated securities differed by a certain threshold. According to the author of this proposed model, by eliminating the linkage between the NRSRO and the issuer, this model would eliminate the conflict of interest stemming from the issuer-pays model. Furthermore, the author stated that the peer comparison review coupled with economic sanctions for poor performance would motivate the NRSROs to continually adjust their models and produce quality ratings. Under this model, sophisticated investors—referred to as "highly sophisticated institutional purchasers" in the model—would create and operate an NRSRO that would produce ratings. Issuers would have to obtain two ratings—one from the investor-owned NRSRO and the second from their choice of NRSRO. More specifically, an NRSRO could not publicly release a rating for which an issuer or sponsor paid unless the NRSRO received written notification that the issuer had paid an investor-owned NRSRO to publicly release its rating. The investor-owned NRSRO would publish its rating on or before the date on which the solicited NRSRO published its rating. Institutional investors would have to qualify as highly sophisticated institutional purchasers before forming or joining an investor-owned agency. To qualify, an institutional investor would have to demonstrate that it was large and sophisticated, managed billions of dollars in assets, and could be relied upon to represent the buy-side interest in accurately rating debt market instruments. The investor-purchasers would hold majority voting and operational control over the agency, which could be for-profit or not-for-profit. Market forces would set the agency fees, which likely would be comparable to fees currently charged by dominant NRSROs. The letter rating and the underlying research would be free to the public.
Proponents of this model believe that it would improve the rating process by changing incentive structures. They said that investor-owned agencies would introduce new competition to the industry and balance the investors' interests against issuers' interests. In this proposed model, NRSROs would be permitted only to produce credit ratings. They could interact with and advise organizations being rated, but could not charge fees for advice. Instead of receiving issuer fees, the NRSROs would be compensated through transaction fees for original issuance and secondary market transactions. The issuer or secondary-market seller would pay part of the fee, and the investor purchasing the security (in the primary or secondary market) would pay the other part. The NRSRO would be compensated over the life of the security based on these transaction fees. The letter rating would be free to the public. Proponents of this model believe that by creating a funding source beyond the influence of both issuers and investors, NRSROs would focus on producing the most accurate and timely credit analysis rather than on satisfying the desires of any other vested interest. In this proposed model, all NRSROs could opt to rate a new issuance and security holders would direct, or designate, fees to the NRSROs of their choice. When an issuer brought a security to market, it would have to provide all interested NRSROs with the information to rate the issuance and pay rating fees to a third-party administrator, which would manage the designation process. The investors that purchased the debt issuance would each designate one or several NRSROs that rated the security to receive fees, based on their perception of the research underlying the ratings. The third-party administrator would disburse the fees in accordance with the designations. After the initial rating, the issuer would continue to pay maintenance rating fees to the third-party administrator.
A final rating fee would be paid in conjunction with the retirement (or repurchase) of the security. The letter rating would be free to the public, while the research underlying it would be distributed to security holders and (at the discretion of the relevant NRSROs) to potential security holders. The authors of this proposed model said it would eliminate conflicts of interest resulting from issuers paying for ratings and increase competition by allowing all NRSROs access to the information necessary to rate any issuance. The authors also stated that this model encourages NRSROs to prepare ratings because each NRSRO that did so could profit from its ratings to the extent investors or other users found the ratings useful.

Issuers would not pay for ratings under this proposed model; rather, all users of ratings would enter into a contract with an NRSRO and pay for rating services. The proposal defines "user" as any entity that included a rated security, loan, or contract as an element of its assets or liabilities as recorded in an audited financial statement. For example, users could be holders of long or short positions in a fixed-income instrument, parties that refer to a credit rating in contractual commitments (for example, as parties to a lease), or parties to derivative products that rely on rated securities or entities. A user would have to pay for rating services supplied during each period in which it booked the related asset or liability. The proposed model relies on third-party auditors to ensure that NRSROs receive payment for their services from users of ratings. The user would have to demonstrate to the auditors that the holder of a rated instrument or contract paid for the rating services. Until auditors were satisfied that NRSROs had been properly compensated, they would not issue audit opinions. The model would require the close cooperation of the auditing community and the Public Company Accounting Oversight Board.
The authors of this model stated that, while more cumbersome, the model attempts to capture "free riders"—those users of ratings that do not compensate NRSROs for the use of their intellectual property—and requires them to pay for ratings.

The alternative user-pays model would pool creditors' resources to secure ratings before debt was issued. A government agency or independent board would administer a user-fee system financed by debt purchasers, which would fund a competitive bidding process for the selection of rating agencies. The agency or board would solicit ratings before the debt issuance and then pay for the expense and related administrative costs through the user fee. The user fee could be assessed through a flat fraction of a percentage fee on the initial purchasers of debt offerings. The user fee would allow the agency or board to finance initial ratings on a rolling basis, with the ratings for a given debt issuance being secured before the issuance of the debt. Although the fee could be assessed in many ways, the author of the model suggests a one-time fee at initial sale for administrative ease. NRSROs would bid on the right to issue ratings, with the agency or board determining how best to judge the bids and award the right to rate the issuance. For example, the agency or board could weigh factors such as price, the extent of diligence the NRSRO proposed to undertake, and the disclosures the NRSRO would demand from issuers as a condition for the rating. The author believes the bidding process would serve to contain the costs for ratings through price competition, level the playing field for smaller competitors and new entrants, and balance the desire for market-based assessments of risk with a greater role for the government agency, such as SEC, or an independent board in defining rating agencies' responsibilities. According to its author, this user-fee model creates additional accountability mechanisms.
Users of ratings would be given enforceable rights, and NRSROs would be required to assume certification and mandatory reporting duties to creditors. The system would set up creditor committees that would serve as a channel for creditors to monitor ratings and assert limited rights against NRSROs. If an NRSRO breached duties owed to the creditors, the committee would serve as the representative in any potential actions and preempt actions brought by individual creditors. The model would require that all contracts with NRSROs detail the duties owed to creditors, delineate the potential liability exposure for breach of these duties, and channel adjudication of any disputes to an SEC administrative process. For example, NRSROs could be required to certify on a quarterly basis that they exercised reasonable care in conducting due diligence of issuers' financial and nonfinancial disclosures to make accurate assessments of risk exposure. To provide NRSROs with incentives for compliance without jeopardizing their financial viability, the model would limit NRSRO financial liability to cases of gross negligence, coupled with an earnings-based cap on liability and other safeguards.

This proposed model incorporates characteristics from a number of the models described earlier and leverages an existing structure as the basis for collecting and distributing rating fees. Under the proposed issuer and investor-pays model, accredited NRSROs would be assigned to rate new issuances. Initially, all NRSROs would be placed in a continuous queue and would receive rating assignments when their respective numbers came up, unless they were unable or unwilling to rate a particular issue. In the future, ratings would be assigned based on the performance of the NRSROs, with those agencies that produced superior performance receiving more assignments.
Performance would be measured as the correlation between an NRSRO's ratings and default and recovery rates on issues rated, and tracked using a common, transparent, and defensible methodology. To help ensure rigor and fairness, at least two and possibly three NRSROs would be assigned to rate each issuance. Payments for ratings would come from a fee levied on issuers of new debt issues and on investors as parties to secondary market trades. These fees would be deposited in a dedicated fund—the U.S. Ratings Fund—and would be determined and reset periodically. The periodic review would consider the historic and projected volumes of primary issuances and of secondary market trading to determine a fee that would, in the aggregate, allow the ratings business to attract and retain qualified individuals. This fund would be modeled after the Municipal Securities Rulemaking Board (MSRB), which is authorized to collect fees on new and secondary market municipal issues to fund its activities, and would be overseen by a governing board representing issuers, investors, rating agencies, intermediaries, and independent directors. The fees collected would be used to pay the selected accredited NRSROs for issuing each solicited rating and to fund other necessary administrative activities, such as tracking NRSROs' performance and tracking deals to be rated. The authors note that these other activities could be outsourced or performed by the U.S. Ratings Fund. The Fund also would advise SEC on the eligibility and accreditation of the NRSROs. All ratings and related research reports paid for through the U.S. Ratings Fund would be freely available to the public. According to the authors, NRSROs would have incentives to provide accurate ratings and be objective because ratings would be monitored by a regulator and the accreditation of NRSROs would be subject to periodic renewal. The authors also note that legislation likely would be required to set up the new rating agency compensation model.
Specifically, the authors said that legislation would need to enumerate the functions and the governance structure of the U.S. Ratings Fund, provide its mandate and methodology for determining the fees to be charged for ratings, and elaborate on how the new rating model would be introduced.

During debate on the Dodd-Frank Act, a system similar to the random selection model was proposed through an amendment to the Securities Exchange Act of 1934. Proposed section 939D would have added a section 15E(w) to the Exchange Act, which would have required SEC to establish a Credit Rating Agency Board that was a self-regulatory organization subject to SEC's oversight. The Board would determine the NRSROs that are eligible to issue initial credit ratings for structured finance products and assign NRSROs to rate the issuances (NRSROs could decline). The method for selecting the qualified NRSROs would be based on a Board evaluation of alternatives designed to reduce the conflicts of interest under the issuer-pays model, including a lottery or rotational assignment system. Although the section 939D amendment was passed by the Senate, it was not included in the final legislation. However, the Dodd-Frank Act provides that upon completion of the section 939F study, SEC shall, as it determines is necessary or appropriate in the public interest or for the protection of investors, establish by rule a system for the assignment of NRSROs to determine the initial credit ratings and monitor the credit ratings of structured finance products in a manner that prevents the arranger from selecting the NRSRO that will determine the credit rating.
In issuing any rule, the act requires SEC to give thorough consideration to the provisions of section 15E(w) of the Exchange Act, as that provision would have been added by section 939D as passed by the Senate on May 20, 2010, and SEC must implement the system described in such section 939D unless SEC determines that an alternative system would better serve the public interest and the protection of investors. In May 2011, SEC requested that interested parties provide comments on whether any potential alternative compensation model, including four of the models we described in our 2010 report and discussed previously, would provide a reasonable alternative to the section 15E(w) model in terms of objectives and goals. SEC omitted the random selection model from its request for comment because it is similar to the section 15E(w) model. As part of this solicitation of comments, SEC requested that interested parties use the evaluative framework we developed for our 2010 report to evaluate the section 15E(w) and other alternative compensation models. The comment period ended in September 2011.

Our analysis of the comment letters that various market participants and observers submitted on SEC's section 939F study found that while some supported implementing the section 15E(w) model, others preferred enhancing existing SEC rules. Of the 30 comment letters submitted, our assessments found that 11 generally favored implementing an alternative compensation model, 13 opposed the implementation of an alternative compensation model, and 5 did not comment on the need for an alternative compensation model. Sixteen comment letters either supported or made suggestions for improving existing SEC rules. None of the comment letters supported any of the other alternative compensation models described by SEC in its request for comment. Some comment letters addressed the alternative models individually, and all were critical of these alternatives.
Only the section 15E(w) model received specific support from those that supported the implementation of an alternative compensation model. Generally, these letters highlighted the need to address the conflict of interest inherent in the issuer-pays model. For example, one commenter stated that an assignment system—such as the one proposed in the model—best serves the public interest by increasing competition to allow for new NRSRO participants. The author of another comment letter stated that on balance he favored a system—such as the one proposed in the 15E(w) model—that would separate issuer payment for ratings on structured finance products from issuer selection of NRSROs. Those opposed to the implementation of an alternative compensation model, including the section 15E(w) model, cited concerns such as replacing one set of conflicts of interest with another and raised issues about the cost of implementation. According to a few comment letters, each of the proposed models presents its own unique set of issues and often substitutes one type of conflict of interest for another. For example, one comment letter stated that each compensation model has unavoidable conflicts of interest and that none of the alternatives presented by SEC would offer practical or effective solutions to the risks of potential conflicts engendered by the issuer-pays model. Another comment letter cited specific conflicts various market participants may have, concluding that changing “who pays” the credit rating agency will not eliminate the potential for conflicts: it will only shift the conflicts from one set of interested parties to another. Comment letters also stated that some of the models would create large costs. For example, one comment letter stated that the selection board created in the section 15E(w) model would need to employ a significant staff with highly specialized skills to credibly carry out its responsibilities. 
Another letter described the extensive amount of infrastructure that would be needed to assess fees on each trade, such as those required by the stand-alone model.

Rule 17g-5 requires an NRSRO hired to determine initial credit ratings for structured finance products to maintain a password-protected Internet website containing a list of each such structured finance product for which it currently is in the process of determining an initial credit rating. The rule is designed to make it more difficult for arrangers to exert influence over the NRSRO they hire because any inappropriate rating could be exposed to the market through the unsolicited ratings issued by NRSROs not hired to rate the structured finance product. However, the rule limits the number of times an NRSRO can access the information without having to produce its own credit ratings. An NRSRO that accesses information 10 or more times during the calendar year must produce a credit rating for at least 10 percent of the issues for which it accessed information. See 17 C.F.R. § 240.17g-5(a)(3). Some comment letters suggested that the information provided to the selected NRSRO be made more broadly available, particularly to investors.

The Dodd-Frank Act requires SEC to take a number of actions regarding its oversight of NRSROs, including issuing a number of rulemakings, establishing an Office of Credit Ratings, and studying, among other things, the feasibility of an assignment system for the ratings of structured finance products and alternative means for compensating NRSROs. As part of this study, SEC has solicited comment on its authority to implement the alternative compensation models. Since the enactment of the Dodd-Frank Act in July 2010, SEC has taken a number of steps to implement the parts of the act pertaining to NRSROs. Of the 15 requirements for SEC contained in Title IX of the Dodd-Frank Act, 9 require SEC to issue rules.
As of January 2012, SEC has adopted three final rules that implement all or part of certain requirements and proposed rules to implement the remaining requirements. Specifically, SEC has adopted rules removing the exemption for NRSROs from the Fair Disclosure Rule; requiring NRSROs to include a report accompanying a credit rating for an asset-backed security describing representations, warranties, and enforcement mechanisms available to investors; and removing references to NRSRO credit ratings from certain securities registration requirements. SEC has also proposed a number of amendments to existing rules or new rules to implement the remainder of the Dodd-Frank Act requirements applicable to NRSROs. Table 1 provides a summary of these proposals. In addition to its work on NRSRO oversight rules, SEC continues to work on other Dodd-Frank Act requirements related to NRSROs. These requirements include completing four studies on various aspects of the credit rating industry. As of January 2012, SEC has completed one of the four studies and issued a report which provided a summary of SEC regulations requiring the use of an assessment of the creditworthiness of a security or money market instrument and any references to or requirements in such regulations regarding credit ratings. According to SEC’s website, SEC plans to complete two of the remaining three studies by July 2012; the completion date for the other study has yet to be determined. Finally, the Dodd-Frank Act requires SEC to establish an Office of Credit Ratings and complete annual examinations of each NRSRO. Once established, this office will be responsible for administering the rules of SEC in certain areas, promoting accuracy in credit ratings, and conducting annual examinations of each NRSRO. 
Although the office has yet to be established, NRSRO examination staff from SEC's Office of Compliance Inspections and Examinations (OCIE), staff from OCIE's investment adviser/investment company and broker-dealer examination groups, and NRSRO specialists from SEC's Division of Trading and Markets recently completed the first cycle of annual examinations of each NRSRO as required by the Dodd-Frank Act. SEC made public a staff report summarizing the examinations in September 2011. According to this report, limited SEC resources required that this year's examinations focus on reviewing the areas mandated by section 15E(p)(3)(B)—specifically, whether the NRSRO conducts business in accordance with its policies, procedures, and rating methodologies; management of conflicts of interest; implementation of ethics policies; internal supervisory controls; governance; activities of the designated compliance officer; processing of complaints; and the NRSRO's policies governing the post-employment activities of former staff of the NRSRO. The report summarized the examination staff's notable observations and concerns and the recommendations the staff made to each NRSRO about these observations and concerns related to the following required review areas: conducting business in accordance with policies, procedures, and methodologies; management of conflicts of interest; internal supervisory controls; and designated compliance officer activities. The report notes that, as of the date of the report, SEC had not determined that any finding constituted a "material regulatory deficiency," but noted that the Commission may do so in the future. The staff also observed that NRSROs appear to be trending even more toward employing the issuer-pays business model, noting that two of the subscriber-pays NRSROs recently have taken steps to focus more on issuer-pays business, particularly with respect to ratings of asset-backed securities.
Section 939F of the Dodd-Frank Act requires that SEC conduct a study that addresses, among other things, the feasibility of establishing a system in which a public or private utility or an SRO assigns NRSROs to determine the credit ratings of structured finance products. SEC's report on the study, due by July 21, 2012, must include any recommendations for regulatory or statutory changes that SEC determines should be made to implement the findings of the study. As part of this study, SEC solicited comment on its authority to implement various alternative compensation models. According to SEC staff, the staff is reviewing the comment letters received and is evaluating authority issues. Section 939F requires that, after submission of the report to Congress resulting from the study, SEC shall, by rule, as SEC determines is necessary or appropriate in the public interest or for the protection of investors, establish a system for the assignment of NRSROs to determine the initial credit ratings of structured finance products, in a manner that prevents the issuer, sponsor, or underwriter of the structured finance product from selecting the NRSRO that will determine the initial credit ratings and monitor such credit ratings. In issuing any rule, SEC is required to give thorough consideration to the provisions of section 15E(w) of the Securities Exchange Act of 1934, as that provision would have been added by section 939D of H.R. 4173 (111th Congress), as passed by the Senate on May 20, 2010, and shall implement the system described in such section 939D unless the Commission determines that an alternative system would better serve the public interest and the protection of investors. The need for any statutory changes likely will depend on the system that SEC decides to implement.
Therefore, obtaining the most complete information available on the models, such as by consulting the models' authors, will be important for SEC to fully assess each model, reach its decision, and make any recommendations for statutory changes it determines should be made to implement its findings. As part of its request for public comment in connection with its study, SEC requested comment on whether the securities laws provide SEC with authority to implement the 15E(w) system and the four other models it outlined. In particular, SEC asked for comment on whether, in terms of legal feasibility, the role of SEC in overseeing the Credit Rating Agency Board raised legal issues. Few comment letters directly addressed SEC's questions concerning its authority to implement one or any of the alternative compensation models outlined in its request for comment. However, a few of the comment letters discussed potential legal questions surrounding the implementation of or rulemaking for specific aspects of certain models, or general constitutional questions. For instance, two commenters argued that in their view, NRSRO ratings legally are viewed as opinions and may not be subject to being proven true or false. Thus, any system that uses the accuracy of ratings as a criterion to determine which NRSROs would be eligible to rate certain categories of securities—such as the 15E(w) system—may face legal challenges.
One comment letter also argued that any system aimed at defining "quality" ratings could run afoul of section 15E(c)(2) of the Exchange Act, which provides that SEC may not "regulate the substance of credit ratings or the procedures and methodologies by which any NRSRO determines credit ratings." The letter states that any decision by SEC that an NRSRO's ratings (and, by extension, the criteria and methodologies by which those ratings were formed) lack "quality" and therefore must be changed to maintain participation in the proposed system could well violate this provision. In addition, three comment letters raised constitutional questions. Two comment letters questioned how certain of the alternative models might affect an NRSRO's right to form and publish opinions under the First Amendment. Another questioned the ability of the government to force one private party to deal with another private party of the government's choosing in a private business transaction, which the commenter argued would occur if SEC implemented the section 15E(w) model.

The model authors also hold varying opinions on the extent to which statutory changes would be necessary to implement their alternative compensation model. For example, in their paper introducing the model and in our discussions with them, the authors of the proposed issuer and investor-pays model said that they anticipate legislation likely would be required to implement their proposed model. Specifically, the authors of the model stated that legislation likely would be necessary to establish the self-regulatory organization and provide it with the authority to create a fund to collect fees and impose data collection requirements on issuances, as well as to set out the governance structure of the fund, the methodology for determining fees, an initial rotating system of assignments of issues to be rated, and broad parameters for incentive compensation.
Alternatively, the author of the proposed investor-owned credit rating agency model we interviewed believes that current law, even before the Dodd-Frank Act was passed, provides SEC with the authority to implement the model as a means of managing the conflicts of interest generated by the issuer-pays model. Specifically, the author points to section 15E(h)(2) of the Exchange Act, which grants SEC the authority to issue rules to prohibit, or require the management and disclosure of, any conflicts of interest relating to the issuance of credit ratings by the NRSRO. This includes the authority to issue rules relating to the manner in which an NRSRO is compensated by the issuer for issuing credit ratings. The author also stated that section 15E(i)(1), which provides that SEC shall issue rules to prohibit any act or practice relating to the issuance of credit ratings by an NRSRO that SEC determines to be unfair, coercive, or abusive, provides additional statutory authority for the implementation of the investor-owned credit rating agency model. As previously discussed, SEC has not spoken to the authors of the proposed models to solicit additional details about their models—information that could help inform SEC's analysis of the alternative compensation models and its report to Congress containing any recommendations for regulatory or statutory changes that it determines should be made to implement the findings of its study.

In recent years, academic researchers and industry experts have begun to develop a number of alternative compensation models for credit rating agencies in response to concerns about conflicts of interest, ratings integrity, and competition. As of January 2012, none of these models has been fully developed, and given that NRSROs continue to primarily use the issuer-pays, and to a lesser extent, the subscriber-pays models, the use of any alternative model or models would likely have to be at the direction of SEC or Congress.
As directed by section 939F of the Dodd-Frank Act, SEC is currently studying, among other things, alternative means for compensating NRSROs that would create incentives for accurate credit ratings. As part of its study, SEC solicited public comment on various alternative compensation models and whether it has sufficient authority to implement these models. Few of the comment letters SEC received specifically addressed the alternative models or SEC's authority to implement them, and only one of the model authors submitted a comment letter to SEC. Currently, the staff is reviewing the comment letters received and evaluating authority issues; however, the extent to which SEC's existing authorities would allow it to implement any of the alternative models by rule largely will depend on the system selected. As part of its 939F study, SEC has not met with the authors of the various alternative compensation models to discuss the models in greater detail. Doing so could help ensure that SEC has thoroughly explored all of the available options in sufficient detail to adequately consider them. Without consulting the authors to gain a comprehensive understanding of the proposed models, SEC may not have the complete information it needs to fully determine the authorities it may need to implement a particular model. As SEC continues to study the various alternative means for compensating NRSROs, as well as determine whether a system for the assignment of initial credit ratings for structured finance products is necessary or appropriate in the public interest or for the protection of investors, SEC should consult with the authors to better ensure it has all available information on the models to make its decision, and include in its report to Congress any recommendations for statutory changes SEC determines should be made to implement the findings of the study.
We provided a draft of the report to the Chairman of the Securities and Exchange Commission (SEC) for her review and comment. SEC's written comments are reprinted in appendix II. We also received technical comments from SEC that were incorporated, where appropriate. In its written comments, SEC agreed with our recommendation. In describing its statutory responsibilities and the steps it has taken to implement them, SEC noted that as it continues working on its 939F study, the Commission staff will seek to consult further with the parties that have proposed alternative compensation models for NRSROs to better ensure that the Commission has all available information on such models.

We are sending copies of this report to SEC, appropriate congressional committees and members, and other interested parties. The report also is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-8678 or clowersa@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

To assist Congress and others in assessing these proposed alternative compensation models, we developed an evaluative framework for our 2010 report with seven factors that any compensation model should address to be fully effective. The framework can help identify a model's relative strengths and weaknesses, potential trade-offs (in terms of policy goals), or areas in which further elaboration or clarification would be warranted using the following factors:

Independence. The ability of the compensation model to mitigate the conflicts of interest inherent between the entity paying for the rating and the nationally recognized statistical rating organization (NRSRO).

Accountability.
The ability of the compensation model to promote NRSROs' responsibility for the accuracy and timeliness of their ratings.

Competition. The extent to which the compensation model creates an environment in which NRSROs compete for customers by producing higher-quality ratings at competitive prices.

Transparency. The accessibility, usability, and clarity of the compensation model and the dissemination of information on the model to market participants.

Feasibility. The ease and simplicity with which the compensation model can be implemented in the securities market.

Market acceptance and choice. The willingness of the securities market to accept the compensation model, the ratings produced under that model, and any new market players established by the compensation model.

Oversight. The evaluation of the model to help ensure it works as intended.

See GAO-10-782 for more detailed descriptions of the seven factors.

In addition to the individual named above, Karen Tremba (Assistant Director), Rachel DeMarcus, Patrick Dynes, Matthew Keeler, Patricia Moye, Barbara Roesmann, and Jessica Sandler made key contributions to this report.
Over the past decade, concerns repeatedly have been raised about the accuracy of credit ratings provided by a number of nationally recognized statistical rating organizations (NRSRO). NRSRO critics often point to the conflict of interest created by the industry's predominant compensation model, in which issuers of securities pay the rating agencies for their ratings (issuer-pays model). In 2006, Congress established Securities and Exchange Commission (SEC) oversight over NRSROs, and recently enhanced this authority through the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act). This act also requires GAO to study alternative means for compensating NRSROs. This report discusses (1) alternative models for compensating NRSROs and (2) SEC's actions to implement the act's requirements specific to its oversight of NRSROs. To do this work, GAO leveraged its 2010 report on NRSROs (GAO-10-782); reviewed comment letters submitted to SEC as part of its study of alternative compensation models, as well as proposed and final rules issued under the act; and interviewed SEC staff and authors of alternative compensation models.

As of January 2012, GAO identified seven alternative models for compensating NRSROs (see table below). These models generally were designed to address the conflict of interest in the issuer-pays model, better align the NRSROs' interest with users of ratings, or improve incentives NRSROs have to produce reliable and high-quality ratings. However, the amount of detail currently available for each model varies and none has been implemented. According to some of the authors of the models, there is little incentive to continue developing these models because it appears unlikely they will receive attention from regulators or legislators. For example, these authors noted that SEC had not reached out to them to further discuss these models as part of its ongoing study of alternative compensation models for credit rating agencies.
During debate on the Dodd-Frank Act, a model similar to the random-selection model was proposed through an amendment that would have added a section 15E(w) to the Securities Exchange Act of 1934 (15E(w) model). Although the amendment was not included in the final legislation, section 939F of the Dodd-Frank Act requires SEC to study, among other things, alternative means for compensating NRSROs. It also authorizes SEC, upon completion of the study, to establish by rule a system for assigning NRSROs to determine initial credit ratings and monitor the ratings of structured finance products in a manner that prevents the arranger from selecting the NRSRO that will determine the credit rating, should SEC conclude that an alternative system is necessary or appropriate. In issuing any rule, SEC also must give thorough consideration to the section 15E(w) model and implement the model unless it determines that an alternative would better serve the public interest and protect investors. As part of its solicitation of comments for its ongoing study of alternative compensation models, SEC requested that interested parties use the framework GAO developed in the 2010 report on NRSROs to evaluate the section 15E(w) and other models. GAO created this evaluative framework to help identify the relative strengths and weaknesses and potential trade-offs (in terms of policy goals) of the models. Based on GAO's analysis of comment letters to SEC, while a number of comment letters generally favored implementing the section 15E(w) model, slightly more opposed the implementation of any of the models. Those supporting the 15E(w) model highlighted the need to address the conflicts of interest inherent in the issuer-pays model. Those opposed to the alternative compensation models cited concerns about replacing one set of conflicts of interest with another and the costs of implementation. A number of the letters either supported or made suggestions for improving existing SEC rules. 
A few comment letters also raised legal questions about the implementation or rulemaking for specific aspects of certain models. In addition to studying alternative compensation models, SEC has begun to implement a number of Dodd-Frank Act requirements pertaining to NRSROs. These requirements include additional rulemakings related to NRSROs' disclosures of performance statistics, credit ratings methodologies, third-party due diligence for asset-backed securities, and analyst training and testing standards. Of nine rulemaking requirements, SEC has adopted three final rules that implement all or part of certain requirements and proposed rules for the remaining requirements. SEC also has been working to establish an Office of Credit Ratings as required by the act. Moreover, SEC examination staff completed the first cycle of annual examinations of each NRSRO as required by the Dodd-Frank Act and published their summary report in September 2011. As part of its study on alternative compensation models for NRSROs, SEC solicited comment on its authority to implement various alternative compensation models. According to SEC staff, they are reviewing the comment letters received and evaluating authority issues. Any recommendations for regulatory or statutory changes SEC determines should be made to implement the findings of the study are to be included in its report to Congress, due in July 2012. The model authors' opinions of the extent to which statutory changes would be needed to implement their alternative compensation models vary, with one stating that current law provides SEC with the necessary authority and another anticipating the need for legislation. Given that NRSROs continue to primarily use the issuer-pays and, to a lesser extent, the subscriber-pays models, the use of any alternative model or models would likely have to be at the direction of SEC or Congress. 
However, the extent to which SEC's existing authorities would allow it to implement any of the alternative models by rule largely will depend on the alternative model or models selected. Obtaining the most complete information available on the models, such as by consulting with the models' authors, will be important for SEC to fully assess each model in order to make its decision and any recommendations for statutory changes SEC determines should be made to implement the findings of its section 939F study. SEC should consult with the authors of the proposed models to obtain all available information as it considers the various alternative compensation models and any recommendations for statutory changes SEC determines should be made to implement the findings of its section 939F study. SEC agreed with the recommendation.
From April 24 through September 11, 2000, the U.S. Census Bureau surveyed a sample of about 314,000 housing units (about 1.4 million census and A.C.E. records in various areas of the country, including Puerto Rico) to estimate the number of people and housing units missed or counted more than once in the census and to evaluate the final census counts. Temporary bureau staff conducted the surveys by telephone and in-person visits. The A.C.E. sample consisted of about 12,000 “clusters” or geographic areas that each contained about 20 to 30 housing units. The bureau selected sample clusters to be representative of the nation as a whole, relying on variables such as state, race and ethnicity, owner or renter, as well as the size of each cluster and whether the cluster was on an American Indian reservation. The bureau canvassed the A.C.E. sample area, developed an address list, and collected response data for persons living in the sample area on Census Day (April 1, 2000). Although the bureau’s A.C.E. data and address list were collected and maintained separately from the bureau’s census work, A.C.E. processes were similar to those of the census. After the census and A.C.E. data collection operations were completed, the bureau attempted to match each person counted by A.C.E. to the list of persons counted by the census in the sample areas to determine the number of persons who lived in the sample area on Census Day. The results of the matching process, together with the characteristics of each person compared, provided the basis for statistical estimates of the number and characteristics of the population missed or improperly counted by the census. Correctly matching A.C.E. persons with census persons is important because errors in even a small percentage of records can significantly affect the undercount or overcount estimate. Matching over 1.4 million census and A.C.E. records was a complex and often labor-intensive process. 
Although several key matching tasks were automated and used prespecified decision rules, other tasks were carried out by trained bureau staff who used their judgment to match and code records. The four phases of the person matching process were (1) computer matching, (2) clerical matching, (3) nationwide field follow-up on records requiring more information, and (4) a second phase of clerical matching after field follow-up. Each subsequent phase used additional information and matching rules in an attempt to match records that the previous phase could not link. Computer matching took pairs of census and A.C.E. records and compared various personal characteristics such as name, age, and gender. The computer then calculated a match score for the paired records based on the extent to which the personal characteristics were aligned. Experienced bureau staff reviewed the lists of paired records, sorted by their match scores, and judgmentally assigned cutoff scores. The cutoff scores were break points used to categorize the paired records into one of three groups so that the records could be coded as a “match,” “possible match,” or one of a number of codes that define them as not matched. Computer matching successfully assigned a match score to nearly 1 million of the more than 1.4 million records reviewed (about 66 percent). Bureau staff documented the cutoff scores for each of the match groups. However, they did not document the criteria or rules used to determine cutoff scores, the logic of how they applied them, or examples of their application. As a result, the bureau may not benefit from the possible lessons learned on how to apply cutoff scores. When the computer links few records as possible matches, clerks will spend more time searching records and linking them. In contrast, when the computer links many records as possible matches, clerks will spend less time searching for records to link and more time unlinking them. 
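The scoring-and-cutoff mechanism described above can be sketched as follows. This is an illustrative sketch only: the characteristics compared, the weights, and the cutoff values are assumptions for the example, not the bureau's actual parameters.

```python
# Hypothetical sketch of the computer-matching step: score each paired
# census/A.C.E. record on agreement of personal characteristics, then bin
# the score using judgmentally assigned cutoffs. Weights and cutoffs here
# are illustrative, not the bureau's actual values.

def match_score(census_rec, ace_rec):
    """Return a 0-100 agreement score for a paired record."""
    score = 0
    if census_rec["last"] == ace_rec["last"]:
        score += 40
    if census_rec["first"] == ace_rec["first"]:
        score += 40
    if abs(census_rec["age"] - ace_rec["age"]) <= 2:  # tolerate small age error
        score += 10
    if census_rec["sex"] == ace_rec["sex"]:
        score += 10
    return score

# Cutoff scores act as break points partitioning scored pairs into the
# three groups described above.
UPPER_CUTOFF, LOWER_CUTOFF = 85, 50

def classify(score):
    if score >= UPPER_CUTOFF:
        return "match"
    if score >= LOWER_CUTOFF:
        return "possible match"
    return "not matched"
```

Lowering the upper cutoff sends more pairs straight to "match" and fewer to clerks as possible matches, which is the trade-off the passage describes: fewer possible matches mean clerks spend more time searching and linking, while more possible matches mean more time unlinking.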
Without documentation and knowledge of the effect of cutoff scores on clerical matching productivity, future bureau staff will be less able to determine whether to set cutoff scores to link few or many records together as possible matches. During clerical matching, three levels of matchers—including over 200 clerks, about 40 technicians, and 10 experienced analysts or “expert matchers”—applied their expertise and judgment to manually match and code records. A computer software system managed the workflow of the clerical matching stages. The system also provided access to additional information, such as electronic images of census questionnaires that could assist matchers in applying criteria to match records. According to a bureau official, a benefit of clerical matching was that records of entire households could be reviewed together, rather than just individually as in computer matching. During this phase, over a quarter million records (or about 19 percent) were assigned a final match code. The bureau taught clerks how to code records in situations in which the A.C.E. and census records differed because one record contained a nickname and the other contained the birth name. The bureau also taught clerks how to code records with abbreviations, spelling differences, middle names used as first names, and first and last names reversed. These criteria were well documented in both the bureau’s procedures and operations memorandums and clerical matchers’ training materials, but how the criteria were applied depended on the judgment of the matchers. The bureau trained clerks and technicians for this complex work using as examples some of the most challenging records from the 1998 Dress Rehearsal person matching operation. In addition, the analysts had extensive matching experience. 
For example, the 4 analysts that we interviewed had an average of 10 years of matching experience on other decennial census surveys and were directly involved in developing the training materials for the technicians and clerks. The bureau conducted a nationwide field follow-up on over 213,000 records (or about 15 percent) for which the bureau needed additional information before it could accurately assign a match code. For example, sometimes matchers needed additional information to verify that possibly matched records were actually records of the same person, that a housing unit was located in the sample area on Census Day, or that a person lived in the sample area on Census Day. Field follow-up questionnaires were printed at the National Processing Center and sent to the appropriate A.C.E. regional office. Field follow-up interviewers from the bureau’s regional offices were required to visit specified housing units and obtain information from a knowledgeable respondent. If the household member for the record in question still lived at the A.C.E. address at the time of the interview and was not available to be interviewed after six attempts, field follow-up interviewers were allowed to obtain information from one or more knowledgeable proxy respondents, such as a landlord or neighbor. The second phase of clerical matching used the information obtained during field follow-up in an attempt to assign a final match code to records. As in the first phase of clerical matching, the criteria used to match and code records were well documented in both the bureau’s procedures and operations memorandums and clerical matchers’ training materials. Nevertheless, in applying those criteria, clerical matchers had to use their own judgment and expertise. This was particularly true when matching records that contained incomplete and inconsistent information, as noted in the following examples. 
Different household members provided conflicting information. The census counted one person—the field follow-up respondent. A.C.E. recorded four persons—including the respondent and her daughter. The respondent, during field follow-up, reported that all four persons recorded by A.C.E. lived at the housing unit on Census Day. During the field follow-up interview, the respondent’s daughter came to the house and disagreed with the respondent. The interviewer changed the answers on the field follow-up questionnaire to reflect what the daughter said—the respondent was the only person living at the household address on Census Day. The other three people were coded as not living at the household address on Census Day. According to bureau staff, the daughter’s response seemed more reliable. An interviewer’s notes on the field follow-up questionnaire conflicted with recorded information. The census counted 13 people—including the respondent and 2 people not matched to A.C.E. records. A.C.E. recorded 12 people—including the respondent, 10 other matched people, and the respondent’s daughter who was not matched to census records. The field follow-up interview attempted to resolve the unmatched census and A.C.E. people. Answers to questions on the field follow-up questionnaire verified that the daughter lived at the housing address on Census Day. However, the interviewer’s notes indicated that the daughter and the respondent were living in a shelter on Census Day. The daughter was coded as not living at the household address on Census Day, while the respondent remained coded as matched and living at the household address on Census Day. According to bureau staff, the respondent should also have been coded as a person that did not live at the household address on Census Day, based on the notes on the field follow-up questionnaire. A.C.E., census, or both counted people at the wrong address. 
The census counted two people—the respondent and her husband—twice; once in an apartment and once in a business office that the husband worked in, both in the same apartment building. The A.C.E. did not record anyone at either location, as the residential apartment was not in the A.C.E. interview sample. The respondent, during field follow-up, reported that they lived at their apartment on Census Day and not at the business office. The couple had responded to the census on a questionnaire delivered to the business office. A census enumerator, following up on the “nonresponse” from the couple’s apartment, had obtained census information from a neighbor about the couple. The couple, as recorded by the census at the business office address, was coded as correctly counted in the census. The couple, as recorded by the census at the apartment address, was coded as living outside the sample block. According to bureau staff, the couple recorded at the business office address were correctly coded, but the couple recorded at the apartment should have been coded as duplicates. An uncooperative household respondent provided partial or no information. The census counted a family of four—the respondent, his wife, and two daughters. A.C.E. recorded a family of three—the same husband and wife, but a different daughter’s name, “Buffy.” The field follow-up interview covered the unmatched daughters—two from census and one from A.C.E. The respondent confirmed that the four people counted by the census were his family and that “Buffy” was a nickname for one of his two daughters, but he would not identify which one. The interviewer wrote in the notes that the respondent “was upset with the number of visits” to his house. “Buffy” was coded as a match to one of the daughters; the other daughter was coded as counted in the census but missed by A.C.E. 
According to bureau staff, since the respondent confirmed that “Buffy” was a match for one of his daughters—although not which one—and that four people lived at the household address on Census Day, they did not want one of the daughters coded so that she was possibly counted as a missed census person. Since each record had to have a code identifying whether it was a match by the end of the second clerical matching phase, records that did not contain enough information after field follow-up to be assigned any other code were coded as “unresolved.” The bureau later imputed the match code results for these records using statistical methods. While imputation for some situations may be unavoidable, it introduces uncertainty into estimates of census over- or undercount rates. The following are examples of situations that resulted in records coded as “unresolved.” Conflicting information was provided for the same household. The census counted four people—a woman, an “unmarried partner,” and two children. A.C.E. recorded three people—the same woman and two children. During field follow-up, the woman reported to the field follow-up interviewer that the “unmarried partner” did not really live at the household address, but just came around to baby-sit, and that she did not know where he lived on Census Day. According to bureau staff, probing questions during field follow-up determined that the “unmarried partner” should not have been coded as living at the housing unit on Census Day. Therefore, the “unmarried partner” was coded as “unresolved.” A proxy respondent provided conflicting or inaccurate information. The census counted one person—a female renter. A.C.E. did not record anyone. The apartment building manager, who was interviewed during field follow-up, reported that the woman had moved out of the household address sometime in February 2000, but the manager did not know the woman’s Census Day address. 
The same manager had responded to an enumerator questionnaire for the census in June 2000 and had reported that the woman did live at the household address on Census Day. The woman was coded as “unresolved.” The bureau employed a series of quality assurance procedures for each phase of person matching. The bureau reported that person matching quality assurance was successful at minimizing errors because the quality assurance procedures found error rates of less than 1 percent. Clerks were to review all of the match results to ensure, among other things, that the records linked by the computer were not duplicates and contained valid and complete names. Moreover, according to bureau officials, the software used to link records had proven itself during a similar operation conducted for the 1990 Census. The bureau did not report separately on the quality of computer-matched records. Although there were no formal quality assurance results from computer matching, at our request the bureau tabulated the number of records that the computer had coded as “matched” that had subsequently been coded otherwise. According to the bureau, the subsequent matching process resulted in a different match code for about 0.6 percent of the almost 500,000 records initially coded as matched by the computer. Of those records having their codes changed by later matching phases, over half were eventually coded as duplicates and almost all of the remainder were rematched to someone else. Technicians reviewed the work of clerks and analysts reviewed the work of technicians primarily to find clerical errors that (1) would have prevented records from being sent to field follow-up, (2) could cause a record to be incorrectly coded as either properly or erroneously counted by the census, or (3) would cause a record to be incorrectly removed from the A.C.E. sample. Analysts’ work was not reviewed. 
Clerks and technicians with error rates of less than 4 percent had a random sample of about 25 percent of their work reviewed, while clerks and technicians exceeding the error threshold had 100 percent of their work reviewed. About 98 percent of clerks in the first phase of matching had only a sample of their work reviewed. According to bureau data, less than 1 percent of match decisions were revised during quality assurance reviews, leading the bureau to conclude that clerical matching quality assurance was successful. Under certain circumstances, technicians and analysts performed additional reviews of clerks’ and technicians’ work. For example, if during the first phase of clerical matching a technician had reviewed and changed more than half of a clerk’s match codes in a given geographic cluster, the cluster was flagged for an analyst to review all of the clerk and technician coding for that area. During the second phase, analysts were required to make similar reviews when only one of the records was flagged for their review. This is one reason why, as illustrated in figure 2, these additional reviews made up a much more substantial share of the clerks’ and technicians’ workload that more senior matchers subsequently reviewed. The total percentage of workload reviewed ranged from about 20 to 60 percent across phases of clerical matching, far in excess of the 11-percent quality assurance level for the bureau’s person interviewing operation. 
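The adaptive review rule described above (roughly a 25 percent sample for matchers below the 4 percent error threshold, full review otherwise) can be sketched as follows; the function and parameter names are assumptions for illustration, not the bureau's implementation.

```python
import random

# Sketch of the adaptive quality assurance rule: matchers with error rates
# below 4 percent had about a 25 percent random sample of their work
# reviewed; those at or above the threshold had 100 percent reviewed.

ERROR_THRESHOLD = 0.04
SAMPLE_RATE = 0.25

def select_for_review(work_items, error_rate, seed=0):
    """Return the subset of a matcher's work chosen for QA review."""
    if error_rate >= ERROR_THRESHOLD:
        return list(work_items)  # full review for high-error matchers
    rng = random.Random(seed)
    return [w for w in work_items if rng.random() < SAMPLE_RATE]
```

For a matcher with a 1 percent error rate and 1,000 work items, the rule selects roughly 250 items for review; a matcher at 5 percent has all 1,000 reviewed.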
To detect falsification, the bureau was to review and edit each questionnaire at least twice and recontact a random sample of 5 percent of the respondents. As shown in figure 3, all 12 of the A.C.E. regional offices exceeded the 5 percent requirement by selecting more than 7 percent of their workload for quality assurance review, and the national rate of quality assurance review was about 10 percent. At the local level, however, there was greater variation. There are many reasons why the quality assurance coverage can appear to vary locally. For example, a local census area could have a low quality assurance coverage rate because interviewers in that area had their work reviewed in other areas, or the area could have had an extremely small field follow-up workload, making the difference of just one quality assurance questionnaire constitute a large percentage of the local workload. Seventeen local census office areas (out of 520 nationally, including Puerto Rico) had 20 percent or more of field follow-up interviews covered by the quality assurance program, and, at the other extreme, 5 local census areas had 5 percent or less of the work covered by the quality assurance program. Less than 1 percent of the randomly selected questionnaires failed quality assurance nationally, leading the bureau to report this quality assurance operation as successful. When recontacting respondents to detect falsification by interviewers, quality assurance supervisors were to determine whether the household had been contacted by an interviewer, and if it had not, the record of that household failed quality assurance. According to bureau data, about 0.8 percent of the randomly selected quality assurance questionnaires failed quality assurance nationally. This percentage varied between 0 and about 3 percent across regions. The bureau carried out person matching as planned, with only a few procedural deviations. 
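The falsification check and the coverage-rate variation described above can be sketched as follows. This is an illustrative sketch: the function names and the questionnaire representation are assumptions, and only the 5 percent minimum recontact rate comes from the text.

```python
import random

# Sketch of the falsification check: select a random sample of at least
# 5 percent of completed questionnaires for respondent recontact, and
# compute each local office's resulting QA coverage rate.

RECONTACT_RATE = 0.05

def recontact_sample(questionnaires, rate=RECONTACT_RATE, seed=0):
    """Pick the questionnaires whose respondents will be recontacted."""
    rng = random.Random(seed)
    k = max(1, round(rate * len(questionnaires)))  # at least one per office
    return rng.sample(questionnaires, k)

def coverage_rate(num_reviewed, workload):
    """Share of an office's field follow-up workload covered by QA review."""
    return num_reviewed / workload if workload else 0.0
```

The `max(1, ...)` floor illustrates why small offices can show extreme coverage rates: in an office with a handful of follow-up cases, a single sampled questionnaire is a large fraction of the local workload.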
Although the bureau took action to address these deviations, it has not determined how matching results were affected. As shown in table 1, these deviations included (1) census files that were delivered late, (2) a programming error in the clerical matching software, (3) printing errors in field follow-up forms, (4) regional offices that sent back incomplete questionnaires, and (5) the need for additional time to complete the second phase of clerical matching. It is unknown what, if any, cumulative effect these procedural deviations may have had on the quality of matching for these records or on the resultant A.C.E. estimates of census undercounts. However, bureau officials believe that the effect of the deviations was small based on the timely responses taken to address them. The bureau conducted reinterviewing and re-matching studies on samples of the 2000 A.C.E. sample and concluded that matching quality in 2000 was improved over that in 1990, but that error introduced during matching operations remained and contributed to an overstatement of A.C.E. estimates of the census undercounts. The studies provided some categorical descriptions of the types of matching errors measured, but did not identify the procedural causes, if any, for those errors. Furthermore, despite the improvement in matching reported by the bureau, A.C.E. results were not used to adjust the census due to these errors as well as other remaining uncertainties. The bureau has reported that additional review and analysis on these remaining uncertainties would be necessary before any potential uses of these data can be considered. The computer matching phase started 3 days later than scheduled and finished 1 day late due to the delayed delivery of census files. In response, bureau employees who conducted computer matching worked overtime hours to make up lost time. Furthermore, A.C.E. regional offices did not receive clusters in the prioritized order that they had requested. 
The reason for prioritizing the clusters was to provide as much time as possible for field follow-up on clusters in the most difficult areas. Examples of areas that were expected to need extra time were those with staffing difficulties, larger workloads, or expected weather problems. Based on the bureau’s Master Activities Schedule, the delay did not affect the schedule of subsequent matching phases. Also, bureau officials stated that although clusters were not received in prioritized order, field follow-up was not greatly affected because the first clerical matching phase was well staffed and sent the work to regional offices quickly. On the first full day of clerical matching, the bureau identified a programming error in the quality assurance management system, which made some clerks and technicians who had not passed quality assurance reviews appear to have passed. In response, bureau officials manually overrode the system. Bureau officials said the programming error was fixed within a couple of days, but could not explain how the programming error occurred. They stated that the software system used for clerical matching was thoroughly tested, although it was not used in any prior censuses or census tests, including the Dress Rehearsal. As we have previously noted, programming errors that occur during the operation of a system raise questions about the development and acquisition processes used for that system. A programming error caused last names to be printed improperly on field follow-up forms for some households containing multiple last names. In situations in which regional office staff may not have caught the printing error and interviewers may have been unaware of the error—such as when those questionnaires were completed before the problem was discovered— interviews may have been conducted using the wrong last name, thus recording misleading information. 
According to bureau officials, in response, the bureau (1) stopped printing questionnaires on the date officials were notified about the misprinted questionnaires, (2) provided information to regional offices that listed all field follow-up housing units with multiple names that had been printed prior to the date the problem was resolved, and (3) developed procedures for clerical matchers to address any affected questionnaires being returned that had not been corrected by regional office staff. While the problem was being resolved, productivity in the A.C.E. regional offices slowed for approximately 1 to 4 days, yet field follow-up was completed on time. Bureau officials inadvertently introduced this error when they addressed a separate programming problem in the software. Bureau officials stated that they tested this software system; however, the system was not given a trial run during the Census Dress Rehearsal in 1998. According to bureau officials, the problem did not affect data quality because it was caught early in the operation and follow-up forms were edited by regional staff. However, the bureau could not determine the exact day of printing for each questionnaire and thus did not know exactly which households had been affected by the problem. According to bureau data, the problem could have potentially affected over 56,000 persons, or about 5 percent of the A.C.E. sample. In addition to the problem printing last names, the bureau experienced other printing problems. According to bureau staff, field follow-up received printed questionnaires that were (1) missing pages, (2) missing reference notes written by clerical matchers, and (3) missing names and/or having some names printed more than once for some households of about nine or more people. According to bureau officials, these problems were not resolved during the operation because they were reported after field follow-up had started and the bureau was constrained by deadlines. 
Bureau officials stated that they believed that these problems would not significantly affect the quality of data collected or match code results, although bureau officials were unable to provide data that would document the extent, effect, or cause of these problems. The bureau’s regional offices submitted questionnaires containing an incomplete “geocoding” section. This section was to be used in instances when the bureau needed to verify whether a housing unit (1) existed on Census Day and (2) was correctly located in the A.C.E. sample area. Although the bureau returned 48 questionnaires during the first 6 days of the operation to the regional offices for completion, bureau officials stated that after that they no longer returned questionnaires to the regional offices because they did not want to delay the completion of field follow-up. A total of over 10,000 questionnaires with “geocoding” sections were initially sent to the regional offices. The bureau did not have data on the number, if any, of questionnaires that the regional offices submitted incomplete beyond the initial 48. The bureau would have coded as “unresolved” the persons covered by any incomplete questionnaires. As previously stated, the bureau later imputed the match code results for these records using statistical methods, which could introduce uncertainty into estimates of census over- or undercount rates. According to bureau officials, this problem was caused by (1) not printing a checklist of all sections that needed to be completed by interviewers, (2) no link from any other section of the questionnaire to refer interviewers to the “geocoding” section, and (3) field supervisors following the same instructions as interviewers to complete their reviews of field follow-up forms. However, bureau officials believed that the mistake should have been caught by regional office reviews before the questionnaires were sent back for processing. 
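A simple automated completeness check, of the kind the missing checklist suggests, could have flagged blank geocoding sections before questionnaires left the regional offices. The sketch below is hypothetical: the section names and the questionnaire structure are assumptions, not the bureau's actual forms.

```python
# Hypothetical completeness check: flag required questionnaire sections
# that are missing or entirely blank. A questionnaire is modeled here as a
# dict mapping section names to answer dicts (an assumption for the sketch).

REQUIRED_SECTIONS = ("respondent", "household_roster", "geocoding")

def incomplete_sections(questionnaire):
    """Return the required sections that are missing or left blank."""
    missing = []
    for section in REQUIRED_SECTIONS:
        answers = questionnaire.get(section)
        if not answers or all(v in (None, "") for v in answers.values()):
            missing.append(section)
    return missing
```

Run against each returned questionnaire, such a check would have given regional reviewers a concrete list of sections to complete, rather than relying on reviewers following the same instructions as interviewers.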
About a week after the second clerical matching phase began, officials requested an extension, which was granted for 5 days, to complete the second clerical matching phase. According to bureau officials, the operation could have been completed by the November 30, 2000, deadline as planned, but they decided to take extra steps to improve data quality that required additional time. According to bureau officials, the delay in completing person matching had no effect on the final completion schedule, only the start of subsequent A.C.E. processing operations. Matching A.C.E. and census records was an inherently complex and labor- intensive process that often relied on the judgment of trained staff, and the bureau prepared itself accordingly. For example, the bureau provided extensive training for its clerical matchers, generally provided thorough documentation of the process and criteria to be used in carrying out their work, and developed quality assurance procedures to cover its critical matching operations. As a result, our review identified few significant operational or procedural deviations from what the bureau planned, and the bureau took timely action to address them. Nevertheless, our work identified opportunities for improvement. These opportunities include a lack of written documentation showing how cutoff scores were determined and programming errors in the clerical matching software and software used to print field follow-up forms. Without written documentation, the bureau will be less likely to capture lessons learned on how cutoff scores should be applied, in order to determine the impact on clerical matching productivity. Moreover, the discovery of programming errors so late in the operation raises questions about the development and acquisition processes used for the affected A.C.E. computer systems. 
In addition, one lapse in procedures may have resulted in incomplete geocoding sections, which were used to verify that the person being matched was in the geographic sample area. The collective effect that these deviations may have had on the accuracy of A.C.E. results is unknown. Although the bureau has concluded that A.C.E. matching quality improved compared to 1990, the bureau has reported that error introduced during matching operations remained and contributed to an overstatement of the A.C.E. estimate of census undercounts. To the extent that the bureau employs an operation similar to A.C.E. to measure the quality of the 2010 Census, it will be important for the bureau to determine the impact of the deviations and explore operational improvements, in addition to the research it might carry out on other uncertainties in the A.C.E. results. As the bureau documents its lessons learned from the 2000 Census and continues its planning efforts for 2010, we recommend that the secretary of commerce direct the bureau to take the following actions: 1. Document the criteria and the logic that bureau staff used during computer matching to determine the cutoff scores for matched, possibly matched, and unmatched record pairs. 2. Examine the bureau’s system development and acquisition processes to determine why the problems with A.C.E. computer systems were not discovered prior to deployment of these systems. 3. Determine the effect that the printing problems may have had on the quality of data collected for affected records, and thus the accuracy of A.C.E. estimates of the population. 4. Determine the effect that the incomplete geocoding section of the questionnaires may have had on the quality of data collected for affected records, and thus the accuracy of A.C.E. estimates of census undercounts. The secretary of commerce forwarded written comments from the U.S. Census Bureau on a draft of this report. (See appendix II.)
The bureau had no comments on the text of the report and agreed with, and is taking action on, two of our four recommendations. In responding to our recommendation to document the criteria and the logic that bureau staff used during computer matching to determine cutoff scores, the bureau acknowledged that such documentation may be informative and stated that it is under preparation. We look forward to reviewing the documentation when it is complete. In responding to our recommendation to examine system development and acquisition processes to determine why problems with the A.C.E. computer systems were not discovered prior to deployment, the bureau responded that despite extensive testing of A.C.E. computer systems, a few problems may remain undetected. The bureau plans to review the process to avoid such problems in 2010, and we look forward to reviewing the results of its review. Finally, in response to our two recommendations to determine the effects that printing problems and incomplete questionnaires had on the quality of data collected and the accuracy of A.C.E. estimates, the bureau responded that it did not track the occurrence of these problems because the effects on the coding process and accuracy were considered to be minimal, since all problems were identified early and corrective procedures were effectively implemented. In our draft report we recognized that the bureau took timely corrective action in response to these and other problems that arose during person matching. Yet we also reported that bureau studies of the 2000 matching process had concluded that matching error contributed to error in A.C.E. estimates without identifying procedural causes, if any. Again, to the extent that the bureau employs an operation similar to A.C.E. to measure the quality of the 2010 Census, it will be important for the bureau to determine the impact of the problems and explore operational improvements as we recommend.
We are sending copies of this report to other interested congressional committees. Please contact me on (202) 512-6806 if you have any questions. Key contributors to this report are included in appendix III. To address our three objectives, we examined relevant bureau program specifications, training manuals, office manuals, memorandums, and other progress and research documents. We also interviewed bureau officials at bureau headquarters in Suitland, Md., and the bureau’s National Processing Center in Jeffersonville, Ind., which was responsible for the planning and implementation of the person matching operation. In addition, to review the process and criteria involved in making an A.C.E. and census person match, we observed the match clerk training at the National Processing Center and a field follow-up interviewer training session in Dallas, Tex. To identify the results of the quality assurance procedures used in key person matching phases, we analyzed operational data and reports provided to us by the bureau, as well as extracts from the bureau's management information system, which tracked the progress of quality assurance procedures. Other independent sources of the data were not available for us to use to test the data that we extracted, although we were able to corroborate data results with subsequent interviews of key staff. Finally, to examine how, if at all, the matching operation deviated from what was planned, we selected 11 locations in 7 of the 12 bureau census regions (Atlanta, Chicago, Dallas, Denver, Los Angeles, New York, and Seattle). At each location we interviewed A.C.E. workers from November through December 2000. The locations selected for field visits were chosen primarily for their geographic dispersion (i.e., urban or rural), variation in type of enumeration area (e.g., update/leave or list enumerate), and the progress of their field follow-up work. 
In addition, we reviewed the match code results and field follow-up questionnaires from 48 sample clusters. These clusters were chosen because they corresponded to the local census areas we visited and contained records reviewed during every phase of the person matching operation. The results of our field visits and our cluster review are not generalizable nationally to the person matching operation. We performed our audit work from September 2000 through September 2001 in accordance with generally accepted government auditing standards. In addition to those named above, Ty Mitchell, Lynn Wasielewski, Steven Boyles, Angela Pun, J. Christopher Mihm, and Richard Hung contributed to this report.
The U.S. Census Bureau conducted the Accuracy and Coverage Evaluation (A.C.E.) survey to estimate the number of people missed, counted more than once, or otherwise improperly counted in the 2000 Census. On the basis of uncertainty in the A.C.E. results, the Bureau's acting director decided that the 2000 Census tabulations should not be adjusted in order to redraw the boundaries of congressional districts or to distribute billions of dollars in federal funding. Although A.C.E. was generally implemented as planned, the Bureau found that it overstated census undercounts because of an error introduced during matching operations and other uncertainties. The Bureau concluded that additional review and analysis of these uncertainties would be needed before the data could be used. Matching more than 1.4 million census and A.C.E. records involved four phases, each with its own matching procedures and multiple layers of review: computer matching, an initial clerical matching phase, field follow-up, and a final clerical matching phase. The Bureau applied quality assurance procedures to each phase of person matching. Because the quality assurance procedures had failure rates of less than one percent, the Bureau reported that person matching quality assurance was successful at minimizing errors. Overall, the Bureau carried out person matching as planned, with few procedural deviations. GAO identified areas for improving future A.C.E. efforts, including more complete documentation of computer matching decisions and better assurance that problems do not arise with the Bureau's automated systems.
As of June 2017, BOP was responsible for approximately 188,000 inmates in federal custody. About 81 percent, or approximately 154,000 inmates, were housed in BOP-operated institutions. There are 122 BOP-operated institutions, with some of the institutions clustered in a Federal Correctional Complex (FCC). BOP designates 7 of its 122 institutions as medical referral centers, generally called a Federal Medical Center (FMC), to provide advanced care for inmates with more serious chronic or acute medical conditions. Each BOP institution (or complex) is located within one of six regions, with an office overseeing each region (see fig. 1). BOP also has a Central Office, located in Washington, D.C., that oversees the six regions. BOP is responsible for providing medically necessary medical, dental, and mental health services in a manner consistent with standards of care for the non-prison community. BOP’s Health Services Division oversees the provision of medical, dental, and psychiatric services. BOP’s Psychology Services Branch, under its Reentry Services Division, is responsible for providing psychology services, including psychology treatment programs and drug abuse treatment programs. BOP provides most medical and dental care inside its institutions (inside care), usually with BOP-employed medical staff. The level and kinds of services provided depend upon the care level of the institution, as described below. Each BOP institution operates a health services unit. Most units have examination rooms, treatment rooms, dental clinics, radiology and laboratory areas, a pharmacy, and administrative offices, as can be seen in figure 2 below. BOP staffs these health units with medical professionals including physicians, dentists, nurses, pharmacists, and mid-level practitioners.
Inside care services include: Health screening upon inmates’ admission to the prison, comprehensive documentation of inmates’ medical history, and physical exams to identify underlying infectious, chronic, and behavioral health needs. Sick call triage and episodic visits to assess, diagnose, and treat short-term health problems. Preventive health visits to screen for underlying chronic conditions and immunize against transmission of preventable infectious diseases. Chronic care clinics to manage long-term diseases, such as diabetes, asthma, and congestive heart failure. Rehabilitative care to regain or maintain optimal physical and mental health function. Oral health care to assess, diagnose, treat, and prevent dental cavities and oral diseases. When BOP is unable to provide a medical service to an inmate, BOP transports the inmate to a medical facility or provider in the community (outside care). Generally, each BOP institution has its own outside care contract that sets payment rates for services provided by the contracted community medical centers and providers. Further, apart from some national BOP contracts that standardize goods and services, each BOP institution acquires its own health care goods and services. Institution-acquired goods and services vary, and include contracted health care professionals; medical imaging services, such as ultrasound and magnetic resonance imaging; medical equipment; and medical waste disposal. Beginning in 2004, BOP instituted a medical and mental health care level system for its inmates and its institutions. BOP designates inmates as a care level 1, 2, 3, or 4, depending on the level of medical and mental health services required. Inmates designated as a care level 1 are generally considered healthy, and the intensity of care required increases along with care level (see table 1).
BOP also classifies its institutions as a care level 1, 2, 3, or 4, depending on the level of medical and mental health services provided (see app. II for a complete list of institutions with their associated medical care levels). As of 2017, BOP has seven care level 4 institutions—FMCs—that offer advanced care, such as dialysis, oncology treatment, limited surgery services, prosthetics, inpatient and forensic mental health, dementia care, and end-of-life care (see app. III for more information on the FMCs).

BOP Electronic Medical Records System

BOP uses an electronic medical records system—the Bureau Electronic Medical Record (BEMR) system—to keep track of an inmate’s medical, social, and psychological history. It includes information on an inmate’s clinical encounters (for both inside care and outside care) and medication, among other things. According to BOP officials, BEMR differs from typical electronic medical records systems used outside of prison systems, which generally tie a diagnostic code to a reimbursement rate; BOP’s system was instead designed as a record of clinical care, not a tool for managing reimbursements through private insurance companies, Medicare, or Medicaid.

Health Care Planning and Oversight

BOP plans and oversees its provision of health care through various mechanisms; the following are four major mechanisms that affect health care cost control: HSD Executive Staff: The HSD Assistant Director, Senior Deputy Assistant Director, and Medical Director oversee the programs, operations, and delivery of health care for BOP institutions. They direct the HSD Branch Chiefs and Chief Professional Officers charged with managing national health programs and services. According to BOP officials, HSD Executive Staff are the most integral planning mechanism and the final decision-makers on any HSD plan.
HSD Governing Board: BOP established this multi-divisional and multi-regional governance structure in 2005 to provide executive-level strategic planning and performance evaluation of health services management and operations. Generally, the Board is responsible for overseeing the planning, organization, delivery, and evaluation of health services provided within BOP institutions by BOP staff and contractors. The Board is also tasked with working with HSD to ensure that medically necessary health care is delivered in the most cost-efficient way. The BOP-Wide Annual Strategic Plan: BOP develops an annual strategic plan to help fulfill its mission and achieve strategic goals. BOP highlights cost-efficiency in several parts of its annual strategic plan, including its mission and vision statements: the mission statement states that BOP institutions will be cost-efficient, and the vision statement states that BOP aims to be the best-value provider of efficient correctional services and programs. BOP developed strategic objectives, including one focused on health care efficiencies, which BOP explains as “maximizing health care resources as a cost-containment strategy by applying evidence-based business practices and measuring performance through the use of appropriate industry-wide metrics.” The HSD Integrated Strategic Plan for 2015 through 2019: HSD also has an internal five-year strategic plan for the division that focuses on HSD initiatives intended to bring large-scale change to BOP’s health care system. HSD established four overarching focus areas in the plan: (1) administration and program management, (2) health services staffing and management, (3) financial management, and (4) risk management. The HSD integrated strategic plan includes a description, implementation strategies, and expected outcomes for all four focus areas for each of the 17 HSD branches and sections listed in the plan.
During the 8-year period from fiscal year 2009 through fiscal year 2016, BOP obligated more than $9 billion for the provision of inmate health care. According to BOP data, annual obligations increased from a total of almost $978 million in fiscal year 2009 to more than $1.3 billion in fiscal year 2016, an increase of about 37 percent overall during this period (see table 2). More specifically, annual obligations for medical services increased by about 37 percent, psychology services by about 39 percent, and drug abuse treatment programs by almost 44 percent. As shown in table 2, medical services obligations have been increasing over time. Psychology services and drug abuse treatment programs obligations have also been increasing over time. According to BOP officials, more recent increases in psychology services obligations can be attributed to an effort to fill vacancies in psychology services positions throughout BOP institutions. BOP also issued a new program statement in 2014 on the treatment and care of inmates with mental illness, which increased the treatment requirements and necessitated hiring more staff, according to BOP officials. BOP officials also explained that more recent increases in drug abuse treatment program obligations were due to BOP adding 18 additional Residential Drug Abuse Programs (RDAP), including some specialized RDAPs, beginning in fiscal year 2013. For example, BOP added four RDAPs in high security institutions, and three Spanish-language RDAPs. Obligations for Sex Offender Management Programs and medical staff training remained fairly constant during this time. To account for any possible increases in health care obligations as a result of changes in the inmate population, we estimated the annual per capita, or per inmate, obligations by dividing the total health care obligations by the number of inmates—and this figure also increased over time, as can be seen in table 2.
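The per capita estimate described above is a simple ratio, and the growth figures in this section are ordinary percent changes. The sketch below reproduces that arithmetic using the inflation-adjusted per capita obligations reported in table 2 for fiscal years 2009 and 2016; the helper function names are ours, not BOP's or GAO's.

```python
# Per capita obligations are total health care obligations divided by
# the inmate population; growth is an ordinary percent change. The
# dollar figures below are the inflation-adjusted (2016 dollars) per
# capita obligations reported for fiscal years 2009 and 2016.

def per_capita(total_obligations, inmate_population):
    """Per inmate obligations: total obligations divided by population."""
    return total_obligations / inmate_population

def percent_change(old, new):
    """Percent change from old to new."""
    return (new - old) / old * 100

fy2009_per_capita = 6334
fy2016_per_capita = 8602
growth = percent_change(fy2009_per_capita, fy2016_per_capita)
# growth is roughly 35.8, consistent with the reported increase of
# about 36 percent over the period
```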
After adjusting for inflation, per capita health care obligations increased from $6,334 per inmate in fiscal year 2009 to $8,602 per inmate in fiscal year 2016, an increase of about 36 percent during this time period. As table 2 shows, most of the growth in inflation-adjusted per capita obligations occurred in the last four years, fiscal years 2013 through 2016. Of the five categories we list for total health care obligations (medical services, psychology services, drug abuse treatment programs, Sex Offender Management Programs, and medical staff training), medical services comprised the largest amount, about 88 percent. BOP-reported medical services obligations include several categories of expenditures, which we separated into outside medical services and inside medical services, the latter of which we grouped into five major categories. We analyzed these categories and found that for fiscal year 2016, about 37 percent were for medical staff labor costs inside BOP institutions and 39 percent were for outside medical services, as illustrated in figure 3. Outside medical services costs include costs to treat inmates at private physicians’ offices or at hospitals, as well as transportation costs. They also include security-related costs, such as overtime for correctional officers to transport inmates to those locations and to guard hospitalized inmates, based on inmates’ custody levels. According to BOP, in fiscal years 2015 and 2016, security-related costs made up 19 percent of outside medical services costs. At the same time that BOP’s health care obligations have been increasing, total United States health care expenditures have also been increasing. According to the Department of Health and Human Services National Health Expenditure Accounts (NHEA) data, total health care expenditures in the United States increased by 5.8 percent in fiscal year 2015, reaching a per capita expenditure of $9,990.
However, as we show in table 3, while expenditures increased every year, growth was not consistent; the last three fiscal years, 2014 through 2016, increased at a higher rate. Table 3 also shows the per capita expenditures, which increased every year over the 8-year time period. National health expenditures are not directly comparable to BOP’s costs because the inmate population is predominantly adult and male, unlike the overall U.S. population. Nevertheless, both national health expenditures and BOP health care costs have risen at a higher rate in the last three years. We spoke with numerous BOP officials at the institutional, regional, and Central Office levels to discuss the factors that affected inmate health care costs. Officials frequently cited the following as major factors: inmates entering with relatively poorer health, aging inmates, rising pharmaceutical prices, and outside medical services. Inmates entering with relatively poorer health—BOP officials stated that inmates are a unique population that poses health care challenges. For example, officials stated that inmates come into the system with more acute needs resulting from limited access to health care or from risky behaviors, such as substance abuse. Inmates also tend to have higher rates of infectious diseases and chronic conditions that can persist throughout incarceration. According to the Council of State Governments Justice Center, rates of mental illness, substance use disorders, infectious disease, and chronic health conditions are as much as seven times higher for inmates than rates in the general population. Aging inmates—BOP officials stated that an aging inmate population affects health care costs. In a 2015 report, the DOJ OIG also found that aging inmates are more costly to incarcerate than younger inmates due to increased medical needs.
BOP data show that the average age of inmates has increased from fiscal year 2009 through 2016, as has the percentage of inmates aged 55 years or older. As seen in table 4, the percentage of inmates aged 55 years or older increased from 8.4 percent of the population in fiscal year 2009 to 12.0 percent in fiscal year 2016, which is an increase of about 44 percent during this time period. According to BOP officials, and the 2015 DOJ OIG report, increasing numbers of aging inmates are due to (1) inmates entering the system for the first time at older ages, and (2) inmates aging over time while incarcerated and serving long sentences. Rising Pharmaceutical Prices—According to BOP officials, and data we reviewed, expenditures for pharmaceuticals have risen from fiscal years 2009 through 2016. As table 5 shows, BOP’s total pharmaceutical expenditures have increased from $61.4 million in fiscal year 2009 to $111.7 million in fiscal year 2016, or an increase of about 82 percent. Accounting for changes in inmate population over this time period, we found that per capita expenditures have also increased, even in years when population fell. Adjusting for inflation, and using 2016 dollars, we found that overall per capita expenditures increased from $443 per inmate in fiscal year 2009 to $715 in fiscal year 2016, or an increase of about 61 percent. According to BOP officials, new advances in certain medications—in particular hepatitis C medication, HIV/AIDS medication, and biologics to treat cancers—have contributed to this increase. For example, the Food and Drug Administration (FDA) approved two new medications in a new class of hepatitis C drugs in May 2011, which significantly increased the cost of treating hepatitis C-infected inmates. As we show in table 6, BOP’s hepatitis C medication expenditures increased by almost 132 percent from fiscal year 2011 to 2012. 
FDA approved additional new medications at the end of 2013 and in 2014 that essentially cure hepatitis C, but that also drove up BOP’s treatment costs. As these latest medications became the standard of care for treating hepatitis C, BOP was required to use them. Table 6 also shows that BOP experienced an increase in hepatitis C medication expenditures of about 136 percent from fiscal year 2014 to 2015. Overall, BOP experienced an increase of about 427 percent for hepatitis C medication during the 8-year time period. According to BOP officials, the average cost to treat one inmate with the new medication ranges from $30,000 to $60,000. BOP treated 240 inmates in fiscal year 2015 and 327 inmates in fiscal year 2016 with the new medications. Additionally, according to BOP’s 2018 Congressional Budget Justification, BOP estimated that there were approximately 20,000 inmates infected with hepatitis C, most of whom had not been treated. Table 6 also shows increases in expenditures for medication to treat cancer and HIV/AIDS. Overall, expenditures for cancer medication increased by about 315 percent during the 8-year time period, and by about 87 percent for HIV/AIDS medication. To ensure appropriate management of the hepatitis C-infected inmate population, BOP developed treatment criteria, consistent with the American Association for the Study of Liver Diseases guidelines, to expeditiously identify and treat inmates with the highest medical need. In addition, the BOP Chief Pharmacist and other officials told us that they seek opportunities to acquire voluntary price reductions below the statutory Federal Supply Schedule pricing for hepatitis C medication from drug manufacturers. BOP data show that while pharmaceutical expenditures have increased overall, pharmaceutical expenditures for certain categories have varied, as can be seen in table 6 and figure 4.
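As a back-of-the-envelope check on the scale of these treatment costs, the reported per inmate cost range can be combined with the reported number of inmates treated. The sketch below does that arithmetic; the resulting range is our illustrative estimate, not a figure BOP reported.

```python
# Illustrative aggregate hepatitis C treatment cost, combining the
# reported per inmate cost range ($30,000-$60,000 for the new
# medications) with the 327 inmates treated in fiscal year 2016.
# This range is a rough estimate, not a BOP-reported figure.

inmates_treated = 327
cost_low, cost_high = 30_000, 60_000

total_low = inmates_treated * cost_low    # 9,810,000 dollars
total_high = inmates_treated * cost_high  # 19,620,000 dollars
```

On these assumptions, treating the fiscal year 2016 cohort alone would fall somewhere between roughly $10 million and $20 million, which helps explain why the approximately 20,000 untreated infected inmates represent a substantial potential cost.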
Expenditures have increased for hepatitis C, cancer, and HIV/AIDS medication, but decreased for psychotropic medications, which are generally used to treat mental health conditions. According to BOP officials, the availability of generic equivalents has helped decrease costs of psychotropic medication. Our prior work has also shown that generic drug prices have fallen overall since 2010, despite some extraordinary price increases in some generic medications. Outside Medical Services—BOP officials told us that costs for outside medical services have risen, which is another factor that is likely affecting BOP’s total obligations. According to BOP data, the total amount obligated increased from $322.6 million in fiscal year 2009 to $456.7 million in fiscal year 2016 (see table 7), an increase of about 45 percent over this period. As previously discussed, BOP incurs costs for outside care when BOP staff take inmates outside for specialty care that cannot be provided inside the institution, or for acute or emergency care, which could potentially become a catastrophic case. We asked BOP for catastrophic care cost data for its six regions for fiscal years 2014 and 2015. BOP was able to provide us the data for five of the six regions for fiscal year 2015, and provided incomplete data for 2014 for most regions. When we asked one region why the data were incomplete, we were told that there was no requirement to collect such data. Although these data are incomplete, they show that BOP estimated costs of at least $100 million in fiscal year 2015 for catastrophic care in five of its six regions, which represents about 22 percent of outside medical services obligations for 2015. In its Congressional Budget Justifications from fiscal years 2009 through 2016, BOP reported that it had developed a process for monitoring and tracking catastrophic care costs; however, BOP officials acknowledged that their efforts to date have not been successful.
During the course of our review, BOP designed a data collection instrument, which BOP officials stated they distributed to the regional offices, in order to more uniformly collect catastrophic care data moving forward. Outside medical services made up about 40 percent of medical services obligations annually from fiscal years 2009 through 2016. According to BOP officials, BOP has difficulty attracting sufficient medical staff to care for inmates in-house at institutions in remote locations, and therefore relies on outside care to a greater extent in these locations. We have also reported that BOP finds it particularly challenging to hire medical staff for institutions in rural locations because of the institutions’ location and low pay in these areas. In addition to the challenges posed by rural locations, some BOP institutions also have fewer community health care resources in proximity. As shown in figure 5, we found that of the 98 BOP institutions, 64 had five or fewer hospitals within a 20-mile radius, and 9 had no hospitals within a 20-mile radius. BOP officials also noted that the existence of a hospital in proximity to a BOP institution does not guarantee the hospital is willing to contract with BOP to serve an inmate population. A DOJ OIG report found that in one BOP complex, a decline in staffing from fiscal year 2010 to fiscal year 2014 corresponded with an increase in outside medical services costs of 47 percent during the same period. According to BOP officials, this is the primary reason BOP instituted its medical care level classification system, so that inmates’ health needs are matched to locations where community health care resources are sufficient. BOP lacks data on the health care services it provides to inmates, known as health care utilization data.
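The proximity analysis behind figure 5, counting hospitals within a 20-mile radius of each institution, can be sketched with a great-circle distance computation over facility coordinates. The sketch below is not the tool GAO used, and the coordinates are invented placeholders, not actual BOP or hospital locations.

```python
import math

def miles_between(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance in miles between two points."""
    r = 3959.0  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def hospitals_within_radius(institution, hospitals, radius_miles=20):
    """Count hospitals within radius_miles of an institution's coordinates."""
    return sum(
        1 for h in hospitals
        if miles_between(institution[0], institution[1], h[0], h[1]) <= radius_miles
    )

# Invented placeholder (latitude, longitude) pairs for illustration only
institution = (38.85, -77.05)
hospitals = [(38.87, -77.01), (38.60, -77.40), (39.30, -76.60)]
nearby = hospitals_within_radius(institution, hospitals)
```

With these placeholder coordinates, only the first hospital falls inside the 20-mile radius; an institution scoring low on such a count would, as the report notes, likely depend more heavily on distant (and costlier) outside care.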
BOP officials explained that existing data systems, such as the Financial Management Information System (FMIS), the accounting system of record, and BEMR, are not capable of collecting health care utilization data. Specifically, BOP cannot collect any health care utilization data from FMIS because it is a DOJ-wide system and was not designed to allow DOJ components, such as BOP, to customize it. For example, while FMIS tracks expenditures on categories such as salaries and supplies, it cannot be customized to analyze health care data. BOP officials also said they cannot collect financial data from BEMR because BEMR was not designed to collect such data. BOP officials acknowledged that health care utilization data are important for understanding and controlling health care costs, but recognized that BOP does not have data on its own use of resources. Officials also told us that BOP’s lack of health care utilization data has stalled its implementation of one health care cost control opportunity identified in 2012 that would leverage its volume purchasing power while contracting for medical services. Specifically, BOP is planning to pilot a regional comprehensive medical services contract. However, according to BOP officials, they need to provide information to the potential contractors, such as how utilization rates compare across institutions within the region and across the different regions, in order for potential contractors to build proposals. Because BOP is not positioned to provide utilization data, it cannot move forward with its plans. BOP officials also stated that the lack of data on utilization of services stifles their ability to evaluate various health care cost control efforts. Given these limitations, BOP has explored and identified some solutions since 2009 to collect data on its utilization of health care services but has not yet determined how to obtain such data.
Examples include the following: In fiscal year 2009, BOP began using a medical claims adjudication services contract through which BOP sought, in part, to gather data on the utilization of health services outside its institutions. While BOP officials stated they intended to eventually cover all BOP institutions under this contract, as of February 2017, only 23 BOP institutions had used the service. BOP officials explained that BOP did not add more institutions to the initial medical claims adjudication contract because the contract ended before it could do so. BOP officials could not explain why more institutions had not used the services in the 5 years preceding the end of the contract, but some regional officials we spoke with, and officials at some of the institutions we visited, said the medical claims adjudication services provided were not cost-effective. Senior BOP officials disagreed with the regional officials' statement and said that the services were necessary to verify that medical services invoices were correct. BOP created a solicitation for a new contract in February 2016 and, as of February 2017, was evaluating offers. According to officials, BOP is reviewing several proposals to contract for medical claims adjudication services for all BOP-managed institutions and expects to award the contract in early spring 2017. In fiscal year 2016, BOP contracted for a study to consider various options to enhance its ability to collect and analyze data through BEMR, including data on utilization of health care services inside and outside BOP's institutions. The study considered three options—keeping the existing system as is, enhancing the existing system, or replacing the system entirely—and included costs and recommendations for BOP to consider. As of February 2017, BOP officials indicated that they were still exploring these options.
In fiscal year 2016, to provide potential vendors with utilization data for the pilot of the regional comprehensive health care contract, as described earlier, BOP began searching for computer software to convert its paper claims for outside medical services into electronic files. Such files would include a description of the inmate, the condition treated, the service provided, and the cost of treatment. As of February 2017, BOP officials had not yet identified any cost-effective solution for converting paper claims but said they continue to pursue this option. In fiscal year 2016, BOP officials also attempted to devise a method to collect health care utilization data from each of its institutions; however, they did not implement this approach after receiving feedback from institution-based personnel that it would be too burdensome to implement. As of February 2017, according to officials, BOP continues to explore options to collect and analyze utilization data based on the experiences of other health care systems. For example, BOP convened a federal interagency group to discuss how other federal agencies study utilization rates, and discussed with officials from the California Department of Corrections the approach those officials have taken. BOP officials recognize the importance of finding a solution for collecting utilization data and have identified, and in some cases attempted and then abandoned, various possible solutions. However, BOP officials told us that they were not aware of OMB guidance on how to conduct a cost-effectiveness analysis, which describes how to systematically evaluate options, and said they would review it as they continue to pursue a solution to BOP's lack of utilization data. OMB Circular A-94 calls for conducting a cost-effectiveness analysis when the benefits from competing alternatives are the same or when a policy decision has been made that the benefits must be provided.
By conducting a cost-effectiveness analysis of the potential ways to obtain health care utilization data, BOP will be better positioned to determine the most cost-effective way to collect such data and to start doing so in a timely manner. BOP has not consistently collected and analyzed its institutions' health care spending data to identify additional cost control opportunities, such as through strategic sourcing. Strategic sourcing is a process that moves an organization away from numerous individual procurements toward a broader, aggregate procurement approach. This broader approach generally begins with an analysis of spending (a spend analysis) and the identification of products and services for which a more effective sourcing strategy could be implemented. We have emphasized the importance of conducting a comprehensive spend analysis for strategic sourcing purposes since 2002. The approach provides knowledge about how much is being spent for goods and services, who the buyers are, who the suppliers are, and where the opportunities are to save money and improve performance. BOP officials told us that they do not routinely analyze or review BOP's spending data to identify new opportunities for cost savings through strategic sourcing. BOP procurement officials told us they search for strategic sources for a health care good or service once their colleagues in HSD notify them that BOP needs to obtain it. BOP officials told us it is time-intensive to collect and analyze spending data. Nevertheless, they acknowledged the benefits of strategic sourcing and stated that they need to seek additional strategic sourcing opportunities. In fiscal year 2013, BOP identified health care supplies and services as a target area likely to produce significant savings through strategic sourcing.
Further, according to BOP's April 2016 Strategic Sourcing Program guidance, the key to the strategic sourcing program is a thorough analysis of spending patterns—or a spend analysis—to determine what is being purchased, how it is being purchased, the dollar value of those purchases, and which vendors are involved. This guidance is consistent with GAO's prior work on leading commercial practices in strategic sourcing. However, despite this guidance, BOP officials told us that they had not yet reviewed health care spending data and could not identify any BOP official who would have the time to complete the task. Although BOP has not conducted a spend analysis, it has identified and implemented some strategic sourcing opportunities, such as pursuing national contracts so that all institutions can pay for the same goods or services at the same negotiated rates. When national contracts are not in place, each BOP institution generally procures health care goods and services on an individual basis, and some institutions have individually procured the same medical care equipment or services at varied costs. For example, several institutions have either purchased or leased one particular brand of robotic equipment to dispense medication as part of their pharmaceutical operations, but costs for the equipment varied across the institutions—one had an initial purchase cost of about $122,000, while another had an initial annual lease cost of about $43,000. BOP officials also told us that they identify strategic sourcing opportunities through discussions with other agencies and have participated in interagency collaboration on strategic sourcing. According to BOP documentation, BOP does not have a single data system that can provide health care spending data for a spend analysis; however, existing sources, such as the Federal Procurement Data System-Next Generation (FPDS-NG), could assist BOP in its data collection effort.
For example, BOP records all contracts with estimated values above a certain threshold (currently $3,500) in FPDS-NG, a database used by agencies across the federal government to record procurement information. Additionally, according to BOP officials, each institution's business administrators regularly audit spending data from purchase cards—cards used by government agencies to buy goods or services. One BOP region, for example, created a method to monitor its institutions' spending through its local accounting program, which helped the region identify opportunities to control costs. When officials began tracking some aspects of institutional spending, they found that two institutions within the region were spending $5,000 to $10,000 more per year on biohazardous waste disposal than other institutions. As a result, officials from the region contacted the institutions' management and asked them to identify lower cost vendors, which the institutions did. Conducting spend analyses using these existing and readily available data sources could provide BOP with several benefits, including knowledge about how much is being spent for given products and services, who the buyers and suppliers are, and where opportunities exist for BOP to use strategic sourcing to leverage buying power and save money. BOP has taken bureau-wide actions to help control health care costs through several initiatives but has generally not evaluated the effectiveness of these initiatives. We reviewed BOP documents to identify initiatives aimed at health care cost control for the period from fiscal year 2009 through November 2016. Through our review and discussions with BOP officials, as of February 2017, we found that BOP had 10 initiatives aimed at controlling its health care costs (see table 8). The table provides a description and status of each initiative. BOP had also reported six other cost containment initiatives or systems in its fiscal year 2017 Congressional Budget Justification.
BOP officials explained that although they had previously publicly reported that these initiatives were designed to contain health care costs, during the course of our review they realized that the initiatives' primary purpose was instead clinical or administrative. BOP officials stated that there may be a secondary gain of cost avoidance but that, regardless of cost avoidance, they would have carried out the initiatives. BOP reported these cost containment initiatives or systems again in its fiscal year 2018 Congressional Budget Justification (see table 9). Of the 16 initiatives BOP either identified for us (table 8) or reported in its fiscal year 2018 Congressional Budget Justification (table 9), BOP could provide documentation for only one cost savings estimate, as we show in table 10. BOP provided documentation demonstrating the analysis for its estimated cost savings for the initiative to contract for its dental lab operations. BOP officials stated that BOP does not have a process to evaluate cost savings and that evaluating all of its initiatives would be very labor intensive. BOP officials also told us that they can sometimes assume cost savings. For example, BOP implemented a system of medical and mental health care levels for its institutions and inmates to match health care needs to health care resources. According to BOP officials, the cost control results of this initiative can be assumed; however, in internal BOP documentation, officials acknowledge that although the system was designed to reduce costs, its results are undetermined. BOP officials stated that BOP does not have an automated data system to collect and analyze cost savings data in this manner. BOP had previously developed a process, however, for collecting and analyzing cost data without automated data systems.
Specifically, in 2011, in response to a DOJ OIG recommendation, BOP reported to the OIG that it had established a four-step process to collect and analyze data to determine the cost-effectiveness of current and future health care cost control initiatives for which BOP has or can collect data. As part of the process, BOP reported that it would implement the following four steps: (1) generate an initiative for HSD and/or BOP executive staff approval; (2) identify factors to measure an approved initiative's outcomes and establish benchmarks; (3) capture relevant and available program and cost data (to the extent possible using existing data systems) on at least an annual basis; and (4) analyze the data and produce cost-benefit reports for HSD leadership. At that time, BOP listed seven initiatives it could evaluate using this process, and it provided evidence to the OIG that it had used the process to evaluate one of the seven. As a result, an OIG official told us, the OIG closed the recommendation. However, BOP officials told us they have not continued to use this process to determine the cost-effectiveness of BOP's initiatives, and they do not have another process in place to do so. When we asked BOP officials about the process as described to the OIG, they told us that they were unaware of its existence until we inquired about its use. Standards for Internal Control in the Federal Government calls for management to design control activities to achieve its objectives, including evaluations that compare actual performance to planned or expected results. By regularly evaluating health care cost control initiatives for which it has or can collect data, BOP would be better positioned to determine whether the time and resources it is investing to control costs have been effective, or whether it should alter its path to achieve better outcomes.
Some of BOP's regions and institutions have also undertaken various initiatives that officials described as having a cost control impact. BOP officials told us that regions and institutions make decisions about whether, and which, initiatives to implement based on institutions' varied needs and circumstances. For example, BOP encourages institutions to use contract guards, rather than BOP correctional officers, to supervise inmates during treatment in community medical facilities. According to BOP officials from some institutions we visited, the use of contract guards can result in cost savings when compared to overtime costs paid to salaried correctional officers. However, some institution officials told us that security concerns and emergent medical conditions affect their ability to use contract guards. In addition, officials from several of the institutions we visited said they reduce inmate custody and transportation costs by transporting inmates in groups by bus when they require outside health care. An institution's ability to use busing may depend on inmates' medical needs or the willingness of community providers to accommodate several inmates, according to officials. Additionally, within individual institutions, some BOP officials have developed innovative cost control initiatives. For example, one official at an institution we visited said he had created metrics to measure and manage pharmaceutical costs, the number of outside medical trips, and outside medical service costs, and then took actions that produced measurable reductions across all three. Also, one regional official said he created a mobile ophthalmology clinic to provide specialty eye services to institutions throughout the region and estimated cost savings in service charges and custody costs for each visit. In another example, officials at one FMC we visited told us they expanded the physical therapy program in various ways to control costs.
For example, the program offers its specialized staff to consult on cases at other institutions, uses unpaid physical therapy interns, and encourages its staff to specialize in procedures that would otherwise take place outside the institution, such as electromyography, according to officials. BOP institutions have also sought to use technology to control health care costs. Notably, officials at FMC Lexington said that they have a partnership with an outside medical services provider, which provides the institution with some advanced telehealth equipment (see fig. 6). This partnership has allowed the institution to expand its use of telehealth to more than 20 medical specialties. For fiscal year 2015, FMC Lexington officials estimated a cost savings of over $1.5 million due to their use of telehealth. Many institutions have also sought to acquire other kinds of equipment to control health care costs. For example, some institutions have purchased or leased robotics to support their pharmaceutical operations (see fig. 7), which has reduced the burden on staff of manually filling prescriptions and improved operational efficiency. According to BOP officials, the use of robotics equipment in its institutions' pharmacies allows BOP pharmacists to devote more time to improving patient outcomes by providing clinical pharmacist services. Officials at another FMC we visited told us they purchased a pressure mapping system for the institution's wound program to identify and prevent ulcers, which institution officials estimate has saved nearly $2.8 million for fiscal years 2009 through 2015. According to officials, institutions share and learn about one another's approaches to health care cost control through various platforms. For example, BOP compiles its institutions' reported cost efficiencies and innovations, including those related to health care, in a catalogue designed to serve as a reference for all institution managers regarding the activities of their peers.
Institution officials also share information on health care cost control through regular conferences and meetings. For example, officials at one institution we visited stated that they have an institution nurse check on hospitalized inmates on a weekly basis to determine whether the inmates can return to the institution to complete their care, in order to avoid custody and hospitalization costs. The officials using this practice said they presented it at a BOP symposium where officials from other institutions also presented their best health care practices. Officials also told us that as BOP officials transfer from one institution to another, they can transfer information about health care cost control. BOP's long-standing strategic planning process has focused in part on health care cost control, but BOP's overall planning practices have not incorporated certain elements of sound planning that we have previously identified. These elements generally call for the identification of objectives with a means to measure (1) progress toward objectives and (2) the effectiveness of activities to achieve objectives. They also call for the identification of resources and investments (as shown in table 11). As part of its annual BOP-wide strategic plans, BOP developed the HSD Cost Efficiency and Innovation strategic objective to "maximize health care resources as a cost-containment strategy by applying evidence-based business practices and measuring performance through the use of appropriate industry-wide metrics." However, this objective does not include measures of effectiveness or progress, such as milestones or performance measures, which, when properly supported by reliable data, are critical to effectively measuring improvement. BOP established this objective in 2011 with six underlying activities it refers to as action plans. BOP officials told us that a strategic objective is considered achieved when the action plans are completed.
However, simply completing an action plan does not ensure progress toward the achievement of the objective. For example, one of BOP's action plans under its HSD Cost Efficiency and Innovation strategic objective was to develop a methodology for ensuring that health care expenditures reflect actual costs. BOP established nine metrics to evaluate financial performance and marked this action plan as "complete" in 2013. However, as of August 2016, BOP officials told us that they have not been able to collect comprehensive data for six of the nine metrics because of limitations of BOP's existing data systems or incomplete reporting. Moreover, BOP officials recognized the issues with this action plan, stating that they are reconsidering the metrics they initially established. Thus, while BOP determined it had completed this action plan, the action itself did not produce any meaningful results. Although it is important for BOP to plan the actions or activities that it will take to achieve its objectives, it cannot rely only on the completion of activities to determine that it is achieving them. Without a means to measure progress toward its objective or the effectiveness of its activities, BOP cannot reliably determine whether it is achieving its objective or whether its efforts are effective. Determining whether BOP is making progress toward its HSD Cost Efficiency and Innovation strategic objective and whether its efforts are effective will also allow BOP management to determine what midcourse corrections may be needed in order to meet the objective. In addition to the annual BOP-wide strategic plan, HSD established a five-year HSD Integrated Strategic Plan for the period of 2015 through 2019, which applies a financial management focus to every HSD branch and section.
The financial management goal, which is the same for every HSD branch and section, is to "effectively manage the Health Services budget through an ongoing analysis of staffing, program efficiencies, and utilization of services to identify opportunities for current and future cost containment." Although HSD includes the financial management goal for each HSD branch or section, as well as the branch or section's implementation strategies and expected outcomes, the expected outcomes are generally not measurable. For example, the description of the financial management goal for HSD's Health Services Branch is to "employ new strategies to contain health care costs while maintaining quality of services," and the expected outcome is "BOP health care costs are strategically contained." HSD does not indicate how it will measure progress toward its larger goal, and its expected outcome provides no means of measuring progress. By stating that it aims to contain costs strategically without including performance measures to assess the effectiveness of its activities, HSD's planning does not incorporate the elements of sound planning we have noted. Of the 17 HSD branches and sections included in the HSD integrated strategic plan, only one—Pharmacy—included target benchmarks for financial management. BOP officials told us that because BOP's per capita inmate health care costs are lower than the national health expenditure average per capita costs, BOP believes its efforts have successfully achieved cost savings. However, BOP's health care obligations exceeded $1.3 billion in fiscal year 2016, a significant portion of its nearly $6.9 billion appropriation, and its per capita health care obligations have continued to rise, particularly since 2013. These trends highlight the importance of controlling health care costs regardless of societal trends in health care spending.
Developing goals and objectives that include a means to measure progress would better position BOP management to assess its efforts to control health care costs and ensure that these efforts are effective in achieving desired results. As described previously, sound planning also calls for the identification of necessary resources and investments. BOP officials acknowledged that they do not systematically plan health care cost savings initiatives. As a result, BOP has not consistently identified, prior to implementation, the resources and investments needed to implement its cost savings initiatives or the external factors that could affect the achievement of its goals. For example, since 2011, BOP has planned to establish a mail-order or central fill pharmacy. BOP asserted that establishing a mail-order or central fill pharmacy would help it maximize pharmacy resources by consolidating prescription drugs into one main inventory. BOP reported in its Congressional Budget Justification for fiscal year 2017 that this effort could save BOP $10 million per year in inventory costs. To implement this initiative, BOP began restructuring its pharmaceutical operations at its care level 1 institutions, with plans to continue doing so with care level 2 institutions. Two years later, in 2013, BOP noted that the costs of this initiative would exceed available agency funding and that BOP would have to request additional funding from the Congress to support the initiative. When we asked BOP officials why the initiative had not yet been implemented, they told us that in any given year when BOP sought to implement this initiative, it had either the funding or the space, but not both at the same time. Because BOP did not consider the necessary resources and investments when planning this initiative, it was not implemented earlier.
Without consistently identifying the investments and resources needed, BOP risks expending unnecessary time and limited resources pursuing cost control initiatives that may ultimately fail to achieve their goal of cost control. Every 3 to 4 years, BOP employs a process it calls a "mission analysis" to assess how effectively the care provided in its FMCs meets the needs of inmates with complex medical and mental health disorders. BOP officials told us that they make decisions about how to allocate resources based on mission analyses; however, BOP does not document these analyses and therefore lacks a record to support decision-making for its resource allocations. During a mission analysis, BOP seeks to determine how effectively its health care programs, resources, and community-based services meet the needs of inmates; determine priorities; assess potential efficiencies; and make recommendations for change. As part of a mission analysis, BOP also conducts a cost assessment, which officials described as including a review of outside medical trips, contracted services, elective procedures, and overtime for escorted medical trips to community-based facilities and providers. BOP officials stated that the cost assessment team also reviews relevant documents with the local staff responsible for these activities and gives an oral report once it concludes its work. As a result of these mission analyses, BOP institutions have adjusted or added new health care programs. For example, FMC Butner officials stated that after a mission analysis was completed there, BOP saw the need for, and subsequently established, hospice care services. Also, according to BOP, the mission analysis can provide officials with the information needed to consolidate or shift resources from one care level 4 institution to another.
BOP recently made this kind of shift in resources in December 2016, when it converted Federal Correctional Institution Fort Worth to a Federal Medical Center, and it has already requested 36 full-time equivalents and over $4.7 million to expand and renovate the facility. When we asked BOP for more information on how it conducts the mission analysis process, or for examples of written materials that guided its decision making, BOP officials told us they document only what participants consider the highlights, in an executive summary. For example, the 2016 mission analysis summary for FMC Rochester that BOP provided to us was an overview of the institution's health care services and staffing, along with recommendations for consideration. While the summary included recommendations, it did not include the analysis and findings to support them. BOP officials stated that the mission analysis was an opportunity for Central Office, regional, and institution leadership to meet and verbally discuss the status of the institution's missions, challenges, resources, and ongoing activities. Nevertheless, Standards for Internal Control in the Federal Government calls for documentation as a necessary part of an effective internal control system, which is required to demonstrate the system's design, implementation, and operating effectiveness. Documenting its analyses and findings could help provide reasonable assurance that recommendations to shift resources at BOP's higher care level institutions are based on sound evidence. According to BOP data, those higher care level institutions account for a sizable portion of medical services costs. Specifically, BOP's six care level 4 institutions—prior to adding Fort Worth to this list—accounted for about $350 million (or about 30 percent) of BOP's medical services costs for fiscal year 2015 while housing about 7 percent of the inmate population.
Providing health care, including medical, dental, and mental health care, to the federal inmate population is an important and required part of BOP's broader mission to safely, humanely, and securely confine offenders in prisons. While BOP's inmate population has fallen since fiscal year 2014, health care costs have continued to increase in total and on an annual per capita basis, due in part to factors that BOP cannot control, such as the aging inmate population, increasing pharmaceutical costs, and increasing costs of medical care in the community. Given the fiscal pressures facing BOP, as well as the rest of the government, it is critical that the agency focus its efforts on factors within its sphere of influence to ensure the prudent use of resources. In many ways, BOP's efforts to understand and control its costs have been hampered by limited data or limited application of the data already available. Although BOP recognizes the need for health care utilization data and has identified a number of options for collecting these data, it has not yet assessed the most cost-effective approach for obtaining such data. Conducting a cost-effectiveness analysis of the competing alternative solutions, and taking steps toward implementation of the most effective solution, would allow BOP to dedicate its resources judiciously. Further, while BOP has engaged other federal agencies to leverage purchasing power, it has not consistently collected and analyzed health care spending data across its institutions. Conducting a comprehensive spend analysis could help BOP identify additional strategic sourcing opportunities to acquire medical goods and services more efficiently. In addition, BOP has undertaken various initiatives to control health care costs but does not have assurance that these initiatives are achieving their cost control aim because it has not evaluated them on a regular basis.
BOP has also not incorporated certain elements of sound planning into its strategic health care plans, including identifying a means to measure progress toward goals and objectives and a means to measure the effectiveness of activities. BOP also has not identified the resources and investments needed to implement its initiatives, an additional element of sound planning that can help ensure successful implementation. By incorporating these sound planning elements, BOP could enhance its planning and implementation efforts before expending resources, better positioning itself for success as it aims to control health care costs. Finally, when BOP sets out to make decisions about how to shift or consolidate resources—as it does when conducting its mission analyses—it does not document the analyses and findings that support its recommendations for resource decisions. Strengthening this process would enhance its internal control system and better support the decisions it makes. We recommend that the Director of BOP take the following five actions:
1. To better understand the available opportunities for collecting inmate health care utilization data, BOP should conduct a cost-effectiveness analysis of potential solutions and take steps toward implementation of the most effective solution.
2. To better understand the available opportunities for controlling health care costs, BOP should implement its guidance to conduct "spend analyses" of BOP's health care spending, using data sources already available.
3. To determine the actual or likely effectiveness of its ongoing or planned health care cost control initiatives, BOP should evaluate the extent to which its initiatives achieve their cost control aim.
4. To enhance its strategic planning for and implementation of health care cost control efforts, BOP should incorporate elements of a sound planning approach: establish a means of measuring progress toward, and the effectiveness of, its activities for its current strategic objectives and goals related to controlling health care costs, and identify the resources and investments necessary for implementation of its planned health care cost control initiatives.
5. To improve the reliability and utility of its Federal Medical Center mission analyses, BOP should document the analyses and findings that underlie its recommendations.
We provided a draft of this report to DOJ for review and comment. DOJ did not provide official written comments to include in this report. However, in an e-mail received on June 20, 2017, a DOJ official stated that BOP concurred with all five recommendations. BOP also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Attorney General, selected congressional committees, and other interested parties. In addition, this report is also available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any further questions about this report, please contact me at (202) 512-8777 or goodwing@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. Our objectives for this report were to examine (1) how much the Bureau of Prisons (BOP) has obligated for inmate health care from fiscal years 2009 through 2016 and the factors that affect BOP's costs, (2) the extent to which BOP has data available to understand and help control its costs, and (3) the initiatives BOP has identified and implemented to help control health care costs and the extent to which BOP has effectively planned its health care cost control efforts.
To examine how much BOP has obligated for inmate health services over the past 8 fiscal years and the factors that have affected BOP’s costs, we interviewed officials from BOP’s Administration Division, Health Services Division (HSD), Psychology Services Branch, and Field Acquisition Office, as well as officials within BOP’s six regional offices and at the 10 individual BOP institutions we visited, to understand why obligations have changed over time and the factors that have impacted those changes. We selected the 10 BOP institutions to visit based in part on factors that both BOP and existing research indicated could be affecting costs, as well as other factors that allowed for variation in our sample. These factors included the institutions’ medical care levels; their total and per capita medical services costs; the characteristics of the institutions’ population, including gender and percentage of inmates age 55 or older; and geography. The 10 institutions we visited encompassed all four medical care levels, had a range of per capita medical services costs from low to high, included both male and female populations, housed lower and higher percentages of inmates age 55 or older, were located in remote and metropolitan areas, and covered all six of BOP’s regions. Although not generalizable, the visits provided important insight into how different institution and inmate characteristics impact costs. In addition, we reviewed key documents, such as BOP’s Congressional Budget Justifications for fiscal years 2009 through 2018, BOP’s Annual Financial Statements, and the Department of Justice’s Financial Management Information System Sub-Object Classification (SOC) Code Guide. We also analyzed BOP obligation data from fiscal year 2009 to 2016 on medical services, psychology services, drug and sex offender treatment programs, and medical staff training. We included 8 years of obligations data in order to observe trends over time in health care costs. 
We included all categories of care considered essential components of health care, including mental health and substance use disorder treatment. To determine the per capita obligations, we divided the total obligations by the inmate population at the end of each fiscal year. To adjust the per capita obligations for inflation, we used fiscal year 2016 as the baseline and adjusted each prior year to 2016 dollars using the Bureau of Economic Analysis and IHS Global Insight Outlook inflation factors. To better understand the composition of medical services obligations, we analyzed the breakout of these obligations by SOC code. Specifically, we used the SOC Code Guide to understand the various codes and how they could be grouped. We settled on five overarching categories: (1) salaries and benefits for medical staff, as well as related employee expenses, such as work travel; (2) outside medical care, which includes contractual medical services provided both inside and outside of BOP institutions by non-BOP medical staff; (3) supplies and materials, which includes pharmaceutical purchases and other materials used in the provision of health services; (4) equipment of a durable nature, such as tools and implements, machinery, and information technology hardware; and (5) other, for all obligations that did not fit into the preceding categories, such as transportation of things, rent, communications, utilities, printing, and insurance claims. To better understand the changes in pharmaceutical obligations over our time period, and which illnesses were driving those obligations, we analyzed BOP’s list of top 50 medications for fiscal years 2009 through 2016 and compared the medications to the U.S. National Library of Medicine’s MedlinePlus list of drugs and supplements in order to determine their uses.
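The per capita and inflation-adjustment calculations described above can be sketched as follows. The dollar amounts, population, and price-index values below are hypothetical illustration figures, not BOP data.

```python
# Illustrative sketch of the per capita and inflation-adjustment method
# described above. All figures here are hypothetical, not BOP data.

def per_capita(total_obligations, year_end_population):
    """Per capita obligations: total obligations divided by the
    inmate population at the end of the fiscal year."""
    return total_obligations / year_end_population

def to_fy2016_dollars(amount, year_index, fy2016_index):
    """Restate a prior-year amount in fiscal year 2016 dollars using
    the ratio of price-index values (hypothetical indexes here)."""
    return amount * (fy2016_index / year_index)

# Hypothetical example: $1.0 billion obligated for 150,000 inmates,
# with a price index of 95.0 that year versus 100.0 in fiscal year 2016.
nominal = per_capita(1_000_000_000, 150_000)
real = to_fy2016_dollars(nominal, 95.0, 100.0)
```

Restating every prior year in fiscal year 2016 dollars makes the per capita figures directly comparable across the 8-year period.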
We compiled a list of medications used to treat hepatitis C, cancer, human immunodeficiency virus/acquired immune deficiency syndrome (HIV/AIDS), and psychotropic medications, which are generally used to treat mental illness. The remaining medications were used to treat a variety of illnesses, or were too few to create a category, so we grouped them into an “other” category. To assess the reliability of BOP’s obligations and expenditures data, we (1) performed electronic data testing and looked for obvious errors in accuracy and completeness, and (2) interviewed agency officials knowledgeable about BOP’s budget to determine the processes in place to ensure the integrity of the data. We determined that the data were sufficiently reliable for the purposes of this report. Finally, since geographic location may affect the provision or cost of inmate health care, we analyzed BOP institutions’ proximity to hospitals. To do this, we used geographic information services software to map the addresses of all BOP institutions and determine how many hospitals were within a 20-mile radius of each institution. We created four categories: institutions with no hospitals within 20 miles, institutions with 1 to 5 hospitals within 20 miles, institutions with 6 to 10 hospitals within 20 miles, and institutions with 11 or more hospitals within 20 miles. To examine the extent to which BOP has data available to understand and help control its costs, the initiatives BOP has implemented and identified to help control health care costs, and how effectively BOP has planned and implemented these initiatives, we reviewed relevant BOP program statements. We conducted interviews with BOP officials from the Health Services, Reentry Services, and Information, Policy, and Public Affairs Divisions, as well as the Office of General Counsel in BOP’s Central Office, given their responsibilities in this arena.
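The four hospital-proximity categories described above amount to a simple binning of counts per institution; a minimal sketch, where the input counts are hypothetical:

```python
# Minimal sketch of the hospital-proximity grouping described above.
# Input counts are hypothetical; the four bins match the categories
# used in the analysis.

def proximity_category(hospital_count):
    """Map the number of hospitals within a 20-mile radius of an
    institution to one of the four analysis categories."""
    if hospital_count == 0:
        return "no hospitals within 20 miles"
    elif hospital_count <= 5:
        return "1 to 5 hospitals within 20 miles"
    elif hospital_count <= 10:
        return "6 to 10 hospitals within 20 miles"
    return "11 or more hospitals within 20 miles"

# Hypothetical counts for four institutions.
counts = [0, 3, 8, 14]
categories = [proximity_category(c) for c in counts]
```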
We also interviewed officials from each of the six BOP regions and the Field Acquisition Office. Further, we reviewed current Office of Management and Budget (OMB) and BOP policies on cost-effectiveness analysis to analyze BOP’s efforts to gather data. Additionally, for background on correctional health care costs and BOP health care, we reviewed articles and reports from various organizations, including the Congressional Budget Office, Congressional Research Service, and several academic and research institutes and interviewed researchers and economists from some of them to understand their methodologies and explore their findings. Further, we interviewed two correctional health clinicians from non-federal medical institutions due to their knowledge of the aging inmate population and telehealth. To determine the extent of available data on BOP health care costs, we searched the Federal Procurement Data System-Next Generation (FPDS-NG) for BOP contract actions by institution for fiscal years 2009 through 2015 and obtained pharmaceutical obligations data from BOP for that same period. Based on these steps, we determined the data were sufficiently reliable for the purposes of our reporting objective. In addition to interviews, to determine how well BOP’s health care cost control planning mechanisms work, we obtained and reviewed selected portions of the annual BOP-wide strategic plans, the BOP’s HSD Governing Board Meeting Minutes through 2015, and the HSD Strategic Plan for 2015 through 2019. To identify BOP’s health care cost control efforts, their status of implementation, and the extent to which BOP had conducted cost estimates for each, we relied on the testimonial and documentary evidence the aforementioned BOP officials provided, reviewed previous GAO reports, Department of Justice (DOJ) Office of Inspector General (OIG) reports, and BOP Congressional Budget Justifications for fiscal years 2009 through 2018. 
We summarized this information into a data collection instrument for verification by BOP officials and requested additional data and supporting documentation for each. We reviewed the information BOP provided in the data collection instrument to identify missing or unclear information and resubmitted the data collection instrument for BOP’s secondary verification. We also interviewed officials from DOJ OIG and DOJ’s Justice Management Division to discuss prior report recommendations on BOP health care costs. Further, at the 10 institutions we visited, we interviewed institution officials and observed health care programs and cost control efforts. We also requested and obtained several documents relating to the costs and cost savings of regional and institutional initiatives to control health care costs. We conducted this performance audit from February 2016 to June 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. BOP has seven care level 4 institutions, also referred to as Federal Medical Centers (FMC), located throughout the United States. The FMCs provide care to inmates in need of more advanced medical or mental health care. Descriptions of each FMC follow. FMC Butner (Butner, North Carolina) is part of an FCC and serves as a major medical and psychiatric referral center for male inmates. FMC Butner has all specialty areas of medicine and is the primary referral center for oncology, providing chemotherapy and radiation therapy. FMC Butner also manages a broad range of subacute and chronically ill inmates and has an orthopedic surgery program available. Dialysis services are provided on-site.
Butner also has an extensive inpatient forensics program. As of May 2017, there were 927 inmates at FMC Butner. FMC Carswell (Fort Worth, Texas) serves as the major medical and psychiatric referral center for female inmates. All specialty areas of medicine are available through in-house staff and community-based consultant specialists. As of May 2017, there were 1,171 inmates at FMC Carswell. FMC Devens (Devens, Massachusetts) serves both medical and mental health care needs for male inmates. All specialty areas of medicine are available through in-house staff and community-based consultant specialists. Additional services provided include dialysis treatment for inmates with end-stage renal failure. As of May 2017, there were 956 inmates at FMC Devens. FMC Fort Worth (Fort Worth, Texas) officially began its medical center mission on December 7, 2016. When fully operational, FMC Fort Worth will serve both medical and mental health needs of male inmates. The institution will expand from 36 to 72 medical beds having 24-hour nursing care, and will include a 21-bed inpatient forensics unit and a care level 3 Mental Health Step Down Unit Program. BOP anticipates that all necessary building renovations and hiring of medical center staff will be completed within 24 months of its formal conversion to a medical center mission. As of May 2017, there were 1,492 inmates at FMC Fort Worth. FMC Lexington (Lexington, Kentucky) serves male inmates. All specialty areas of medicine are available by in-house staff and community-based consultant specialists. FMC Lexington serves as the primary referral center for inmates with most types of leukemia and lymphoma, and performs outpatient forensic studies. As of May 2017, there were 1,356 inmates at FMC Lexington. FMC Rochester (Rochester, Minnesota) serves as a major medical and mental health referral center for male inmates. 
FMC Rochester is the primary referral center for inmates with end-stage liver disease and advanced HIV infection, as well as other infectious diseases requiring long-term management. FMC Rochester provides extensive psychiatric and psychology services, including inpatient psychiatry services and forensic studies. As of May 2017, there were 665 inmates at FMC Rochester. U.S. Medical Center for Federal Prisoners (MCFP) Springfield (Springfield, Missouri) serves as a major medical and psychiatric referral center for male inmates. MCFP Springfield provides all specialty areas of medicine through in-house staff and community-based consultant specialists, and is the primary referral center for high security inmates. The institution maintains extensive psychiatric and psychological services, including inpatient forensic studies. It is the major kidney dialysis center for the BOP. As of May 2017, there were 1,031 inmates at MCFP Springfield. In addition to the contact named above, Joy Booth (Assistant Director); Valerie Kasindi (Analyst-in-Charge); Dina Shorafa; Sara Rizik; Lori Achman; Pedro Almoguera; Willie Commons, III; Eric Hauswirth; Susan Hsu; Amanda Miller; Claire Peachey; and William T. Woods made key contributions to this report.
As of June 2017, BOP was responsible for the custody and care, including health care, of about 154,000 inmates housed in BOP institutions. Health care includes medical, dental, and psychological treatment. BOP provides most care inside its institutions, but transports inmates outside when circumstances warrant. GAO was asked to review health care costs at BOP institutions. This report addresses: (1) BOP's costs to provide health care services and factors that affect costs; (2) the extent to which BOP has data to help control health care costs; and (3) the extent to which BOP has planned and implemented cost control efforts. GAO analyzed BOP health care obligations data for fiscal years 2009 through 2016, gathered information on BOP's health care cost control initiatives through a data collection instrument, and reviewed BOP's health care-related strategic plans. GAO also interviewed BOP officials and visited 10 BOP institutions, selected, in part, for their total and per capita medical services costs. From fiscal years 2009 through 2016, the Bureau of Prisons (BOP) obligated more than $9 billion for the provision of inmate health care, and several factors affected these costs. Obligations for health care rose from $978 million in fiscal year 2009 to $1.34 billion in fiscal year 2016, an increase of about 37 percent. On a per capita basis, and adjusting for inflation, health care obligations rose from $6,334 in fiscal year 2009 to $8,602 in fiscal year 2016, an increase of about 36 percent. BOP cited an aging inmate population, rising pharmaceutical prices, and increasing costs of outside medical services as factors contributing to its overall cost increases. BOP lacks, or does not analyze, certain health care data necessary to understand and control its costs.
For example, while BOP's data can show how much BOP is spending overall on health care provided inside and outside an institution, BOP lacks utilization data, that is, data showing how much it spends on an individual inmate's health care or on a particular health care service. BOP has identified potential solutions for gathering utilization data, but has not conducted a cost-effectiveness analysis of these solutions to identify the most effective one. BOP also does not analyze health care spending data, i.e., what its institutions are buying, from whom, and how much they spend. BOP has pursued some opportunities to control its health care spending through interagency collaboration and national contracts, but it has not conducted a spend analysis to better understand trends. Doing so would provide BOP with better information to acquire goods and services more strategically. BOP has initiatives aimed at controlling health care costs but could better assess their effectiveness and apply a sound planning approach. Since 2009, BOP has implemented or planned a number of initiatives related to health care cost control, but has not evaluated their cost-effectiveness. Further, BOP has engaged in a strategic planning process to help control costs, but has not incorporated certain elements of a sound planning approach, such as developing a means to measure progress toward its objectives and identifying the resources and investments needed for its initiatives. By incorporating these elements, BOP could enhance its planning and implementation efforts before expending resources, better positioning itself for success as it aims to control health care costs.
GAO is making five recommendations, including that BOP conduct a cost-effectiveness analysis to identify the most effective method to collect health care utilization data; conduct a spend analysis of health care spending data; evaluate cost control initiatives; and enhance its planning efforts by incorporating elements of a sound planning approach. BOP concurred with the recommendations.
The Cassini Program, sponsored by NASA, the European Space Agency, and the Italian Space Agency, began in fiscal year 1990. NASA’s Jet Propulsion Laboratory (JPL), which is operated under contract by the California Institute of Technology, manages the Cassini Program. The spacecraft is expected to arrive at Saturn in July 2004 and begin a 4-year period of scientific observations to obtain detailed information about the composition and behavior of Saturn and its atmosphere, magnetic field, rings, and moons. Power for the Cassini spacecraft is generated by three radioisotope thermoelectric generators (RTG) that convert heat from the natural radioactive decay of plutonium dioxide into electricity. The spacecraft also uses 117 radioisotope heater units to provide heat for spacecraft components. The spacecraft carries 72 pounds of radioactive plutonium dioxide in the RTGs and 0.7 pounds in the heater units. The Department of Energy (DOE) provided the RTGs and their plutonium dioxide fuel, and the Department of Defense (DOD) provided the Titan IV/Centaur rocket to launch the spacecraft. According to NASA and JPL officials, most deep space missions beyond Mars, including the Cassini mission, must use RTGs to generate electrical power. The only proven non-nuclear source of electrical power for spacecraft is photovoltaic cells, also called solar arrays. However, the energy available from sunlight falls off with the square of the distance from the sun. Thus, existing solar arrays cannot produce sufficient electricity beyond Mars’ orbit to operate most spacecraft and their payloads. Before launching a spacecraft carrying radioactive materials, regulations implementing federal environmental laws require the sponsoring agency, in this instance NASA, to assess and mitigate the potential risks and effects of an accidental release of radioactive materials during the mission.
As part of any such assessments, participating agencies perform safety analyses in accordance with administrative procedures. To obtain the necessary presidential approval to launch space missions carrying large amounts of radioactive material, such as Cassini, NASA is also required to convene an interagency review of the nuclear safety risks posed by the mission. RTGs have been used on 25 space missions, including Cassini, according to NASA and JPL officials. Three of these missions failed due to problems unrelated to the RTGs. Appendix I describes those missions and the disposition of the nuclear fuel on board each spacecraft. The processes used by NASA to assess the safety and environmental risks associated with the Cassini mission reflected the extensive analysis and evaluation requirements established in federal laws, regulations, and executive branch policies. For example, DOE designed and tested the RTGs to withstand likely accidents while preventing or minimizing the release of the RTG’s plutonium dioxide fuel, and a DOE administrative order required the agency to estimate the safety risks associated with the RTGs used for the Cassini mission. Also, federal regulations implementing the National Environmental Policy Act of 1969 required NASA to assess the environmental and public health impacts of potential accidents during the Cassini mission that could cause plutonium dioxide to be released from the spacecraft’s RTGs or heater units. In addition, a directive issued by the Executive Office of the President requires the convening of an ad hoc interagency Nuclear Safety Review Panel. This panel, supported by technical experts from NASA, other federal agencies, national laboratories, and academia, reviewed the nuclear safety analyses prepared for the Cassini mission.
After completion of the interagency review process, NASA requested and received nuclear launch safety approval from the Office of Science and Technology Policy, within the Executive Office of the President, to launch the Cassini spacecraft. In addition to the risks associated with a launch accident, there is also a small chance that the Cassini spacecraft could release nuclear material either during an accidental reentry into Earth’s atmosphere when the spacecraft passes by Earth in August 1999 or during the interplanetary journey to Saturn. Potential reentry accidents were also addressed during the Cassini safety, environmental impact, and launch review processes. DOE originally developed the RTGs used on the Cassini spacecraft for NASA’s previous Galileo and Ulysses missions. Figure 1 shows the 22-foot, 12,400-pound Cassini spacecraft and some of its major systems, including two of the spacecraft’s three RTGs. DOE designed and constructed the RTGs to prevent or minimize the release of plutonium dioxide fuel from the RTG fuel cells in the event of an accident. DOE performed physical and analytical testing of the RTG fuel cells, known as general-purpose heat source units, to determine their performance and assess the risks of accidental fuel releases. Under an interagency agreement with NASA, DOE constructed the RTGs for the Cassini spacecraft and assessed the mission risks as required by a DOE administrative order. DOE’s final safety report on the Cassini mission, published in May 1997, documents the results of the test, evaluation, and risk assessment processes for the RTGs. The RTG fuel cells have protective casings composed of several layers of heat- and impact-resistant shielding and a strong, thin metal shell around the fuel pellets. According to NASA and DOE officials, the shielding will enable the fuel cells to survive likely types of launch or orbital reentry accidents and prevent or minimize the release of plutonium dioxide fuel.
In addition to the shielding, the plutonium dioxide fuel itself is formed into ceramic pellets designed to resist reentry heat and breakage caused by an impact. If fuel is released from an impact-damaged fuel cell, the pellets are designed to break into large pieces rather than fine particles, because inhalation of very small particles is the primary health risk posed by plutonium dioxide. Federal regulations implementing the National Environmental Policy Act of 1969 required NASA to prepare an environmental impact statement for the Cassini mission. To meet these requirements, NASA conducted quantitative analyses of the types of accidents that could cause a release of plutonium dioxide from the RTGs and the possible health effects that could result from such releases. NASA also used DOE’s RTG safety analyses and Air Force safety analyses of the Titan IV/Centaur rocket, which launched the Cassini spacecraft. NASA published a final environmental impact statement for the Cassini mission in June 1995. In addition to the analyses of potential environmental impacts and health effects, the document included and responded to public comments on NASA’s analyses. NASA also published a final supplemental environmental impact statement for the Cassini mission in June 1997. According to NASA officials, NASA published the supplemental statement to keep the public informed of changes in the potential impacts of the Cassini mission based on analyses conducted subsequent to the publication of the final environmental impact statement. The supplemental statement used DOE’s updated RTG safety analyses to refine the estimates of risks for potential accidents and document a decline in the overall estimate of risk for the Cassini mission. The environmental impact assessment process for the Cassini mission ended formally in August 1997 when NASA issued a Record of Decision for the final supplemental environmental impact statement.
However, if the circumstances of the Cassini mission change and affect the estimates of accident risks, NASA is required to reassess the risks and determine the need for any additional environmental impact documentation. Agencies planning to transport nuclear materials into space are required by a presidential directive to obtain approval from the Executive Office of the President before launch. To prepare for and support the approval decision, the directive requires that an ad hoc Interagency Nuclear Safety Review Panel review the lead agencies’ nuclear safety assessments. Because the Cassini spacecraft carries a substantial amount of plutonium, NASA convened a panel to review the mission’s nuclear safety analyses. NASA formed the Cassini Interagency Nuclear Safety Review Panel shortly after the program began in October 1989. The panel consisted of four coordinators, one each from NASA, DOE, DOD, and the Environmental Protection Agency, plus a technical advisor from the Nuclear Regulatory Commission. The review panel, supported by approximately 50 technical experts from these and other government agencies and outside consultants, analyzed and evaluated NASA, JPL, and DOE nuclear safety analyses of the Cassini mission and performed its own analyses. The panel reported no significant differences between the results of its analyses and those done by NASA, JPL, and DOE. The Cassini launch approval process ended formally in October 1997 when the Office of Science and Technology Policy, within the Executive Office of the President, gave its nuclear launch safety approval for NASA to launch the Cassini spacecraft. NASA officials told us that, in deciding whether to approve the launch of the Cassini spacecraft, the Office of Science and Technology Policy reviewed the previous NASA, JPL, DOE, and review panel analyses and obtained the opinions of other experts.
NASA, JPL, and DOE used physical testing and computer simulations of the RTGs under accident conditions to develop quantitative estimates of the accident probabilities and potential health risks posed by the Cassini mission. To put the Cassini risk estimates in context, NASA compares them with the risks posed by exposure to normal background radiation. In making this comparison, NASA estimates that, over a 50-year period, the average person’s risk of developing cancer from exposure to normal background radiation is on the order of 100,000 times greater than from the highest risk accident for the Cassini mission. For the launch portion of the Cassini mission, NASA estimated that the probability of an accident that would release plutonium dioxide was 1 in 1,490 during the early part of the launch and 1 in 476 during the later part of the launch and Earth orbit. The estimated health effect of either type of accident is that, over the succeeding 50-year period, less than one more person would die of cancer caused by radiation exposure than if there were no accident. Although the Titan IV/Centaur rocket is the United States’ most powerful launch vehicle, it does not have enough energy to propel the Cassini spacecraft on a direct route to Saturn. Therefore, the spacecraft will perform two swingby maneuvers at Venus in April 1998 and June 1999, one at Earth in August 1999, and one at Jupiter in December 2000. In performing the maneuvers, the spacecraft will use the planets’ gravity to increase its speed enough to reach Saturn. Figure 2 illustrates the Cassini spacecraft’s planned route to Saturn. NASA estimates that there is less than a one in one million chance that the spacecraft could accidentally reenter Earth’s atmosphere during the Earth swingby maneuver. To verify the estimated probability of an Earth swingby accident, NASA formed a panel of independent experts, which reported that the probability estimates were sound and reasonable. 
If such an accident were to occur, the estimated health effect is that, during the succeeding 50-year period, 120 more people would die of cancer than if there were no accident. If the spacecraft were to become unable to respond to guidance commands during its interplanetary journey, the spacecraft would drift in an orbit around the sun, from which it could reenter Earth’s atmosphere in the future. However, the probability that this accident would occur and release plutonium dioxide is estimated to be one in five million. The estimated health effect of this accident is the same as for an Earth swingby accident. Due to the spacecraft’s high speed, NASA and DOE projected that an accidental reentry during the Earth swingby maneuver would generate temperatures high enough to damage the RTGs and release some plutonium dioxide. As a safety measure, JPL designed the Earth swingby trajectory so that the spacecraft will miss Earth by a wide margin unless the spacecraft’s course is accidentally altered. About 50 days before the swingby, Cassini mission controllers will begin making incremental changes to the spacecraft’s course, guiding it by Earth at a distance of 718.6 miles. According to NASA and JPL officials, the Cassini spacecraft and mission designs incorporate other precautions to minimize the possibility that an accident could cause the spacecraft to reenter during either the Earth swingby maneuver or the interplanetary portion of its journey to Saturn. NASA regulations require that, as part of the environmental analysis, alternative power sources be considered for missions planning to use nuclear power systems. JPL’s engineering study of alternative power sources for the Cassini mission concluded that RTGs were the only practical power source for the mission. The study stated that, because sunlight is so weak at Saturn, solar arrays able to generate sufficient electrical power would have been too large and heavy for the Titan IV/Centaur to launch.
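The sizing problem the study describes follows from the inverse-square falloff of sunlight: intensity at a distance of d astronomical units is 1/d² of the intensity at Earth's orbit. A minimal sketch, using approximate mean orbital distances and hypothetical power and efficiency figures rather than actual Cassini requirements:

```python
# Why solar arrays at Saturn would be impractically large: sunlight
# intensity falls off as 1/d^2 with distance from the sun. The 700 W
# power requirement and 19 percent efficiency below are hypothetical
# illustration values, not Cassini design figures.

SOLAR_CONSTANT_W_PER_M2 = 1361.0  # approximate sunlight intensity at 1 AU

def intensity(distance_au):
    """Sunlight intensity at a given distance, per the inverse-square law."""
    return SOLAR_CONSTANT_W_PER_M2 / distance_au ** 2

def array_area_m2(power_w, distance_au, efficiency):
    """Array area needed to generate power_w at the given distance."""
    return power_w / (intensity(distance_au) * efficiency)

# Approximate mean orbital distances in astronomical units.
for body, d in [("Earth", 1.0), ("Mars", 1.52), ("Jupiter", 5.2), ("Saturn", 9.5)]:
    area = array_area_m2(700.0, d, 0.19)
    print(f"{body:8s} {intensity(d):7.1f} W/m^2   {area:7.1f} m^2")
```

At Saturn, roughly 9.5 AU from the sun, sunlight is about 1 percent as strong as at Earth, so an array sized to deliver a given power at Earth would need to be about 90 times larger, which is the relationship Figure 3 illustrates.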
The study also noted that, even if the large arrays could have been launched to Saturn on the Cassini spacecraft, they would have made the spacecraft very difficult to maneuver and increased the mission's risk of failure due to the arrays' uncertain reliability over the length of the 12-year mission. Figure 3 compares the relative sizes of solar arrays required to power the Cassini spacecraft at various distances from the sun, including Saturn. Since 1968, NASA, DOE, and DOD have together invested more than $180 million in solar array technology, according to a JPL estimate. The agencies are continuing to invest in improving both solar and nuclear spacecraft power generation systems. For example, in fiscal year 1998, NASA and DOD will invest $10 million for research and development of advanced solar array systems, and NASA will invest $10 million for research and development of advanced nuclear-fueled systems. NASA officials in charge of developing spacecraft solar array power systems said that the current level of funding is prudent, given the state of solar array technology, and that the current funding meets the needs of current agency research programs. The fiscal year 1998 budget of $10 million for solar array systems exceeds the estimated 30-year average annual funding level of $6 million (not adjusted for inflation). According to NASA and JPL officials, solar arrays offer the most promise for future non-nuclear-powered space missions. Two improvements to solar array systems that are currently being developed could extend the range of some solar array-powered spacecraft and science operations beyond the orbit of Mars. New types of solar cells and arrays under development will more efficiently convert sunlight into electricity. Current cells operate at 18 to 19 percent efficiency, and the most advanced cells under development are intended to achieve 22 to possibly 30 percent efficiency.
Although the improvement in conversion efficiency will be relatively small, it could enable some spacecraft to use solar arrays to operate as far out as Jupiter’s orbit. Another improvement to solar arrays under development will add lenses or reflective surfaces to capture and concentrate more sunlight onto the arrays, enabling them to generate more electricity. NASA’s technology demonstration Deep Space-1 spacecraft, scheduled for launch in July 1998, will include this new technology. Over the long term, limitations inherent to solar array technology will preclude its use on many deep space missions. The primary limitation is the diminishing energy in sunlight as distance from the sun increases. No future solar arrays are expected to produce enough electricity to operate a spacecraft farther than Jupiter’s orbit. Another key limitation is that solar arrays cannot be used for missions requiring operations in extended periods of darkness, such as those on or under the surface of a planet or moon. Other limitations of solar arrays, including their vulnerability to damage from radiation and temperature extremes, make the cells unsuitable for missions that encounter such conditions. NASA and DOE are working on new nuclear-fueled generators for use on future space missions. NASA and DOE’s Advanced Radioisotope Power Source Program is intended to replace RTGs with an advanced nuclear-fueled generator that will more efficiently convert heat into electricity and require less plutonium dioxide fuel than existing RTGs. NASA and DOE plan to flight test a key component of the new generator on a space shuttle mission. The test system will use electrical power to provide heat during the test. If development of this new generator is successful, it will be used on future missions. NASA is currently studying eight future space missions between 2000 and 2015 that will likely use nuclear-fueled electrical generators. 
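The diminishing energy in sunlight noted above follows from the inverse-square law. The sketch below is an illustration, not drawn from the report: the solar-constant value and mean orbital distances (in astronomical units) are standard approximate figures assumed here to show why an array sized for Earth orbit must grow roughly a hundredfold to deliver the same power at Saturn, consistent with the comparison in Figure 3.

```python
# Solar irradiance falls off with the square of distance from the sun.
# The 1 AU solar constant and the orbital radii below are standard
# approximate values assumed for illustration; they are not from the report.
EARTH_IRRADIANCE_W_M2 = 1361.0  # approximate solar constant at 1 AU

distances_au = {"Earth": 1.0, "Mars": 1.52, "Jupiter": 5.20, "Saturn": 9.54}

for body, d in distances_au.items():
    irradiance = EARTH_IRRADIANCE_W_M2 / d**2
    # Array area needed to match a 1 m^2 array at Earth scales as d^2.
    print(f"{body:8s} {irradiance:7.1f} W/m^2  (array ~{d**2:5.1f}x larger)")
```

At Jupiter the available sunlight is only a few percent of that at Earth, and at Saturn roughly one percent, which is why the report treats Jupiter's orbit as the practical outer limit for solar arrays.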
These missions are Europa Orbiter, Pluto Express, Solar Probe, Interstellar Probe, Europa Lander, Io Volcanic Observer, Titan Organic Explorer, and Neptune Orbiter. On the basis of historical experience, NASA and DOE officials said that about one-half of such missions typically obtain funding and are launched. In addition, several planned Mars missions would carry from 5 to 30 radioisotope heater units to keep spacecraft components warm. Each heater unit would contain about 0.1 ounces of plutonium dioxide. In accordance with NASA’s current operating philosophy, spacecraft for future space science missions will be much smaller than those used on current deep space missions. Future spacecraft with more efficient electrical systems and reduced demands for electrical power, when coupled with the advanced nuclear-fueled generators, will require significantly less plutonium dioxide fuel. For example, the new nuclear-fueled generator that NASA studied for use on the Pluto Express spacecraft is projected to need less than 10 pounds of plutonium dioxide compared with 72 pounds on the Cassini spacecraft. According to NASA and DOE officials, spacecraft carrying much smaller amounts of radioactive fuel will reduce human health risks because it is anticipated that less plutonium dioxide could potentially be released in the event of an accident. NASA and JPL officials also pointed out that planned future missions may not need to use Earth swingby trajectories. Depending on the launch vehicle used, the smaller spacecraft planned for future missions may be able to travel more direct routes to their destinations without the need to use Earth swingby maneuvers to increase their speed. In written comments on a draft of this report, NASA said that the report fairly represents NASA’s environmental and nuclear safety processes for the Cassini space mission (see app. II). 
In addition, NASA and DOE also provided technical and clarifying comments for this report, which we incorporated as appropriate. To obtain information about the processes used by NASA to assess the safety and environmental risks of the Cassini mission, NASA’s efforts and costs to develop non-nuclear power sources for deep space missions, and future space missions for which nuclear-fueled power sources will be used, we interviewed officials at NASA Headquarters in Washington, D.C.; JPL in Pasadena, California; and DOE’s Office of Nuclear Energy, Science, and Technology in Germantown, Maryland. We reviewed the primary U.S. legislation and regulations applicable to the use of nuclear materials in space and NASA, JPL, and DOE documents pertaining to the safety and environmental assessment processes that were used for the Cassini mission. We reviewed the Cassini Safety Evaluation Report prepared by the Cassini Interagency Nuclear Safety Review Panel. We also reviewed NASA and JPL documents on the development of improved non-nuclear and nuclear electrical power sources for spacecraft and studies for future nuclear-powered space missions. We did not attempt to verify NASA and DOE estimates of risks associated with the Cassini mission or the financial and other data provided by the agencies. We performed our work from September 1997 to February 1998 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Director of the Office of Management and Budget, the Administrator of NASA, the Secretary of Energy, and appropriate congressional committees. We will also make copies available to other interested parties on request. Please contact me at (202) 512-4841 if you or your staff have any questions concerning this report. Major contributors to this report are Jerry Herley and Jeffery Webster. Since 1961 the United States has launched 25 spacecraft with radioisotope thermoelectric generators (RTG) on board. 
Three of the missions failed, and the spacecraft reentered Earth's atmosphere. However, none of the failures were due to problems with the RTGs. In 1964, a TRANSIT 5BN-3 navigational satellite malfunctioned. Its single RTG, which contained 2.2 pounds of plutonium fuel, burned up during reentry into Earth's atmosphere. This RTG was intended to burn up in the atmosphere in the event of a reentry. In 1968, a NIMBUS-B-1 weather satellite was destroyed after its launch vehicle malfunctioned. The plutonium fuel cells from the spacecraft's two RTGs were recovered intact from the bottom of the Santa Barbara Channel near the California coast. According to National Aeronautics and Space Administration (NASA) and Department of Energy (DOE) officials, no radioactive fuel was released from the fuel cells, and the fuel was recycled and used on a subsequent space mission. Figure I.1 shows the intact fuel cells during the underwater recovery operation. In 1970, the Apollo 13 Moon mission was aborted due to mechanical failures while traveling to the moon. The spacecraft and its single RTG, upon return to Earth, were jettisoned into the Pacific Ocean, in or near the Tonga Trench. According to DOE officials, no release of radioactive fuel was detected.
Pursuant to a congressional request, GAO reviewed the use of nuclear power systems for the Cassini spacecraft and other space missions, focusing on: (1) the processes the National Aeronautics and Space Administration (NASA) used to assess the safety and environmental risks associated with the Cassini mission; (2) NASA's efforts to consider the use of a non-nuclear power source for the Cassini mission; (3) the federal investment associated with the development of non-nuclear power sources for deep space missions; and (4) NASA's planned future nuclear-powered space missions. GAO noted that: (1) federal laws and regulations require analysis and evaluation of the safety risks and potential environmental impacts associated with launching nuclear materials into space; (2) as the primary sponsor of the Cassini mission, NASA conducted the required analyses with assistance from the Department of Energy (DOE) and the Department of Defense (DOD); (3) in addition, a presidential directive required that an ad hoc interagency panel review the Cassini mission safety analyses; (4) the directive also required that NASA obtain presidential approval to launch the spacecraft; (5) NASA convened the required interagency review panel and obtained launch approval from the Office of Science and Technology Policy, within the Office of the President; (6) while the evaluation and review processes can minimize the risks of launching radioactive materials into space, the risks themselves cannot be eliminated, according to NASA and Jet Propulsion Laboratory (JPL) officials; (7) as required by NASA regulations, JPL considered using solar arrays as an alternative power source for the Cassini mission; (8) engineering studies conducted by JPL concluded that the solar arrays were not feasible for the Cassini mission primarily because they would have been too large and heavy and had uncertain reliability; (9) during the past 30 years, NASA, DOE, and DOD have invested over $180 million in solar array 
technology, the primary non-nuclear power source; (10) in FY 1998, NASA and DOD will invest $10 million to improve solar array systems, and NASA will invest $10 million to improve nuclear-fueled systems; (11) according to NASA and JPL officials, advances in solar array technology may expand its use for some missions; however, there are no currently practical alternatives to using nuclear-fueled power generation systems for most missions beyond the orbit of Mars; (12) NASA is planning eight future deep space missions between 2000 and 2015 that will likely require nuclear-fueled power systems to generate electricity for the spacecraft; (13) none of these missions have been approved or funded, but typically about one-half of such planned missions are eventually funded and launched; (14) advances in nuclear-fueled systems and the use of smaller, more efficient spacecraft are expected to substantially reduce the amount of nuclear fuel carried on future deep space missions; and (15) thus, NASA and JPL officials believe these future missions may pose less of a health risk than current and prior missions that have launched radioisotope thermoelectric generators into space.
The receipt, processing, and retrieval of vast quantities of paper forms and documents is one of IRS’s most critical problems. IRS annually receives over 200 million tax returns with multiple attachments, about 1 billion information documents (for example, W2s and 1099s), and several hundred million pieces of taxpayer correspondence. To process this enormous volume of paperwork, IRS uses labor-intensive processes and systems to (1) convert data from tax returns into machine usable form, (2) maintain taxpayer accounts, including current and historical data, (3) ensure refunds are prompt, and (4) prepare bills for tax payments due. Retrieving paper forms and documents involves over 1.2 billion tax returns stored in over 1 million square feet of space. Also, IRS collects most of the government’s revenue, currently over $1.25 trillion annually, and it employs over 113,000 people, more than any other civilian agency. IRS is headquartered in Washington, D.C., and has 7 regional offices, 63 district offices, 10 service centers, and 2 computer centers. Upon receipt at IRS’s service centers, paper-based tax returns and related supporting and information documents are manually extracted from envelopes, sorted, batched, coded, and transcribed into electronic format. The service centers send electronically formatted data to IRS’s main computer center in Martinsburg, West Virginia. IRS stores nearly all the paper supplied by taxpayers as part of, or in support of, their tax filings. Tax return processing at IRS service centers was designed in the late 1950s. Today, nearly 4 decades later, IRS still processes tax return data using the processes instituted when automated systems were first installed in the service centers. In today’s technological climate, taxpayers have come to expect faster, better, more convenient service in virtually every facet of their lives. 
To meet these expectations, IRS’s outdated tax processes and systems are being used to electronically capture and provide more and more information. At the same time, the number of tax-related documents is greatly expanding. Between the late 1960s and the early 1980s, IRS began several efforts to modernize its operations. These efforts did not succeed, and on numerous occasions the Congress expressed concern about the cost of the redesign efforts, the inadequacy of security controls over taxpayer information, the lack of clear management responsibility for the programs, and the paucity of technical and managerial expertise. In late 1986, IRS produced plans for a new modernization effort, known today as Tax Systems Modernization (TSM). IRS estimates that TSM could cost between $8 billion and $10 billion through 2001. Through fiscal year 1995, IRS will have spent or obligated $2.5 billion for TSM, which comprises 36 systems development projects. About $1.1 billion more has been requested for fiscal year 1996. IRS has developed a business vision to guide its modernization efforts. This vision calls for a work environment that is virtually paper-free, where taxpayer account updates are rapid and taxpayer information is readily available to IRS employees for purposes such as customer service and compliance activities. IRS’s overall redesign of its tax processing system is key to achieving this vision. An important component of the redesign is maximizing the receipt of electronic information to reduce the receipt of paper documents. IRS plans, for example, to expand the electronic receipt of tax returns. However, IRS believes the requirement to process large volumes of paper documents will exist for the foreseeable future. As a result, IRS is designing the Document Processing System to scan paper documents and electronically capture data for subsequent processing and retrieval at workstations. 
This system will require staff using personal computers to correct and add data that the system cannot accurately capture from paper documents. Like its electronic filing system counterparts, the Document Processing System is to capture 100 percent of the numeric data submitted on tax returns, compared to about 40 percent captured from paper returns today. Throughout the modernization, we have reported on critical issues related to the need to build an effective organization structure for managing technology; problems in developing specific TSM systems; reported weaknesses in internal, computer security, and fraud controls; and antiquated systems that were not designed to provide the meaningful and reliable financial information needed to effectively manage and report on IRS's operations. Because of problems such as these, in February 1995, we designated TSM a high-risk systems modernization effort. In general, these major efforts experience cost overruns, are prone to delays, and often fail to meet intended mission objectives. Appendix II is a list of our prior reports and testimonies pertinent to TSM. Our objective was to review the business and technical practices IRS has established to develop, manage, and operate its information systems and, in particular, the TSM initiative. We examined IRS's business strategy for reducing paper tax return submissions, strategic information management processes, software development capability, and systems development accountability and responsibility. To review IRS's business strategy for reducing paper tax return submissions, we interviewed IRS officials who have responsibility for submission processing and electronic filing. We analyzed various task force studies on electronic filing and summaries of issues compiled by an IRS task team charged with promoting electronic filing.
In addition, we examined IRS internal audit reports on the performance and development of systems designed to handle paper returns, reports of problems from the service centers responsible for processing tax returns, and a risk assessment and critical design review of operational and developmental systems. Further, we reviewed project plans and technical charters for paper processing systems, and we discussed systems requirements and performance test results with the contractor developing the Document Processing System. To assess IRS’s strategic information management processes, we interviewed IRS officials who have responsibility for systems development. We also analyzed IRS planning documents, including IRS’s Business Master Plan, Future Concept of Operations, and Integrated Transition Plan and Schedule. We obtained and analyzed IRS documentation and task force studies related to (1) planning and managing information technology, (2) analyzing systems development costs and benefits, (3) reengineering business processes, and (4) training staff in the use of new information technology. In analyzing IRS’s strategic information management practices, we drew heavily from our research on the best practices of private and public sector organizations that have been successful in improving their performance through strategic information management and technology. These fundamental best practices are discussed in our report, Executive Guide: Improving Mission Performance Through Strategic Information Management and Technology (GAO/AIMD-94-115, May 1994), and our Strategic Information Management (SIM) Self-Assessment Toolkit (GAO/Version 1.0, October 28, 1994, exposure draft). To evaluate IRS’s software development capability, we validated IRS’s August 1993 assessment of its software development maturity based on the Capability Maturity Model (CMM) developed in 1984 by the Software Engineering Institute at Carnegie Mellon University. 
CMM establishes standards in key software development processing areas and provides a framework to evaluate a software organization’s capability to consistently and predictably produce high-quality products. We discussed with IRS software development officials IRS’s CMM rating and actions initiated to improve it. We also identified and assessed IRS’s initiatives to improve software development capability in key process areas, including (1) requirements management, (2) project planning, tracking, and oversight, and (3) configuration management. In another key process area, software quality assurance, we examined, in particular, IRS’s use of metrics to control software development projects. To assess IRS’s technical infrastructures, we discussed security and data standards with systems architects and technical specialists. In addition, we obtained and analyzed integrated systems architecture documents; systems development documents for security and data standards; and project plans, quality measurement plans, and technical charters for all TSM projects. To assess accountability and responsibility for developing systems, we identified the IRS organizational components involved in developing and operating information systems. We discussed with IRS’s Modernization Executive, Chief Information Officer, and research and development division officials their respective systems development roles, responsibilities, and accountability. We performed our work at IRS headquarters in Washington, D.C., and at facilities in Cincinnati, Ohio, and Nashville, Tennessee. On April 28, 1995, we briefed the IRS Commissioner and other senior IRS executives on the results of our review and made recommendations to them for overcoming the management and technical problems impeding successful systems modernization efforts. Our work was performed between February 1995 and June 1995, in accordance with generally accepted government auditing standards. 
IRS provided written comments on a draft of this report, which are included as appendix I. IRS is currently drowning in paper—a serious problem IRS can mitigate only through electronic tax filings. But IRS will not achieve the full benefits that electronic filing can provide because it does not have a comprehensive business strategy to reach or exceed its electronic filing goal, which is 80 million electronic filings by 2001. Today, IRS’s estimates and projections for individual and business returns suggest that, by 2001, as few as 39 million returns may be submitted electronically, less than half of IRS’s goal. Maximizing electronic filings is important because tax returns filed electronically do not have to move through IRS’s labor-intensive operations. Paper filings have to be opened, sorted, reviewed, transcribed, shipped and stored, and then physically retrieved if IRS employees later need data on the returns that are not transcribed. IRS recognizes that increasing the number of electronic filings is essential to both improve its tax return processing and advance toward the virtually paperless environment envisioned by IRS under TSM. Creating a paperless environment, though, will involve making significant changes to improve IRS’s information management and will require new processes and new ways of doing business. Private and public sector organizations that have successfully improved their performance have found that to move away from the status quo, an organization must recognize opportunities to change and improve its fundamental business processes. Without well-conceived business strategies to capitalize on opportunities, meaningful change may be slow, the quality of service may not improve, and modernization may be impossible. Consequently, one of IRS’s most pressing modernization issues is the efficient processing of vast quantities of information received on tax returns, which in 1994, amounted to about 205 million returns. 
In 1995, IRS expects total tax returns from individuals and businesses to increase by 2 million, and by 2001, to reach 224 million filings. To help process its avalanche of paperwork more efficiently, in 1990, IRS introduced nationwide electronic filing to selected groups of taxpayers as a means of using modern technology to streamline its business processes. Looking to the future, IRS set a goal to receive 80 million tax filings electronically by 2001. IRS based this goal, which accounts for about 35.7 percent of all tax filings expected in 2001, on a projection of electronic filing of 70 million individual returns and 10 million business returns. In working toward this goal, in 1994, about 16 million tax returns, or 7.8 percent of all returns, were filed electronically, with about 50 percent of these being 1040A forms. In 1995, IRS expects that electronic filings will decrease to about 15 million, or 7.2 percent of all tax returns. On the basis of the current rate of electronic filings for individuals, IRS now estimates that in 2001 only about 29 million electronic returns will be filed by individuals. Combined with the projected 10 million electronic filings from businesses, IRS may receive only 39 million electronic returns in 2001. This is only about 17.4 percent of the 224 million tax returns anticipated in 2001, less than half of IRS's goal. Table 2.1 summarizes IRS's electronic filing activity for 1994 and projections for the future. IRS's current business strategy focuses primarily on promoting faster refunds to clients of businesses that prepare and electronically transmit tax returns. Tax return preparers and transmitters do not pay a fee to IRS for electronic filings, but they charge a fee to taxpayers. Consequently, IRS's business strategy for promoting electronic filing is directed primarily at taxpayers who file using third parties, are willing to pay to file electronically, file simple tax returns, and are due refunds.
IRS has no comprehensive business strategy for promoting the benefits of electronic filing to other taxpayers. In developing such a strategy, IRS should consider all segments of the taxpaying population, including those who (1) are unwilling to pay for tax preparer and transmitter services, (2) owe IRS for balances due, and (3) file complex tax returns. These taxpayers represent considerable potential for making substantially greater use of electronic filing. Moreover, IRS is not taking advantage of opportunities afforded by personal computers to increase electronic filings. In recent years, these computers have become a common fixture in many households. In this regard, when personal computers are used to prepare tax returns, taxpayers who are not willing to pay commercial transmitting fees must print their electronically produced returns on paper and mail them to IRS to be manually processed. This results in the redundant, counterproductive conversion of the same data by both taxpayers and IRS: taxpayers convert electronic data to paper returns, and IRS then laboriously converts information on the paper returns back to electronic data. Unless IRS attracts all potential electronic filers, it will never achieve its vision of virtually paperless processing and will be forced to process increasingly large workloads of paper tax returns. Further, IRS's paper processing systems are not planned to accommodate the large volume of paper returns that will result if taxpayers file fewer returns electronically. For example, IRS is designing the Document Processing System for use at five service centers based on the assumption that, by 2001, at least 61 million of the 224 million returns will be filed electronically, that is, 163 million paper returns will be processed through the Document Processing System.
As table 2.1 shows, by 2001, since only 39 million tax returns may be filed electronically, taxpayers could submit 185 million paper returns, or about 22 million more returns than IRS is designing the Document Processing System to process. Thus, IRS's most recent estimates on individual filings for 2001 indicate that IRS may fall far short of its electronic filing goal, which will result in an increasing struggle to process paper filings. To better achieve its virtually paperless processing environment, we recommend that IRS refocus its electronic filing business strategy to target, through aggressive marketing and education, those sectors of the taxpaying population that can file electronically most cost beneficially. IRS agreed with our recommendation regarding its electronic filing strategy. IRS said it has convened a working group, chaired by the electronic filing executive, to develop a detailed, comprehensive strategy to broaden public access to electronic filing, while also providing more incentives for practitioners and the public to file electronically. IRS said the strategy will include approaches for taxpayers who are unwilling to pay for tax preparer and transmitter services, who owe IRS for balances due, and/or who file complex tax returns. IRS said further that the strategy will also address that segment of the taxpaying population that would prefer to file from home by personal computer. We believe that, by developing a more comprehensive electronic filing strategy, IRS will help to maximize the benefits possible through greater use by taxpayers of electronic filing. These benefits are central to more efficiently processing the vast quantities of information IRS receives on tax returns and, thus, to achieving the virtually paperless tax processing environment IRS hopes to attain through modernization. IRS is not yet effectively using a strategic information management process to plan, build, and operate its information systems.
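The shortfall arithmetic in this chapter can be checked directly. The snippet below simply reproduces the report's own figures; it introduces no new data.

```python
# Reproducing the electronic-filing shortfall arithmetic from the report.
total_returns_2001   = 224_000_000  # projected total filings in 2001
goal_electronic      = 80_000_000   # IRS electronic filing goal for 2001
projected_electronic = 39_000_000   # 29M individual + 10M business
dps_paper_capacity   = 163_000_000  # Document Processing System design basis

paper_returns = total_returns_2001 - projected_electronic
excess_paper  = paper_returns - dps_paper_capacity

print(f"electronic share: {projected_electronic / total_returns_2001:.1%}")
print(f"paper returns:    {paper_returns:,}")
print(f"capacity excess:  {excess_paper:,}")
print(f"share of goal:    {projected_electronic / goal_electronic:.0%}")
```

The arithmetic confirms the report's figures: a 17.4 percent electronic share, 185 million paper returns, and 22 million returns beyond the Document Processing System's planned capacity.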
TSM has been underway for almost a decade and will require years of further development effort and substantial human and financial resources. IRS, however, does not yet have in place an effective process for selecting, prioritizing, controlling, and evaluating the progress and performance of major information systems investments. A sound strategic information management process involves several fundamental practices: (1) applying strategic planning, (2) managing information technology as investments, (3) analyzing costs and benefits and measuring performance, (4) using business process analysis, and (5) upgrading skills and training. This process focuses on results and emphasizes simplifying and redesigning complex mission processes, which is essential to meeting mission goals and satisfying customers’ needs. IRS recognizes the importance to TSM’s success of implementing a sound strategic information management process and has assessed its strategic information management using GAO’s strategic information management self-assessment toolkit. IRS’s self-assessment identified improvements for better managing information systems. We too found serious shortcomings that underscore the urgency of IRS bolstering the strategic information management process it has begun. We also identified IRS efforts to upgrade skills and training. Although IRS has developed several types of plans for carrying out its current and future operations, these plans are neither complete nor consistent. Moreover, IRS’s various planning documents are not linked to each other or to TSM budget requests. Even though TSM has been underway for 10 years, complete, clear, and concise planning for TSM and its multibillion dollar investment is not evident. As a result, it is difficult for IRS to identify and effectively focus on completing priority aspects of TSM. 
Public and private sector organizations that have been successful in developing major systems have found that, once an organization has made a serious commitment to change its management of information and technology, it is paramount to adopt a strategic planning approach. Their experience is that strategic business and information system plans must have a tight link to mission goals and must be predicated on satisfying explicit, high-priority customer needs. This orientation helps to ensure that information technology projects are delivered on time and within budget and that they produce meaningful improvements in cost, quality, or timeliness of service. We identified several different efforts by IRS to prepare plans to delineate a vision for the future and actions required to realize that vision. These planning documents include the Business Master Plan, which reflects the business priorities set by IRS’s top executives and links IRS’s strategic objectives and business vision with the tactical actions needed to implement them; the IRS Future Concept of Operations, which articulates IRS’s future business vision so that the Congress, IRS employees, and the public can see and better understand IRS’s plans for serving the public; and the Integrated Transition Plan and Schedule, which provides a top-level view of the modernization program’s tasks, activities, and schedules and is the primary tool used for accountability for delivering the products and services necessary to implement modernization. We found, however, that these documents are incomplete and inconsistent. For example, as of May 1995, 4 volumes of the 10-volume IRS Future Concept of Operations had not been completed. These volumes covered (1) national and regional offices, (2) workload distribution management, (3) area distribution centers, and (4) process flows.
While the six completed volumes include critical areas, the incomplete documents are necessary for a comprehensive vision of IRS’s future operations. Also, of the 27 action items identified in the Business Master Plan that relate to information systems, 15 could not be identified in the Integrated Transition Plan and Schedule. Further, the Business Master Plan’s actions and performance measures have not been changed to reflect recent electronic filing trends, which indicate that IRS will fall far short of its electronic filing goal. We found other indications of weak planning processes as well. Specifically, IRS did not have a fully integrated planning and budgeting process for TSM, although the Office of Economic Analysis is moving in that direction. For example, this office is developing a new TSM cost model for IRS. Steps such as this are positive because a strong tie between TSM plans and IRS budgets will be especially important to ensure that information is available to IRS managers and the Congress to show TSM’s future funding needs and the results of past investments. While IRS has undertaken fundamental TSM planning, stronger overall strategic planning for TSM is still needed. This would involve (1) defining the information technology capabilities required to support reengineered business processes, (2) identifying, assessing, and mitigating the risks involved in developing both TSM as a whole and individual component projects, (3) formulating schedules and milestones for development, and (4) allocating needed resources. Currently, IRS does not have a process to manage TSM information systems projects as investments, even though IRS expects the government’s past and future investment in TSM to exceed $8 billion. Foremost, at the time of our review, IRS lacked comprehensive decision criteria for controlling and evaluating TSM projects throughout their life cycles.
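A cross-plan traceability check of this kind lends itself to simple automation. The sketch below is hypothetical: the action-item identifiers are invented, and it merely illustrates computing which Business Master Plan items lack counterparts in the Integrated Transition Plan and Schedule:

```python
# Hypothetical sketch: find Business Master Plan (BMP) information-systems
# action items that cannot be traced to the Integrated Transition Plan and
# Schedule (ITPS). All identifiers here are invented for illustration.

bmp_actions = {f"IS-{n:02d}" for n in range(1, 28)}   # 27 IS action items in the BMP
itps_actions = {f"IS-{n:02d}" for n in range(1, 13)}  # only 12 appear in the ITPS

# Set difference yields the untraceable items.
untraceable = sorted(bmp_actions - itps_actions)
print(f"{len(untraceable)} of {len(bmp_actions)} BMP action items are untraceable")
```

Running a check like this against each plan revision would surface traceability gaps, such as the 15 of 27 items noted above, as soon as they appear rather than during an audit.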
When organizations use strategic information management best practices, they manage information systems projects as investments, rather than expenses. These organizations view projects as efforts to improve mission performance, not as efforts to implement information technology. For public and private sector organizations that have been successful in developing major systems, the basis for making decisions on information technology investments has been an explicit set of criteria that are used to evaluate the expected mission benefits, potential risks, and estimated cost of each project. This investment focus systematically reduces inherent risks while maximizing benefits of complex projects. IRS maintains that all TSM projects have equal priority and must be completed or the modernization will fail. An “all-or-nothing” approach to large information technology projects is usually unrealistic and generally unattainable. Instead, a reasoned and explicit framework for managing information technology investments is essential. IRS currently holds program control meetings to assess and control information technology. However, these meetings have generally focused on the costs and implementation schedules of individual projects, rather than on comprehensively evaluating and prioritizing risks and returns expected from these investments. Instead of using explicit criteria to measure risks and returns, IRS evaluates each project’s progress using a timeline. At the completion of our review, IRS had developed draft criteria for TSM projects. These criteria included risk and return factors (e.g., cost, project size, and mission benefit), which it plans to use for the first time during top management’s review of the fiscal year 1997 budget. However, these factors were not defined so they could be used consistently to assess projects. For instance, IRS characterized project size as small, medium, large, and very large, but did not quantify these terms.
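Quantifying such terms is straightforward once explicit thresholds are chosen. The sketch below is a hypothetical illustration: the dollar bands, 1-to-5 ratings, and scoring rule are invented for this example and are not IRS’s draft criteria:

```python
# Hypothetical sketch of explicit, quantified decision criteria.
# The dollar bands, ratings scale, and scoring rule are invented;
# they illustrate the form such criteria could take, nothing more.

def size_category(cost_millions: float) -> str:
    """Classify project size by estimated cost, using explicit dollar bands."""
    if cost_millions < 5:
        return "small"
    if cost_millions < 25:
        return "medium"
    if cost_millions < 100:
        return "large"
    return "very large"

def investment_score(mission_benefit: int, risk: int) -> float:
    """Higher benefit and lower risk yield a higher score (both rated 1-5)."""
    return mission_benefit / risk

project = {"name": "Example Project", "cost_millions": 40,
           "mission_benefit": 4, "risk": 2}
print(size_category(project["cost_millions"]),
      investment_score(project["mission_benefit"], project["risk"]))
```

With thresholds stated this explicitly, two reviewers applying the criteria to the same project necessarily reach the same size category and score, which is exactly the consistency the draft criteria lacked.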
Similarly, IRS has not yet defined decision criteria and quantifiable measures to assess mission benefits, risk, and cost, all of which are important to enable IRS managers to adequately select, control, and evaluate information systems projects. IRS is currently developing better decision criteria. Managing TSM as an investment would require IRS to assess, prioritize, control, and evaluate its investment in current and planned TSM information technology projects based on explicit and consistently applied decision criteria. By adopting this approach, top management’s attention would be drawn to assessing and managing risk and making the tradeoffs between continued funding of existing operations and developing new capabilities. Most important, with a disciplined process, IRS could promptly identify, and thus avoid investing in, higher-risk projects that have little potential to provide significant mission benefits. Moreover, this would reinforce accountability for improved performance. Contrary to best practices used by leading private and public organizations, IRS’s analysis of TSM costs and benefits is inadequate. As a result, IRS and the Congress do not know whether TSM information systems projects will really make a difference. Until an adequate analysis is performed and measures are defined, IRS will not know whether investments in TSM are worthwhile. In January 1995, IRS advised the House Budget Committee that, including operating costs for the next 10 years, TSM will cost about $13 billion and will provide over $32 billion in benefits. However, IRS’s overall cost projection is unreliable for several reasons. For example, IRS based the projection on an October 1992 TSM cost model, which IRS did not adequately update to reflect systems that have since been added to TSM, IRS’s more recent business visions, and changes in TSM systems development methods. The benefits estimate also had shortcomings.
For instance, in some cases, IRS attributed to TSM the savings associated with reducing staff resources; in other cases, IRS computed benefits based on additional revenues expected if staff were reassigned to tax collection. Although a decision to use these staff for collections may increase revenue, the additional staff—not the system—will provide this benefit. This point becomes clear when the following scenarios are considered: (1) IRS could assign additional staff to collections independent of the information system, and (2) if IRS reassigns to other nonrevenue-producing activities the staff years saved, the revenue benefits would evaporate even though the information system would not change. A convincing benefits analysis for a system must compare operational costs with and without the system, other variables being held constant. IRS recognizes that it has not adequately assessed TSM costs and benefits and is currently working with a contractor on an economic analysis to better reflect the cost and benefits of TSM. IRS expects another cost and benefit analysis to be completed by September 1995. We will continue to monitor IRS’s progress in analyzing TSM cost and benefits. After many automated systems design and development efforts were already underway, IRS started business process reengineering, which involves critically reexamining core business processes and redesigning them to achieve significantly better performance. Compounding this problem is IRS’s lack of a comprehensive plan and schedule defining how and when to integrate these business reengineering efforts with on-going TSM projects. Organizations that successfully develop systems do so only after analyzing and redesigning critical business processes. Information systems projects that do not consider business process redesign typically fail or reach only a fraction of their potential.
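The with-and-without comparison described above can be expressed compactly. In the sketch below, all dollar figures and the function name are invented; the point is that staffing is held constant in both scenarios, so revenue from reassigned staff is properly excluded from the system’s benefit:

```python
# Minimal sketch of a with/without benefits comparison. All figures are
# invented. Staffing is held constant in both scenarios, so revenue that
# would come from reassigning staff is correctly excluded from the
# system's attributable benefit.

def system_benefit(annual_cost_without: float, annual_cost_with: float,
                   years: int) -> float:
    """Benefit = operating cost avoided by the system, other variables fixed."""
    return (annual_cost_without - annual_cost_with) * years

# Example: the system lowers annual operating cost from $500M to $420M.
print(system_benefit(500e6, 420e6, years=10))
```

Any benefit that survives only when staff are redeployed, rather than when the system is switched on with everything else held constant, belongs to the staffing decision and not to the system.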
Accomplishing significant improvement in performance nearly always requires streamlining or redesigning critical work processes. IRS has identified six core business areas and defined 11 business processes that support these areas. Of these 11, 3 were selected to begin reengineering efforts. Those selected for initial redesign are (1) processing returns, (2) responding to taxpayers, and (3) enforcement actions. Overall, we found IRS’s reengineering methods to be consistent with generally recognized business process reengineering principles. IRS had, for example, assessed some existing data on customer values, analyzed current processes, and designed target processes and plans to validate the target designs. Further, IRS has a project management structure consisting of process owners, an executive steering committee, project managers, cross-functional teams, and contractor support to ensure that all stakeholders can participate. However, these efforts are not yet complete, and IRS did not assess the actual steps needed to implement these efforts. IRS officials acknowledge that reengineering efforts began after the start of many TSM systems projects. Until reengineering is sufficiently completed to drive TSM projects, there is no assurance that the projects will achieve the desired business objectives and result in improved operations. IRS is currently reassessing its skill and competency base to ensure that its personnel and training programs will meet future needs. Operating and maintaining progressively sophisticated systems, such as those comprising TSM, requires continuously higher skill levels and updated knowledge—an additional critical factor for success, according to best-practice organizations. Antiquated skill bases can inhibit an organization’s ability to change. IRS has several initiatives planned and underway to upgrade the skills of its personnel. 
For example, IRS has defined positions needing competency assessments; plans to assess staff skills using competency assessment instruments, which are currently being developed; and is reorganizing and strengthening its training program by establishing a Corporate Education unit. We are currently assessing IRS’s human resource planning for modernization and will continue to monitor progress in this area. To address IRS’s strategic information management weaknesses, we recommend that the IRS Commissioner take immediate action to implement a complete process for selecting, prioritizing, controlling, and evaluating the progress and performance of all major information systems investments, both new and ongoing, including explicit decision criteria. We also recommend that IRS use these criteria to review all planned and ongoing systems investments by June 30, 1995. Meeting this time frame is important so that the Congress has a sound basis for determining IRS’s fiscal year 1996 appropriations. IRS agreed with our recommendations to improve its strategic information management. In addition, IRS said that it had recently completed a self-assessment of its practices compared to GAO’s best practices for strategic information management. According to IRS, its self-assessment confirmed GAO’s findings and will help strengthen IRS’s overall response to GAO’s concerns. 
In response to our recommendations, IRS said that it

will continue to work on simplifying and ensuring the consistency of all its plans;

has initiated a priority-setting process for meeting business needs;

has developed an initial set of investment evaluation criteria for use as part of an ongoing process to evaluate spending plans for information systems;

has completed a comprehensive review of the proposed fiscal year 1996 budget for TSM, which will enable IRS to rescope its program objectives, set priorities, and adjust funding levels for TSM;

will continue to refine the investment evaluation criteria and also institutionalize a formal process based on the use of these criteria; and

is developing and implementing an information technology investment approach to select, prioritize, control, and evaluate information technology investments to achieve reengineered program missions.

Actions such as these could provide IRS the underpinnings it needs for strategic information management. IRS indicated that progress toward implementing these improvements will be monitored by IRS’s Associate Commissioner. We believe that this is essential to ensure prompt and effective implementation. Regarding a cost and benefits analysis, IRS said that the September 1995 analysis will address the costs and benefits of TSM and allow IRS to identify and focus on competing priorities. In particular, IRS expects the new analysis to reflect a much more extensive benefit estimate than IRS currently has available. We believe an adequate cost and benefits analysis will help IRS to know whether investments in TSM are worthwhile. Regarding skills and training, IRS said that it is taking steps to ensure that personnel and training programs meet future needs, especially those relating to information systems.
These steps include (1) establishing a training steering committee to consolidate all information systems training currently underway, with the goal of increasing the skill level of IRS employees and (2) identifying job requirements for information systems professionals, which IRS will use in developing training and education programs that are directly linked to mission needs and critical occupational performance goals. Although IRS is in the process of identifying job requirements, we believe that, until reengineering is complete, IRS can only incorporate prototype job requirements into its training and development efforts. In addition, IRS’s current plans do not address how job requirements created as a result of reengineering efforts will be incorporated into its training environment. IRS’s software development activities are inconsistent and poorly controlled because IRS has few detailed procedures for its engineers to follow in developing software. IRS’s software development deficiencies can greatly affect the quality, timeliness, and cost-effectiveness of TSM. Unless IRS improves its software development capability, it is unlikely to build TSM on schedule or economically, and its systems are unlikely to perform as intended. To assess its software capability, in September 1993, IRS rated itself against the Capability Maturity Model (CMM) designed by the Software Engineering Institute, a nationally recognized authority in the area. IRS found that, even though TSM is a world-class undertaking, its software development capability is immature. IRS placed its software development capability at the lowest level, described as ad hoc and sometimes chaotic, indicating significant weaknesses in software development capability. Realizing that its software development capability needed improvement, IRS initiated process action teams to address software development weaknesses in key process areas.
These teams have made varying degrees of progress to improve IRS’s software development capability and define uniform procedures in the key process areas. Their progress notwithstanding, substantial additional improvement is necessary before IRS’s software development capability can be upgraded to at least the next CMM level, where its activities would be more disciplined and considered to be repeatable. Whether software development is done by IRS, which has nearly 2,000 people working in the area, or by contractors, mature software development capabilities are key to quality, timely, and cost-effective TSM software development. Closely associated with one key software development process area, software quality assurance, is the use of software metrics, which are numerical measures used to predict an aspect of software quality. In this regard, we found that IRS has not adequately defined a suite of metrics. Moreover, IRS is not consistently or effectively using even its limited metrics for assessing the quality of software development projects throughout their life cycles. The Software Engineering Institute was established at Carnegie Mellon University in 1984 primarily to address the Defense Department’s software development problems. In 1991, the Institute developed CMM for use by organizations to evaluate their capability to consistently and predictably produce high-quality software. Table 4.1 describes CMM’s five maturity levels. IRS rated itself at CMM level 1 because its assessment showed significant weaknesses in all key process areas prescribed for an organization to attain a level 2 capability. The key process areas designated by the Institute as necessary to reach CMM level 2 include (1) requirements management, (2) software project planning, (3) software project tracking and oversight, (4) software quality assurance, and (5) software configuration management. 
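The relationship between these key process areas and a level 2 rating can be sketched as a simple check. The sketch below is a toy simplification, not the Software Engineering Institute’s actual assessment method; only the five key process area names come from the report:

```python
# Simplified sketch of a CMM level-2 readiness check. The five key process
# areas (KPAs) are those named above; the pass/fail logic is a toy
# simplification of the Software Engineering Institute's actual method.

LEVEL_2_KPAS = [
    "requirements management",
    "software project planning",
    "software project tracking and oversight",
    "software quality assurance",
    "software configuration management",
]

def maturity_level(satisfied_kpas: set) -> int:
    """Level 2 ('repeatable') requires every level-2 KPA; otherwise level 1."""
    return 2 if all(kpa in satisfied_kpas for kpa in LEVEL_2_KPAS) else 1

# An organization weak in even one area remains at level 1 (ad hoc).
print(maturity_level({"requirements management", "software project planning"}))
```

The all-or-nothing character of the check reflects why IRS’s assessment placed it at level 1: weakness in any one of the five areas is enough to keep an organization at the lowest level.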
Further, the National Research Council identified IRS’s software development weaknesses and, in its Fall 1994 report on TSM, stated that IRS needed to develop a mature software development organization. The Council reported that, compared to accepted modern standards, IRS’s internal development capability is largely out of date and rudimentary. To improve its software development capability and attain a higher CMM rating, the IRS Information Systems Organization’s Applications Design and Development Management group initiated five process action teams to address the weaknesses identified by IRS’s assessment and the National Research Council’s review. Table 4.2 identifies the teams and describes the key process areas each was to address. The following discussion highlights the work of these teams, which we found in various stages of completion. Although the teams have generally made progress, IRS’s software development capabilities remain weak in each of the key process areas they were to address. The Requirements Management team (1) studied and flowcharted the process for requesting information services and (2) generated and is delivering related training materials. However, the requirements management process developed by the team is currently being applied only to legacy systems (i.e., existing IRS systems). An equivalent requirements management process for TSM systems was still under development. Also, customer involvement with the team’s requirements management process has been limited. The Software Quality Assurance team adopted the peer review portion of a planning, review, and inspection process developed by IRS’s Quality Assurance Group. The team is applying this process to selected projects and has developed training for using the process, which IRS is giving to its systems engineers. However, IRS has not yet decided whether to apply the team’s peer review approach to all projects.
Also, IRS has yet to define detailed procedures for performing other software quality assurance functions, such as (1) ensuring compliance of software products and processes with defined standards, (2) independent verification of product quality, (3) periodic audits and reviews by the Software Quality Assurance group, and (4) feedback of the software quality assurance activities and findings to facilitate improvement of the process. The Project Planning and Tracking team selected a software tool for planning and tracking the progress of software development projects. Because the team did not prepare guidelines specifying the minimum planning and tracking elements to apply to projects, project managers who use the software must define the details to track. As a result, this tool is being inconsistently used and, thus, IRS has been unable to consistently track the progress of its projects. The Testing team has issued guidance on unit testing. However, there are no procedures for systems and acceptance testing. The Configuration Management team is waiting for corporate-level configuration management to be defined before defining lower-level processes and procedures. The only configuration management in place is version control of software. As a result, important items are not yet under configuration management, including documentation and software development folders. Although the teams have made progress, their accomplishments have not significantly improved IRS’s software development capability. Foremost, IRS has not developed and implemented consistent guidelines and procedures in the key process areas essential at CMM level 2. Unless IRS’s weaknesses in software quality assurance and software configuration management are corrected, IRS faces a much greater risk of extensive rework, schedule slippage, and cost overruns in developing software. This risk is present whether IRS or a contractor develops TSM software.
In this regard, to effectively oversee a contractor’s work to develop software, and thereby help to ensure prompt and successful completion of the software, it is important for IRS’s software project managers to understand the practices needed to develop software at CMM level 2. To further mitigate the risk of potential problems in developing software under contracts, it is critical that IRS’s software development contractors not be at CMM level 1. IRS does not, however, require all of its software development contractors to be at least at CMM level 2. Although not a specific key process area for rating an organization’s software capabilities, it is nonetheless crucial that a set of quality indicators, and their associated measures, called metrics, be used to assess the quality of software development throughout a project. IRS has not yet effectively established such a measurement process. Early detection and avoidance of problems and control of software development projects are possible through the collection, validation, and analysis of metrics, which are numerical measures presumed to predict an aspect of software quality. Useful metrics include numbers of defects found at various stages of development, costs to repair defects, and the extent of test coverage. Metrics, such as the number and frequency of errors associated with a specific section of software, are collected and analyzed to gauge software quality. Such analyses can identify situations where quality is unacceptable or questionable. In this way, the metrics are validated against quality factors throughout a software development project. According to IRS officials responsible for software development, IRS has not yet defined a complete suite of metrics to be used in the software development program to assess the on-going quality of TSM projects. IRS’s present use of metrics allows for only one type of metric, collectively called function points.
Even so, IRS’s use of function points for assessing all software development projects is inconsistent, and IRS does not have a firm schedule for full implementation throughout the agency. In addition to function points, the following metrics would, at a minimum, also be necessary: (1) complexity, (2) personnel and effort, (3) problems/defects by development phase, and (4) cost per defect. Further, IRS’s use of function points does not trace back to quality improvement goals derived from IRS’s business objectives. In this regard, IRS could use the following metrics to measure software attributes related to business goals:

Fewer product defects found by customers.

Earlier identification and correction of defects.

Fewer defects introduced during development.

Faster time to market.

Better predictability of project schedules and resources.

Without clearly establishing a suite of metrics that trace back to business objectives through quality improvement goals, and that are implemented organizationwide in a uniform and consistent manner, IRS will be hampered in assessing the progress and quality of its software projects. Moreover, the absence of a suite of metrics makes it difficult for IRS to identify the reasons certain software development practices perform well while others perform poorly. Metrics, therefore, when used organizationwide in developing software, would provide IRS a means to better ensure uniform software development, thus avoiding the potential for repeating problems that could be costly and time-consuming to correct. To address IRS’s software development weaknesses, we recommend that the IRS Commissioner immediately require that all future contractors who develop software for the agency have a software development capability rating of at least CMM level 2.
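A minimal version of such a metrics suite, covering defects by development phase and cost per defect, might look like the following. All data here are invented for illustration:

```python
# Hypothetical sketch of part of the metrics suite described above:
# defects found by development phase and cost per defect. All data
# are invented for illustration.

defects_by_phase = {"design": 12, "coding": 30, "testing": 18, "operation": 4}
repair_cost_by_phase = {"design": 1_200, "coding": 6_000,          # total repair
                        "testing": 9_000, "operation": 8_000}      # cost ($) per phase

total_defects = sum(defects_by_phase.values())
cost_per_defect = {phase: repair_cost_by_phase[phase] / defects_by_phase[phase]
                   for phase in defects_by_phase}

# Defects caught late cost far more to repair, so earlier detection pays.
print(total_defects, cost_per_defect["design"], cost_per_defect["operation"])
```

Even this toy data set shows why the suite matters: with cost per defect tracked by phase, the payoff from earlier identification and correction of defects becomes visible and measurable rather than anecdotal.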
To further upgrade IRS’s software development capability, we also recommend that the Commissioner take action before December 31, 1995, to

define, implement, and enforce a consistent set of procedures for all TSM projects covering requirements management (going beyond IRS’s current request for information services process), software quality assurance, software configuration management, and project planning and tracking; and

define and implement a set of software development metrics to measure software attributes related to business goals, such as those outlined in this chapter.

Completing these actions by the end of 1995 is essential so that the Congress, in monitoring TSM’s progress and acting on TSM budget requests, has assurance that IRS will be able to effectively develop, and/or oversee contractors’ development of, software associated with systems modernization projects. IRS agreed with our recommendations for improving its software development capability, and is taking steps to do so. IRS said that it is committed to developing consistent procedures addressing requirements management, software quality assurance, software configuration management, and project planning and tracking. Regarding metrics, IRS said that it is developing a comprehensive measurement plan to link process outputs to external requirements, corporate goals, and recognized industry standards. IRS said also that it has “baselined” all legacy systems using an accepted Software Engineering Institute metric. We believe these steps, if implemented and institutionalized effectively, would provide IRS the disciplined approach necessary to improve its software development capability. Mature software development capabilities are key to quality, timely, and cost-effective TSM software development. IRS also stated its belief that most government agencies and private organizations are not far along in raising their software development maturity profiles.
We have identified several government organizations that have adopted CMM and are moving toward higher CMM levels. For example, the Department of the Army’s Information Systems Software Development Center in Virginia and the Department of the Air Force’s Sacramento Air Logistics Center were both assessed by SEI-authorized assessors as CMM level 3. The Air Force also has a deadline for all its software activities to reach CMM level 3 by 1998. The software development capabilities of other organizations notwithstanding, we believe that a complex and costly systems development project, such as TSM, at a minimum, would warrant a CMM level 2 capability. IRS is not adequately performing and managing key TSM technical activities critical to the success of a large and complex systems modernization effort. In particular, IRS has not (1) defined and completed a TSM architecture, (2) established effective processes for configuration management, (3) defined the interfaces and standards needed to ensure that TSM components successfully integrate and interoperate, and (4) defined and completed TSM testing plans and established a testing facility. IRS recognizes that, for modernization to succeed, TSM’s technical activities must be better defined, performed, and managed. Until IRS improves these areas, it is at increased risk of developing systems that are unreliable, do not meet user needs, cannot work together effectively, and require significant and costly redesign and reprogramming to correct weaknesses. IRS has adopted Information Engineering, a formal, structured systems development methodology widely used in the public and private sectors to provide a disciplined approach to information systems development. The principal deliverable of Information Engineering’s first stage, Information Strategic Planning, is an integrated systems architecture.
An integrated systems architecture (1) guides and constrains system design and development by providing a balanced, top-down view of the system, which system designers need to build the system, and (2) organizes system functionality and defines relationships among those functions. In establishing this guidance and functionality, it is key to define security and data architectures and standard application program interfaces. In July 1993, IRS published an initial version of its integrated systems architecture. According to this document, the TSM integrated systems architecture will be completed as other modernization work progresses. This approach defeats the purpose of an integrated systems architecture, which is to guide a system’s development, not merely to document its development without formal guidance. Further, TSM security and data architectures and standard application program interfaces are incomplete and, thus, designers and developers do not have sufficient guidance to build individual TSM systems. Because TSM’s security architecture is incomplete, systems designers do not have sufficient guidance on how to incorporate restricted access to IRS systems and data. IRS has made progress in defining its security requirements, but it continues to develop and implement systems without first completing the necessary security architecture and security applications. In February 1994, IRS issued a risk assessment that identified potential security risks, determined their severity, and identified areas needing safeguards, and in October 1994, issued an information security policy.
Since then, IRS has completed security documents relating to

high-level security requirements, including mission, management, and technical security requirements;

functional security requirements, which specify user security needs;

a preliminary data sensitivity analysis, which is used to determine data sensitivity (e.g., sensitive but unclassified); and

a draft information system target security architecture, which specifies TSM information security goals.

In addition, an IRS infrastructure and engineering task group has defined a set of preliminary security applications program interfaces that will guide application developers in requesting systems security functions. IRS officials told us that once these interfaces have been completed and thoroughly tested, IRS will mandate their use. This progress notwithstanding, the TSM security architecture and security applications interfaces remain incomplete and unavailable to systems designers and developers. Without this crucial systems security guidance, IRS has no assurance that taxpayer data will be adequately protected.
Key security guidance that has not yet been developed includes
- a disaster recovery and contingency plan, which would ensure that information systems can restore operations and data in the case of sabotage, natural disaster, or other operational disruption;
- a security concept of operations, which would define IRS plans for operating in TSM’s new security environment;
- a security test and evaluation plan, which would validate the operational effectiveness of system security controls;
- a security certification and accreditation plan, which would provide IRS managers and system security officers adequate assurance that the system will protect information as required by the security policy;
- a communications security plan, which would define how security controls will be implemented when sending and receiving sensitive information electronically between and among distributed TSM subsystems and external agencies that must provide tax-related information to IRS; and
- an identification and authentication plan, which would define processes to verify user identities when accessing sensitive tax data.

Security has been a serious problem with IRS’s current systems. Our audits of IRS’s financial statements under the Chief Financial Officers Act (Public Law 101-576) have shown that IRS’s controls do not yet ensure that taxpayer data are adequately protected from unauthorized access, change, disclosure, or disaster. Specifically, IRS has not adequately (1) restricted access to taxpayer data to only those employees who need it, (2) monitored the activities of thousands of employees who were authorized to read and change taxpayer files, and (3) limited the use of computer programs to only those that have been authorized. We have reported that, as a result, IRS did not have reasonable assurance that the confidentiality and accuracy of these data were protected and that the data were not manipulated for purposes of personal gain.
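The first two control weaknesses, restricting access to employees who need it and monitoring the activities of authorized users, can be illustrated with a minimal need-to-know sketch. Every name and structure here is invented for illustration and is not drawn from IRS systems:

```python
# Hypothetical sketch: deny access to accounts outside an employee's
# assigned caseload, and log every attempt (allowed or denied) so
# managers can review who read which records.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AccessLog:
    entries: list = field(default_factory=list)

    def record(self, employee_id: str, taxpayer_id: str, action: str):
        self.entries.append(
            (datetime.now(timezone.utc), employee_id, taxpayer_id, action))

class TaxpayerStore:
    def __init__(self, assignments, log):
        # assignments: employee_id -> set of taxpayer_ids assigned to them
        self.assignments = assignments
        self.log = log
        self.records = {}

    def read(self, employee_id, taxpayer_id):
        # Enforce need-to-know and leave an audit trail either way.
        allowed = taxpayer_id in self.assignments.get(employee_id, set())
        self.log.record(employee_id, taxpayer_id,
                        "read" if allowed else "denied")
        if not allowed:
            raise PermissionError(
                f"{employee_id} not authorized for {taxpayer_id}")
        return self.records.get(taxpayer_id)
```

Under controls of this kind, an employee browsing an account unrelated to his or her work would be refused, and the attempt itself would be recorded for later review.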
IRS’s own reviews have identified instances where IRS employees (1) manipulated taxpayer records to generate unauthorized refunds, (2) accessed taxpayer records to monitor the processing of fraudulent returns, and (3) browsed taxpayer accounts that were unrelated to their work, including those of friends, relatives, and neighbors.

IRS is perpetuating its current data weaknesses by continuing to build TSM systems without the guidance afforded by a data architecture that reflects reengineered processes. An IRS analysis of its current systems identified the following data weaknesses:
- Updated data on one system are not immediately available to users of other systems. Master data files are updated once a week, and it can take up to 2 weeks for data in a taxpayer account to be changed.
- Inconsistent and incomplete data on different systems can affect fundamental computations and can result, for example, in inconsistent calculations of interest and penalties.
- Data are stored in unique formats on different systems and are accessed using various techniques.

In 1994, to address these data weaknesses, IRS initiated the Corporate Accounts Processing System project. IRS is developing this project in phases over 7 years, with each phase adding new TSM functionality. Through the Corporate Accounts Processing System, IRS expects to provide more efficient access to data, reduce data redundancy, and improve data integrity. Nonetheless, the success of the Corporate Accounts Processing System project depends on improving current business processes through reengineering. At the time of our review, however, the project was modeling IRS’s existing business processes because IRS had not completed its reengineering. To effectively correct the data weaknesses that IRS identified and that the Corporate Accounts Processing System project is to address, IRS must first define how its business processes will be reengineered.
Only then will IRS be in a strong position to build new systems based on a data architecture that reflects reengineered business processes.

Standard application program interfaces are essential to guide systems development because they define how applications software can access and use standard functions and services (e.g., communications services). These interfaces provide many systems development benefits, including improved interoperability, consistent implementation, less complex applications, standardized software coding, and simplified maintenance. Recognizing these benefits, IRS has established an interface task group and initiated an effort to define, code, test, and document standard application program interfaces for TSM. IRS has drafted an infrastructure services manual to explain the infrastructure services that will be available to systems developers. IRS also expects to prepare a more comprehensive and detailed manual describing application processing interfaces. However, many TSM standard application program interfaces are not yet defined, implemented, or documented. Nonetheless, IRS is continuing to build TSM projects. As a result, these projects are likely to require modification once standard application program interfaces are defined and required.

Systems change throughout their life cycle to (1) improve systems designs and operations and facilitate maintenance, (2) reflect changing mission requirements, and (3) respond to changes in budgets and schedules. These changes must be controlled through configuration management to ensure that they are cost-effective and properly implemented, documented, and tested. Configuration management ensures that the integrity and stability of a system are preserved as it is designed, built, operated, and changed.
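Change control of the kind configuration management requires, in which every change is proposed, reviewed, implemented, and tested in a defined sequence, can be sketched as a simple state machine. The states, fields, and roles below are assumptions for illustration only, not a defined IRS process:

```python
# Hypothetical sketch of a change-control workflow: each change request
# moves through defined states, no step can be skipped, and every
# transition records who authorized it.
ALLOWED = {
    "proposed":    {"approved", "rejected"},
    "approved":    {"implemented"},
    "implemented": {"tested"},
    "tested":      {"closed"},
}

class ChangeRequest:
    def __init__(self, item, description):
        self.item = item                # configuration item affected
        self.description = description
        self.state = "proposed"
        self.history = [("proposed", None)]

    def transition(self, new_state, authorized_by):
        # Reject undefined transitions so a change cannot bypass
        # review, implementation, or testing.
        if new_state not in ALLOWED.get(self.state, set()):
            raise ValueError(f"cannot go from {self.state} to {new_state}")
        self.state = new_state
        self.history.append((new_state, authorized_by))
```

A configuration management plan would still have to define who may authorize each transition and how baselines and documentation are updated; the sketch shows only the controlled sequencing itself.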
Configuration management is also important for making engineering and trade-off decisions, maintaining up-to-date systems descriptions, and tracking every system component. In 1994, IRS established an Information System Configuration Control Board to manage and control all systems changes. However, the Board has focused on monitoring individual project costs and schedules and developing configuration management guidance; a process has not yet been established to manage systems changes. Further, IRS does not have a configuration management plan that precisely defines the processes to be implemented, how and when they will be implemented, and who will be responsible for performing specific configuration management functions.

In 1992, IRS initiated an effort to design and develop both a comprehensive integration strategy and a programwide integration plan to help IRS successfully transition from its current environment to one that meets TSM-defined objectives and capabilities. A preliminary strategy described by IRS’s Executive for Systems Architecture called for (1) an integration approach that included a methodology to integrate current and future initiatives into the TSM systems architecture, (2) an associated problem detection and resolution process, and (3) the analysis processes (e.g., testing and quality assurance) required to ensure projects are being, and have been, successfully integrated. The preliminary strategy addressed both the integration of individual projects and the transition of all projects to an integrated processing environment. Since then, however, little has been done to complete a comprehensive integration strategy or to develop an integration plan that defines implementation guidance and processes. In 1994, IRS planned to perform further work on integration management but did not fund this effort in either fiscal year 1994 or fiscal year 1995.
Until there is an effective integration process and a completed integration plan in place, IRS will have little assurance that its systems modernization components will operate effectively together.

An organization performs system testing to detect system design and development errors and correct them before putting a system into operation. Inadequate testing increases the likelihood that errors will go undetected, reduces the extent to which a system can provide accurate and reliable processing services and information, and, because the discovery of errors is likely to be delayed, increases the cost of modifying the system. A testing plan ensures that sufficient testing is done during system development and prior to deployment. The plan defines, for example, what is to be accomplished during testing, who is to do the testing, where it is to be performed, and what constitutes success.

IRS acknowledges the importance of testing in the development of TSM systems but has not yet developed a complete and comprehensive testing plan for TSM. In addition, individual TSM system development projects are developing their own testing plans, which IRS describes as rudimentary and inadequate. As a result, IRS has no assurance that its individual systems will be thoroughly and consistently tested or that systems will perform correctly or effectively.

Currently, IRS performs system development testing in an operational environment using taxpayer data at its service centers or computer centers. Because tax processing production work at these facilities has a higher priority than testing, the time, computer, and human resources applied to testing, as well as the resulting depth of testing, are limited. This limitation seriously affects testing quality and completeness. This testing environment also introduces the possibility that testing can, under unforeseen circumstances, affect and disrupt production.
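One way to keep development testing from competing with, or disrupting, production processing is to run tests against synthetic fixture data in an isolated environment rather than against live taxpayer records. The sketch below uses an invented late-payment penalty rule (not IRS’s actual formula) purely to show the idea:

```python
# Hypothetical sketch: a tax computation tested entirely against
# synthetic records, so no production data or processing is touched.

def late_payment_penalty(balance_due: float, months_late: int,
                         monthly_rate: float = 0.005,
                         cap: float = 0.25) -> float:
    """Penalty accrues monthly on the unpaid balance, up to a cap.
    The rate and cap are invented values for illustration."""
    rate = min(monthly_rate * months_late, cap)
    return round(balance_due * rate, 2)

def test_late_payment_penalty():
    # Synthetic fixtures stand in for taxpayer data.
    assert late_payment_penalty(1000.00, 2) == 10.00    # 0.5% x 2 months
    assert late_payment_penalty(1000.00, 60) == 250.00  # capped at 25%
    assert late_payment_penalty(0.00, 12) == 0.00       # nothing owed

test_late_payment_penalty()
```

A testing plan would define many such cases in advance (what is tested, by whom, where, and what constitutes success), so that every project’s computations are exercised consistently before deployment rather than discovered to be wrong in production.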
To help overcome this situation, IRS plans to establish an Integration Test and Control Facility to provide an environment that will more effectively support the testing and integration of legacy and TSM systems. By establishing this testing facility, IRS expects to (1) improve the quality of delivered software, (2) provide information resources needed for testing and integration, and (3) reduce risks in integrating and transitioning from current legacy systems to TSM. In September 1994, IRS developed a concept of operations for the integrated testing facility, which describes its functions and responsibilities. IRS has been working with a contractor to define the facility’s functions and responsibilities. IRS is also working with the General Services Administration to select a facility site. However, until IRS completes its testing plans, implements effective testing processes, and establishes its Integration Test and Control Facility, it has little assurance that systems will be adequately and effectively tested.

To address IRS’s technical infrastructure weaknesses, we recommend that, before December 31, 1995, the IRS Commissioner
- complete an integrated systems architecture, including security, telecommunications, network management, and data management;
- institutionalize formal configuration management for all newly approved projects and upgrades and develop a plan to bring ongoing projects under formal configuration management;
- develop security concept of operations, disaster recovery, and contingency plans for the modernization vision and ensure that these requirements are addressed when developing information system projects;
- develop a testing and evaluation master plan for the modernization;
- establish an integration testing and control facility; and
- complete the modernization integration plan and ensure that projects are monitored for compliance with modernization architectures.
Completion of these actions in 1995 is essential so that the Congress, in carrying out its oversight role and making TSM funding decisions, has assurance that the government’s TSM investment is adequately protected through effective management of the technical aspects of tax processing modernization.

IRS agreed with our recommendations to improve its systems architectures, testing, and integration. IRS commented that it is identifying the necessary actions to ensure that defined systems development standards and architectures are enforced agencywide. IRS also said that it
- is planning for its 1996 IRS Information System Architecture to reflect a total system view;
- is reviewing existing documentation to determine how best to incorporate our security architecture recommendation;
- is improving its configuration management process by implementing change control, as well as developing guidance;
- has initiated a series of assessments for major TSM systems to review and baseline existing requirements for each deliverable, including documented interfaces;
- will merge integration testing, systems testing, and other testing-related personnel in one facility and is planning to establish an interim test and control capability; and
- has developed a release engineering approach to transition from its current environment to one meeting TSM-defined objectives and capabilities.

We believe that actions to improve TSM’s technical infrastructure, such as those IRS has outlined in its comments, are necessary prerequisites to adequately develop and implement new systems. In addition, while release engineering can facilitate the transition from IRS’s current environment to one meeting TSM-defined objectives and capabilities, to be successful it must be closely coordinated with requirements and configuration management.
Effective overall systems modernization management is important because TSM is not a one-time, turnkey replacement of all current subsystems; rather, it is a target system that will be reached by incrementally upgrading or replacing operational subsystems. Consequently, to successfully implement IRS’s systems modernization, an organizational structure must be in place to consistently manage and control all systems development efforts. This organizational structure would provide accountability and responsibility for all systems investments, including prioritizing new modernization systems and upgrades and maintaining all operational systems. However, below the Commissioner’s Office, the management authority and control needed to modernize tax processing has been fragmented.

Until recently, IRS’s Modernization Executive was responsible for developing TSM information systems up to the point at which they became operational. Under this executive, each TSM system was managed by a program control group that was tasked with reviewing the project, making milestone decisions, and mitigating project risks. In addition, the Chief Information Officer was responsible for developing non-TSM systems and for operating all IRS systems, including the TSM systems that were developed by the Modernization Executive and that had been in operation for about 1 year.

In addition to systems development and operations being managed and controlled by the Modernization Executive and the Chief Information Officer, several systems development projects were managed and controlled by IRS’s research and development division. For example, this division’s staff of 30 information specialists developed both Telefile and the Filing Fraud system, which are TSM systems. Neither the Modernization Executive nor the Chief Information Officer had decision-making responsibility for these systems or the authority to ensure compliance with IRS system development standards and practices.
During our April 28, 1995, meeting with the IRS Commissioner, we recommended that she establish consolidated, organizationwide control over all information systems investments, including all new systems in research and development and operational systems being upgraded and replaced. In May 1995, the Modernization Executive was named Associate Commissioner and given responsibility to manage and control all system development efforts that had previously been the responsibility of the Modernization Executive and the Chief Information Officer. However, the research and development division still does not report to the Associate Commissioner.

It is critical that the Associate Commissioner now establish organizationwide system modernization accountability and address the problems this report discusses. This entails
- ensuring strategic planning documents are complete and consistent;
- developing a comprehensive plan and schedule for linking reengineering efforts to systems development projects;
- exercising consolidated control over all information systems investments, including all new systems in research and development and operational systems being upgraded and replaced; and
- ensuring that defined systems development standards and architectures are enforced.

To fully strengthen systems development accountability and responsibility, we recommend that the IRS Commissioner give the Associate Commissioner management and control responsibility for all systems development activities, including those of IRS’s research and development division.

In commenting on a draft of this report, IRS reiterated that the Associate Commissioner is responsible for all aspects of modernization program planning and management, budget formulation and execution, and information systems development and management. Further, IRS said that it was considering whether the Associate Commissioner’s systems development responsibilities are to include those of the research and development division.
We strongly urge IRS also to give the Associate Commissioner accountability and responsibility for the research and development division’s systems development activities. Doing so will help ensure that systems development efforts are consistently managed and controlled organizationwide.
GAO reviewed the effectiveness of the Internal Revenue Service's (IRS) efforts to modernize tax processing, focusing on: (1) IRS business and technical practices in the areas of electronic forms, strategic information management, software development, technical infrastructures, and organizational controls; and (2) opportunities to improve IRS information systems management and software development capabilities. GAO found that: (1) despite IRS efforts to improve its tax processing, pervasive management and technical weaknesses still remain that could impede its modernization efforts; (2) IRS does not have a comprehensive business strategy to reduce paper submissions; (3) IRS has not yet fully developed the requisite software and technical infrastructures to successfully implement its modernization efforts; (4) other tax system modernization (TSM) weaknesses include IRS failure to fully implement strategic information management practices, an immature and weak software development capability, and incomplete systems architectures and integration and system planning; (5) IRS does not manage TSM as an investment, systems development is not driven by reengineering efforts, and IRS staff do not have the necessary skills to meet future IRS needs; and (6) IRS has not assigned responsibility, authority, and accountability for managing and controlling systems modernization to one individual or office.
The following section discusses Executive Order 12898, EPA’s framework for integrating environmental justice into the agency’s missions, key environmental justice stakeholders, and leading practices in strategic planning.

On February 11, 1994, the President signed Executive Order 12898 to address environmental justice concerns in minority and low-income populations. The executive order requires federal agencies to, among other things:
- make achieving environmental justice part of their missions by identifying and addressing, as appropriate, disproportionately high and adverse human health or environmental effects of programs, policies, and activities on minority and low-income populations;
- develop an agencywide environmental justice strategy that should (1) promote the enforcement of health and environmental laws in low-income and minority population areas; (2) ensure greater public participation in agency decision making; (3) improve research and data collection associated with environmental justice issues; and (4) identify minority and low-income patterns of consumption of natural resources;
- submit their environmental justice strategies to the Federal Interagency Working Group on Environmental Justice convened by the EPA Administrator, which is then to report governmentwide progress to the Executive Office of the President; and
- undertake certain activities, such as ensuring that documents are concise, understandable, and readily accessible and translating documents, where appropriate, to support public participation.

Executive Order 12898 calls on EPA and other federal agencies to address disproportionately high human health and environmental impacts on minority populations and low-income populations. The Council on Environmental Quality (CEQ), in the Executive Office of the President, oversees the federal government’s compliance with the executive order, as well as with the National Environmental Policy Act (NEPA).
In enacting NEPA in 1970, Congress declared that “it is the continuing responsibility of the Federal Government to use all practicable means, consistent with other essential considerations of national policy, to improve and coordinate Federal plans, functions, programs, and resources” to, among other things, “assure for all Americans safe, healthful, productive, and aesthetically and culturally pleasing surroundings.” Further, Congress mandated that before federal agencies undertake a major federal action significantly affecting the environment, they must consider the environmental impact of such actions on the quality of the human environment, such as cultural, economic, social, or health effects, including those on populations and areas with environmental justice concerns. To accomplish this mandate, NEPA regulations require, among other things, that federal agencies evaluate the likely environmental effects of proposed projects using an environmental assessment or, if the projects would likely significantly affect the environment, a more detailed environmental impact statement evaluating the proposed project and alternatives.

In its 1997 NEPA guidance, CEQ suggested definitions for key environmental justice terms to help federal agencies identify and address environmental justice concerns in fulfilling their NEPA responsibilities. For example, CEQ’s guidance proposed that agencies identify low-income populations by using the annual statistical poverty thresholds from the Bureau of the Census Current Population Reports. Further, the CEQ guidance identified two definitions for minority population: (1) the minority population of the affected area exceeds 50 percent; or (2) the minority population percentage of the affected area is meaningfully greater than the minority population percentage in the general U.S. population.
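CEQ’s two suggested tests for identifying a minority population can be expressed as a simple screening rule. In the sketch below, the 10-percentage-point margin used for “meaningfully greater” is an assumption for illustration only; CEQ’s guidance does not quantify that term:

```python
# Hypothetical sketch of CEQ's two suggested tests: the affected area's
# minority share exceeds 50 percent, OR it is "meaningfully greater"
# than the general U.S. share. The margin below is an invented stand-in
# for the unquantified "meaningfully greater" criterion.

def is_minority_population(area_minority_share: float,
                           national_minority_share: float,
                           meaningful_margin: float = 0.10) -> bool:
    exceeds_half = area_minority_share > 0.50
    meaningfully_greater = (area_minority_share
                            >= national_minority_share + meaningful_margin)
    return exceeds_half or meaningfully_greater

# For instance, an area at 45 percent minority against a 30 percent
# national share would qualify under the second test but not the first.
```

An agency applying such a rule would still need a defensible definition of the affected area and of the margin itself, which is precisely the kind of judgment CEQ’s guidance leaves to agencies.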
Moreover, in discussing whether human health or environmental effects are disproportionately high, CEQ’s guidance suggests that agencies consider three factors: (1) whether effects of proposed actions are significant or above generally accepted norms; (2) whether effects of proposed actions on minority, low-income, and tribal populations are significant and appreciably exceed risks to the general population; and (3) whether minority, low-income, or tribal populations are affected by the cumulative impacts of proposed actions.

EPA’s framework for integrating environmental justice into the agency’s missions includes four major plans: (1) EPA’s Fiscal Year 2011-2015 Strategic Plan, (2) Plan EJ 2014, (3) Plan EJ 2014’s Implementation Plans, and (4) Plan EJ 2014 Outreach and Communications Plan.

EPA’s Fiscal Year 2011-2015 Strategic Plan. EPA’s strategic plan provides a blueprint for how the agency expects to accomplish its priorities, including environmental justice. In addition to outlining strategic goals for advancing EPA’s mission to protect the environment and human health, it also outlines cross-cutting fundamental strategies that lay out specifically how EPA is to conduct its work over the next 5 years. These strategies include (1) expanding the conversation on environmentalism, which will involve engaging and empowering communities and partners, including those who have been historically under-represented, to support and advance environmental protection and human health, and (2) working for environmental justice and children’s health, which will involve reducing and preventing harmful exposures and health risks to children and underserved, disproportionately impacted low-income, minority, and tribal communities. EPA officials said that they expect that both strategies will influence the work of every program and regional office throughout the agency, especially with respect to environmental justice.

Plan EJ 2014.
Named in recognition of the 20th anniversary of Executive Order 12898, Plan EJ 2014 is EPA’s overarching strategy for implementing environmental justice in the agency’s programs, policies, and activities. Plan EJ 2014 is a 4-year plan designed to help EPA develop stronger relationships with communities and increase the agency’s efforts to improve environmental conditions and public health in overburdened communities. According to EPA officials, the activities outlined in the plan are aligned with and support EPA’s commitments in the 2011-2015 strategic plan.

Plan EJ 2014 defines three elements that are to guide EPA’s actions to advance environmental justice across the agency and the federal government: (1) cross-agency focus areas, (2) tools development efforts, and (3) program initiatives. The cross-agency focus areas are meant to address issues or functions that require work by all programs or agencies and serve to promote environmental justice across EPA and the federal government. The five cross-agency focus areas are
- Rulemaking—providing guidance and support for all agency rule writers and decision makers so they can better include environmental justice concerns in rules being written throughout the agency.
- Permitting—initially emphasizing EPA-issued permits that provide opportunities for helping overburdened populations; in the future, focusing on permits that would enable EPA to address the cumulative impacts of pollution on these populations.
- Compliance and enforcement—targeting pollution problems that tend to affect disadvantaged communities, and providing these communities with opportunities for input into the remedies sought in enforcement actions.
- Community-based action—engaging with overburdened communities and providing grants and technical assistance designed to help them address environmental problems.
- Administrationwide action on environmental justice—establishing partnerships and initiatives with other federal agencies to support holistic approaches to addressing environmental, social, and economic burdens of affected communities.

EPA’s four tools development efforts focus on developing the scientific, legal, and resource areas, as well as data and information areas, that support environmental justice analysis, community work, and communications and stakeholder engagement. For example, in March 2010, EPA held a symposium on the science of disproportionate impact analysis. In June 2010, the agency followed with an environmental justice analysis technical workshop. According to EPA officials, the agency is also working to develop a computer-based screening tool, known as EJ SCREEN, to assist with identifying the location of communities with potential environmental justice concerns.

The program initiatives focus on specific EPA programs, mainly the national programs. Plan EJ 2014 calls on EPA national program managers to identify relevant programmatic items that could benefit communities with environmental justice concerns. For example, according to EPA program documents, the Community Engagement Initiative in EPA’s Office of Solid Waste and Emergency Response (OSWER) could benefit communities with environmental justice concerns. This initiative focuses on identifying steps EPA can take to encourage communities and stakeholders to participate in developing and implementing hazardous materials policy and in evaluating the effectiveness of the agency’s actions. The initiative also focuses on identifying ways to institutionalize policy changes that aim to improve community engagement and environmental justice in the long-term, day-to-day operation of OSWER program activities. In addition, according to EPA program documents, the U.S.
Mexico Border Program, managed in the Office of International and Tribal Affairs, seeks to address environmental justice issues along the border shared by the two countries. This program is a cooperative effort designed to address pollutants that enter shared waterways, affecting the health of border residents as well as degrading the environment in both nations.

Plan EJ 2014 Implementation Plans. As guides for program and regional offices, EPA has developed implementation plans for every cross-agency focus area and developmental tool in Plan EJ 2014. Each implementation plan establishes unique goals, lays out strategies designed to meet those goals, and identifies national program offices and regional offices accountable for meeting plan goals within specified time frames. For example, the permitting plan outlines goals for providing disadvantaged communities with access to the agency’s permitting process and ensuring that permits address environmental justice issues to the greatest extent practicable. Its strategies call for EPA to develop the necessary tools and recommendations to enhance communities’ abilities to participate in permitting decisions and to enable agency staff to incorporate environmental justice into permits. According to the plan, EPA will decide how best to transmit and implement the permitting tools and recommendations by January 2012.

Plan EJ 2014 Outreach and Communications Plan. In June 2011, EPA provided GAO a draft of its EJ 2014 Outreach and Communications Plan. The plan reiterates EPA’s commitment to continuing many of its outreach and communication activities, such as environmental justice listening sessions, as the agency moves forward. The plan identifies four principal goals for conducting outreach and communicating both with EPA staff and external stakeholders, such as states, on Plan EJ 2014.
More specifically, the goals of the plan are to (1) inform and share the purpose, vision, priorities, and desired or resulting outcomes for Plan EJ 2014; (2) obtain a broad range of stakeholder views in the development, implementation, and ongoing enhancement/revision of Plan EJ 2014; (3) communicate Plan EJ 2014’s vision, activities, results, and subsequent revisions to stakeholders, partners, and audiences in a consistent and dynamic way; and (4) facilitate the development of partnerships with and among EPA’s stakeholders to achieve Plan EJ 2014’s goals and translate them into lasting results.

A number of external entities have a significant role in helping EPA integrate environmental justice into its programs, policies, and activities. Key stakeholders include the National Environmental Justice Advisory Council (NEJAC), the Federal Interagency Working Group on Environmental Justice (IWG), state agencies, and community groups.
- NEJAC was established by EPA charter pursuant to the Federal Advisory Committee Act in 1993. NEJAC provides independent advice and recommendations to the EPA Administrator on a broad array of strategic, scientific, technological, regulatory, and economic issues related to environmental justice. The council is comprised of a wide spectrum of stakeholders, including community-based groups, business and industry, state and local governments, tribal governments and indigenous organizations, and non-governmental and environmental groups. The council holds public meetings and teleconferences, providing a forum focusing on human health and environmental conditions in all communities, including minority and low-income populations.
- IWG was established under Executive Order 12898 in 1994.
Among other things, the IWG provides guidance to federal agencies on identifying disproportionately high and adverse effects on minority and low-income populations, assists in coordinating research and data collection conducted by federal agencies, and holds quarterly public meetings to share best practices for integrating and addressing environmental justice as well as identifying opportunities to enhance coordination and collaboration among federal agencies. The IWG comprises 15 federal agencies and several White House offices.

• EPA relies on states to help implement its programs under several key environmental statutes, such as the Clean Air Act and the Resource Conservation and Recovery Act (RCRA). Under these laws, generally once a state demonstrates that it meets the relevant criteria and is approved by EPA, the state accepts key day-to-day responsibilities, such as permitting and monitoring, and, in some programs, primary enforcement. As such, states are key stakeholders in EPA's environmental justice efforts because they will be largely responsible for carrying out many of the environmental justice activities identified by EPA. For example, under the Clean Air Act, EPA has established national ambient air quality standards for certain pollutants considered harmful to public health and the environment. States are responsible for developing and implementing plans, known as State Implementation Plans, to achieve and maintain these standards. In carrying out this duty, states set emissions limitations for individual sources of air pollution, which they incorporate into enforceable permits. Similarly, states with hazardous waste programs determined to be equivalent to the federal program and authorized under RCRA are responsible for carrying out the program, including such activities as issuing and enforcing permits for the storage, treatment, and disposal of hazardous waste.
Finally, EPA also works with states to implement various environmental grant and loan programs, such as the Clean Water and Drinking Water State Revolving Funds. Thus, states have the opportunity to consider environmental justice in developing their plans and programs, as well as in issuing permits and making grants.

• EPA has worked to include community groups as important stakeholders in the agency's environmental justice decision making. According to Plan EJ 2014, EPA envisions a continuous dialogue with communities and other stakeholders regarding efforts to integrate environmental justice into agency policies and programs. For example, EPA's National Enforcement Air Toxics Initiative and Office of Brownfields and Land Revitalization, among others, reflect a focus on issues that disadvantaged communities have conveyed to EPA. Further, EPA has developed various programs and tools, such as funding mechanisms, training, technical assistance, and information and analytical resources, to help communities understand and address their environmental problems.

In 1993, Congress enacted the Government Performance and Results Act (GPRA) to improve the efficiency and accountability of federal programs, among other purposes, and established a system for agencies to set goals for program performance and to measure results. GPRA requires, among other things, that federal agencies develop long-term strategic plans. The Office of Management and Budget (OMB) provides guidance to federal executive branch agencies on how to prepare their strategic plans in accordance with GPRA requirements. Federal departments and agencies must comply with GPRA requirements and are to follow associated OMB guidance in developing their department- or agencywide strategic plans. We have reported that these requirements also can serve as leading practices for strategic planning at lower levels within federal agencies, such as planning for individual divisions, programs, or initiatives.
In addition, we have reported in the past on federal agencies' strategic planning efforts and have identified additional useful practices to enhance agencies' strategic plans. We have reported that, taken together, the strategic planning elements established under GPRA and associated OMB guidance, and the practices identified by GAO, provide a framework of leading practices in federal strategic planning. See table 1 for selected leading practices in federal strategic planning.

EPA is implementing an agencywide approach to integrating environmental justice efforts, with its national program and regional offices taking primary roles. Stakeholders are also expected to play a major role in helping EPA integrate environmental justice into its programs and policies. EPA's national program and regional offices are primarily responsible for integrating environmental justice considerations into the agency's policies, programs, and activities. Under Plan EJ 2014, each national program office, along with selected regional offices, will have a key leadership role in helping to integrate environmental justice into the five cross-agency focus areas: rulemaking, permitting, enforcement, community-based actions, and administrationwide actions. Among other things, these offices will be responsible for implementing assigned Plan EJ 2014 cross-agency elements, engaging appropriate agency offices and regions, identifying and securing resources to ensure implementation, and tracking and reporting on progress in these areas. For example, EPA's Office of Enforcement and Compliance Assurance (OECA), which serves as the national program manager for environmental justice and provides general oversight of all agency environmental justice activities, and its region 5 office, comprising states in the upper Midwest, will share responsibility for ensuring that environmental justice concerns are incorporated into EPA's enforcement and compliance programs.
According to Plan EJ 2014, the goal over the next 3 years is to fully integrate environmental justice considerations into the planning and implementation of OECA's program strategies and its development of remedies in enforcement actions. To achieve these goals, OECA is engaging in a number of activities, such as considering environmental justice in the selection of its National Enforcement Initiatives (high-priority national environmental and compliance problems that are addressed through concentrated, nationwide enforcement efforts) for fiscal years 2011 through 2013; issuing internal guidance that calls for analysis and consideration of environmental justice in EPA's compliance and enforcement program; and increasing efforts to address environmental justice concerns by seeking appropriate remedies in enforcement actions to benefit overburdened communities. Similarly, EPA's Office of Air and Radiation (OAR) and Office of General Counsel (OGC), along with EPA region 1 (comprising the northeastern United States), are designated as co-leads for carrying out the permitting implementation plan. Some of the activities OAR and OGC are undertaking in the permitting focus area include developing a plan to engage stakeholders throughout the process, soliciting input from both internal and external stakeholders about the types of tools and recommendations that have been the most effective in advancing environmental justice, and identifying opportunities in EPA's ongoing permit activities to test the most viable tools and recommendations. Figure 1 shows the EPA offices responsible for implementing Plan EJ 2014. In addition to the program and regional offices, several other offices in EPA will have leadership roles in developing environmental justice tools in the areas of law, information, science, and resources to help better advance the agency's environmental justice efforts.
For example, EPA's Office of Policy and Office of Environmental Information will be co-leads in the development of information tools, most notably EJ SCREEN, intended to be a nationally consistent screening tool for environmental justice. According to the implementation plan for information, EJ SCREEN will not only help improve environmental justice analysis and decision making, but will also help communities better understand how EPA screens for potential environmental justice concerns. Some of the activities involved in developing EJ SCREEN include creating a working prototype of the tool, obtaining peer review and public comments on the prototype, and incorporating EJ SCREEN into EPA's common mapping software. EPA expects to make EJ SCREEN available to its national program and regional offices within the next 3 years. Other entities also have important roles in helping to integrate environmental justice into the daily activities of EPA, including the agency's Office of Environmental Justice (OEJ) and the Executive Management Council's Environmental Justice Committee. OEJ, which resides in OECA, provides support for the EPA Administrator, OECA, and other national program and regional offices on all environmental justice activities. The Executive Management Council's Environmental Justice Committee, which comprises deputy assistant administrators and deputy regional assistant administrators, also plays an important leadership role in implementing Plan EJ 2014 by, among other things, providing a forum for discussing critical policy issues and helping to establish workgroups or subcommittees to address cross-agency efforts. EPA expects stakeholders to play a major role in helping to integrate environmental justice considerations into EPA's programs, policies, and activities. As a result, EPA is renewing its commitment to work with key environmental justice stakeholders and exploring new approaches for obtaining stakeholder input.
EPA has renewed its efforts to work with key environmental justice stakeholders to advance the agency's environmental justice considerations. For example, EPA has renewed its communications with the IWG. In September 2010, EPA and the White House Council on Environmental Quality reconvened the IWG for the first time in more than a decade. At this meeting, the IWG members agreed to hold monthly meetings, assign senior officials from each agency to coordinate environmental justice activities, organize regional listening sessions in 2011, hold follow-up IWG Principals Meetings in September 2011, and plan a White House forum on environmental justice for environmental justice leaders and stakeholders. In addition, each agency was tasked with developing or updating its environmental justice strategy by September 2011. Moving forward, EPA documents indicate that the agency expects the IWG to help integrate environmental justice by, among other things, identifying opportunities for federal programs to improve the environment and public health, create sustainable economies, and address other environmental justice concerns for disadvantaged communities. According to EPA officials, EPA plans to work more closely with NEJAC in its efforts to integrate environmental justice into the mainstream of EPA's work. In her July 2009 remarks to NEJAC, the EPA Administrator noted that NEJAC's advice and recommendations will be especially pertinent to the agency as it seeks to place greater emphasis on the implementation and integration of environmental justice considerations. NEJAC recently issued reports with recommendations to the EPA Administrator on a variety of matters associated with environmental justice.
In 2009, NEJAC recommended how EPA, in partnership with federal, state, tribal, and local governmental agencies and other stakeholders, can most effectively promote strategies to identify, mitigate, or prevent disadvantaged communities from being disproportionately burdened by air pollution caused by transporting goods. In 2010, NEJAC recommended the best methods to use to communicate with communities on the monitoring of toxic air in schools. Most recently, in May 2011, NEJAC made recommendations on the appropriateness of the cross-agency focus areas EPA included in its Plan EJ 2014, ways that EPA can strengthen specific actions within the five cross-agency focus areas, and how EPA can prioritize the five cross-agency focus areas. EPA has also renewed its efforts to work with states to help integrate environmental justice efforts. In Plan EJ 2014, EPA observes that for the agency to achieve its environmental justice goals, such as incorporating environmental justice considerations into the permitting process, EPA will have to work more closely with states and provide them with better guidance. EPA has subsequently provided several forums to obtain state input on Plan EJ 2014. In addition, the agency has highlighted the need for state input in over half of the individual implementation plans associated with Plan EJ 2014. In an effort to ensure that stakeholders' views play a major role in helping to shape EPA's environmental justice efforts, EPA has stressed and, in some cases, begun providing for stakeholder involvement in several key environmental justice documents, including EPA's FY 2011-2015 Strategic Plan and Plan EJ 2014. For example, according to its strategic plan, EPA will address the access barriers faced by historically underrepresented groups to help improve the participation of these groups in the decision-making process.
The plan also calls for the use of traditional and new media to help inform and educate the public about EPA's activities and to provide opportunities for community feedback. The need for stakeholder involvement is similarly expressed in EPA's Plan EJ 2014 draft Outreach and Communications Plan. For instance, the agency's outreach and communications plan has a specific goal of obtaining a broad range of stakeholder views on Plan EJ 2014. Accordingly, EPA has developed a strategy to reach out to and look for opportunities to engage various stakeholders, including community members, businesses, states, local representatives, Alaska Natives and Native Hawaiians, and tribes. Moreover, according to its draft outreach and communications plan, EPA expects to schedule meetings and roundtables with stakeholder groups as well as look for opportunities to participate in national conferences and meetings held by other organizations to give presentations, seek input, and engage with others about Plan EJ 2014. The draft outreach and communications plan also specifies that a community engagement and stakeholder outreach plan is to be developed for each of the nine Plan EJ 2014 implementation plans. EPA has recently begun employing several new approaches to enhance stakeholder input in its environmental justice efforts, including conducting quarterly environmental justice outreach teleconferences as well as listening sessions on Plan EJ 2014. According to EPA documents, in July 2010, the agency began hosting quarterly environmental justice outreach teleconferences. The teleconferences provide an opportunity for those interested in environmental issues to call in and receive information on EPA's environmental justice activities. The teleconferences also allow stakeholders an opportunity to provide input on environmental justice efforts.
According to EPA officials, as the work on Plan EJ 2014 progresses, the quarterly teleconferences will help to better inform the public about the agency's environmental justice activities, as well as provide an opportunity for members of disadvantaged communities to call in and get information on federal efforts that could benefit them, such as grant opportunities. In addition, in June 2011, EPA began conducting a series of listening sessions on the draft Plan EJ 2014 Considering Environmental Justice in Permitting implementation plan. The listening sessions are intended to provide an opportunity for EPA to listen to stakeholders' ideas, concerns, and recommendations regarding EPA's environmental justice permitting initiative. According to EPA documents, EPA held six listening sessions in June 2011. The listening sessions were organized by stakeholder group; that is, there were separate listening sessions with state and local governments; business and industry; environmental groups; tribes; environmental justice communities and community groups; and Spanish-speaking stakeholders.

In developing a framework for incorporating environmental justice considerations into its policies, programs, and activities, EPA generally followed or partially followed the six leading federal strategic planning practices that we reviewed (see table 2). EPA generally followed three leading federal strategic planning practices:

Define mission and goals. In its Plan EJ 2014, EPA established a mission to integrate environmental justice into the agency's programs and policies through its cross-agency focus areas, tools development efforts, and program initiatives. The three key goals defined in Plan EJ 2014 generally focus on the outcome-oriented results that EPA aims to achieve in communities. Moreover, the implementation plans associated with Plan EJ 2014 contain goals for each of the nine cross-agency focus areas and tools development efforts.
The implementation plans generally align with EPA's overarching environmental justice goals. For example, in its implementation plan for the cross-agency focus area on supporting community-based action programs, EPA defined its goal as strengthening community-based programs to engage overburdened communities and building partnerships that promote healthy, sustainable, and green communities.

Ensure leadership involvement and accountability. As previously discussed, EPA's senior leadership has taken a number of steps to demonstrate its commitment to involving its leaders in advancing environmental justice in the agency, including giving the senior administrators of EPA program and regional offices lead responsibility for implementing Plan EJ 2014's cross-agency focus areas. EPA has also developed measures to ensure accountability for achieving its environmental justice mission. For example, EPA has required its national program offices to incorporate environmental justice priorities in their fiscal year 2012 National Program Manager Guidance documents. The guidance documents are annual plans that set forth each national program office's priorities and key actions for the upcoming year that support EPA's strategic plan and annual budget. The guidance also provides annual direction to regional offices on how to work with states on national priorities and serves as a mechanism to hold the regional offices accountable for specific levels of performance. For example, we reviewed the fiscal year 2012 National Program Manager Guidance from OAR and found that it included plans to consult with communities, develop programs and policies that reflect environmental justice concerns, and work with EPA regional offices to help educate and raise states' awareness of opportunities to address environmental justice issues.
In addition, EPA officials told us that fiscal year 2011 is the first year that the agency aligned its performance-based pay system to hold all senior executives accountable for advancing its environmental justice goals and mission. Specifically, EPA directed its senior executives to make individual commitments in their fiscal year 2011 annual performance plans for advancing the agency's environmental justice agenda.

Coordinate with other federal agencies. As previously discussed, EPA has made establishing partnerships with federal agencies a part of its overarching environmental justice goals in Plan EJ 2014 and has made fostering administrationwide action on environmental justice a cross-agency focus area in the plan. Moreover, in addition to reconvening the IWG, EPA has a number of other interagency initiatives under way that support its Plan EJ 2014. For example, in June 2009, EPA jointly established the Partnership for Sustainable Communities with the Departments of Housing and Urban Development and Transportation to support environmental justice and equitable development by coordinating federal actions on housing, transportation, and environmental protection. According to information on EPA's Web site, the three agencies worked together to distribute nearly $2 billion in grants in 2009 to recipients that included EPA Environmental Justice Showcase Communities to support vital transportation infrastructure, equitable comprehensive planning, and brownfields cleanup and reuse.

As of June 2011, EPA partially followed three of the leading practices in federal strategic planning that we reviewed. Without additional progress on these practices, EPA cannot assure itself, its stakeholders, and the public that it has established a framework to effectively guide and assess efforts to accomplish its environmental justice goals.
Specifically, EPA has not yet fully:

• established a clear strategy for how it will define key environmental justice terms or identified the resources it may need to carry out its environmental justice implementation plans;

• articulated clearly states' roles in ongoing planning and environmental justice integration efforts; and

• developed performance measures for eight of its nine implementation plans to track agency progress on its environmental justice goals.

EPA has taken actions to address many of the management challenges regarding the agency's efforts to integrate environmental justice into its programs and policies. However, the agency has not yet developed a strategy for how it will address one principal, long-standing challenge: the agency's lack of standard and consistent definitions for key environmental justice terms. In addition, EPA has yet to identify the budgetary and human resources that may be needed to implement its agencywide environmental justice plans. We have reported in the past that a primary purpose of federal strategic planning is to improve the management of federal agencies. In doing so, it is particularly important for agencies to develop strategies that address management challenges threatening their ability to meet long-term strategic goals. In addition, strategies should include a description of the resources needed to meet established goals.

Management challenges. EPA officials told us that they have taken a number of actions to address the management challenges identified by the EPA IG. For example, to address the EPA IG's finding that the agency lacked a clear mission for its Office of Environmental Justice, EPA has clarified and communicated the office's role through agency guidance and memoranda.
Additionally, EPA has addressed what the EPA IG considered a lack of a clear vision for integrating environmental justice by outlining the agency's approach to environmental justice in its agencywide fiscal year 2011-2015 strategic plan under its cross-cutting strategy for environmental justice and children's health. Further, EPA has addressed the lack of a comprehensive strategic plan to help guide its agencywide efforts to integrate environmental justice by establishing its Plan EJ 2014 and associated implementation plans. However, EPA has yet to establish a strategy for how it will provide standard and consistent definitions for key environmental justice terms, such as "minority" and "low-income communities," as called for by the EPA IG in 2004. In its 2004 report, the EPA IG found that, because the agency lacked definitions for these key terms from Executive Order 12898, its regional offices had used different approaches to identify potential areas of environmental justice concern. The EPA IG concluded that EPA had inconsistently implemented Executive Order 12898 and recommended that EPA provide its regions and program offices a standard and consistent definition for these terms, with instructions, through guidance or policy, on how the agency will implement and operationalize environmental justice into its daily activities. More recently, the EPA IG found that a lack of clear definitions continues to present a challenge to the agency. Specifically, in April 2011, the EPA IG reported that EPA could not execute efforts to track how it has distributed funds from the American Recovery and Reinvestment Act to low-income and minority communities because the agency did not have definitions for these particular communities.
EPA officials we interviewed told us that they have not developed agencywide definitions for key environmental justice terms, such as low-income and minority, because doing so could affect the agency's ability to accurately identify communities with potential environmental justice concerns. For example, the EPA officials stated that strict definitions for such terms would reduce their flexibility in considering other factors, which may be necessary to more accurately identify a community with environmental justice concerns. In addition, the EPA officials informed us that there are some communities across the country that may not meet a single definition for low-income or minority, but may nevertheless have environmental justice concerns. According to the EPA officials, these communities do not want EPA to establish any strict definitions for environmental justice terms for fear that as a result they might be excluded from EPA's decision-making process. EPA officials informed us that they are beginning to define some environmental justice terms with respect to the agency's EJ SCREEN tool. However, these definitions will have limited use. More specifically, EPA officials told us that the EJ SCREEN tool will include definitions for "low-income" and "minority," but these definitions are not intended to establish a standard for all of EPA's programs, policies, and activities. Rather, the officials told us that the agency intends EJ SCREEN to have a limited role across the agency and to be used only for baseline environmental justice screening. Without a clear strategy for how the agency will define key environmental justice terms, EPA may not be able to overcome the challenges it has faced in establishing a consistent and transparent approach for identifying communities with potential environmental justice concerns.
Moreover, without establishing consistent definitions, the agency may not be able to demonstrate that its environmental justice efforts are addressing minority and low-income populations that are experiencing disproportionate environmental health impacts.

Resource Needs. EPA also has yet to identify the budgetary and human resources that may be needed to implement its agencywide environmental justice plans. Specifically, none of the nine Plan EJ 2014 implementation plans describes the resources needed to carry out the strategies and activities detailed in the plans. According to EPA's plans, the agency intends to undertake changes in operations that will affect the workload as well as the roles and responsibilities of staff across the agency. These changes will include, among other things, additional processes for engaging communities during rulemaking development and additional analyses for conducting economic and risk assessments. This may involve allocating staff and funds differently to address skill gaps and workload changes. As we have reported in the past, effective strategies should describe the resources needed to accomplish established goals. EPA officials told us that their most recent review of environmental justice-related resources was completed in fiscal year 2009 in preparation for the proposed fiscal year 2010 President's budget. The review, which focused on the staffing resources allocated to the Office of Environmental Justice and to the regional offices, determined that each regional office needed additional full-time equivalents (FTE) for staff positions to promote the integration of environmental justice within regional work. EPA officials told us that as a result of the review, the agency increased the total agency staffing allocation of the Office of Environmental Justice from 21 to 33 FTEs.
Nonetheless, EPA completed the review before it had developed its draft Plan EJ 2014 and did not consider the staffing needs for incorporating environmental justice in decision making across all EPA program and regional offices. Senior EPA officials told us that they did not believe that identifying the resources associated with the activities detailed in the Plan EJ 2014 implementation plans was practical or necessary because they expect all EPA staff to work on environmental justice. Moreover, they said that they believe the new environmental justice efforts described in the implementation plans would result in only a negligible increase in resource needs because enhancing current program activities with environmental justice considerations or criteria should result in the same people doing many of the same things. For example, officials stated that they anticipate that including environmental justice considerations in economic and risk analyses conducted in support of regulatory decisions would involve adding several variables to otherwise resource-intensive studies and thus would not substantially alter the resources required to complete these analyses. Officials also stated that they believe a resource assessment would itself be resource-intensive and thus would only take resources away from more important program needs without a clear benefit to managers. Without a clear understanding of the resources needed to integrate environmental justice considerations throughout the agency under its current plans, EPA cannot ensure that its current staffing and funding resources are sufficient to meet its environmental justice goals. Furthermore, EPA cannot ensure that it has the information needed to successfully adapt to changes in workload as a result of new environmental justice initiatives or areas of focus, as well as potential changes in funding levels for the agency.
EPA's IG has recently identified EPA's policies and procedures for determining workforce levels as an area of significant internal control weakness. Specifically, in December 2010, the EPA IG reported that EPA cannot demonstrate that it has sufficient resources to accomplish its mission and cannot provide any assurance that its workforce levels are adequate to meet the workload of the agency. As mentioned earlier, EPA has taken a number of steps to involve some key stakeholders in helping the agency define its environmental justice mission, goals, and strategies. However, the role that states will have in ongoing environmental justice planning and implementation efforts is unclear. EPA relies heavily on many states for activities that generally include issuing permits and monitoring and enforcing compliance with federal environmental laws; therefore, states will play a significant role in implementing potential new approaches for addressing environmental justice. We have reported in the past that organizations that are successful in strategic planning understand that stakeholders will play a key role in determining whether their programs succeed or fail. Thus, involving stakeholders in strategic planning helps ensure that an agency's mission, goals, and strategies are targeted at the highest priorities. EPA has involved some key stakeholders to help define its environmental justice mission, goals, and strategies. For example, in July 2010, EPA requested that NEJAC provide the agency with recommendations and advice to help identify and prioritize the cross-agency focus areas in its Plan EJ 2014 and to help develop its strategy for the focus area on considering environmental justice in permitting.
EPA also obtained recommendations from academic researchers and environmental justice organizations during a symposium held in March 2010, which formed the basis for the goals and strategies identified in its Plan EJ 2014 Science Tools Development implementation plan. EPA officials assert that the agency similarly involved states in the initial stages of Plan EJ 2014 and its associated implementation plans and that these planning documents reflect states' input and concerns, particularly with respect to the cross-agency focus area on permitting. However, based on our review of these documents and interviews with EPA and state association officials, it is unclear how states will specifically be involved in the agency's ongoing environmental justice planning efforts as well as its implementation of these plans. Five Plan EJ 2014 implementation plans identify states as key stakeholders but provide limited detail on how states will be involved in ongoing planning regarding these efforts and in the actual implementation of the plans. For example, while the implementation plan for the cross-agency focus area on permitting generally indicates that state input will be obtained, the plan does not specify how states will be integrally involved in the planning for this focus area or the level of involvement expected from states in helping to implement the plan. Without articulating clearly in its plans how states will be involved in ongoing environmental justice planning efforts and what part states will play in helping EPA implement these plans, EPA cannot ensure that states are meaningfully involved in the ongoing planning and implementation of EPA's environmental justice integration efforts. EPA officials told us that they recognized that the implementation plans did not provide much detail on how states will be involved. However, they said that the agency planned to work more closely with states to obtain their views in finalizing the implementation plans.
Toward this end, EPA took some additional steps to obtain states’ views after the release of its draft implementation plans. For example, EPA held a teleconference listening session with officials from state and local governments in June 2011 to solicit states’ feedback on the topic of considering environmental justice in permitting. Notwithstanding these efforts, without more directly involving states in ongoing environmental justice planning and clearly articulating their roles and responsibilities in implementing environmental justice plans, EPA’s efforts to integrate environmental justice may be hampered, given the significant role that states have in administering some federal environmental programs. GAO and EPA’s IG have reported in the past on the challenges EPA has faced in achieving effective oversight of states across a range of its delegated programs. Most recently, the IG identified EPA’s oversight of its delegation to states as a key management challenge in fiscal year 2010. The IG noted that although EPA has taken a number of steps in recent years to improve its oversight of states, there remain a number of factors and practices that reduce the effectiveness of the agency’s oversight, including differences between state and federal policies, interpretations, and priorities. EPA has developed performance measures for one of its nine Plan EJ 2014 implementation plans to track progress on its environmental justice goals: its Resources Tools Development implementation plan. However, for the eight remaining implementation plans, EPA has proposed using deliverables and milestones to track its progress. For example, in its implementation plan for incorporating environmental justice into rulemaking, EPA committed to completing final technical guidance on considering environmental justice during the rulemaking process by fiscal year 2013.
EPA has not, however, developed clearly defined, quantifiable performance measures for assessing the extent to which each of its programs is incorporating the guidance in its rulemaking activities, the cost of its implementation, and its impact on EPA decisions. Deliverables and milestones can be important indicators of progress but are not adequate substitutes for performance measures. We have reported in the past that performance measures are a key element of effective strategic planning. They provide organizations with the ability to track the progress they are making toward their mission and goals, and provide managers with information on which to base their organizational and management decisions, including how effectively program and regional offices are integrating environmental justice in their decisions. Performance measures also create powerful incentives to influence organizational and individual behavior. Individual performance measures may address the type or level of program activities conducted (process), the direct products and services delivered by a program (outputs), or the results of those products and services (outcomes). We have also reported on the attributes most often associated with successful performance measures. More specifically, we reported that successful performance measures typically consist of nine attributes, which are summarized in table 3. Further, we have reported that developing performance measures requires coordinated planning. Agencies that are successful in measuring performance take a systematic approach to identifying and refining potential measures, such as (1) developing models that describe how a program’s activities produce outputs, such as the number of grants awarded, and how these outputs are connected to intermediate and end outcomes, or results, and (2) using rigorous criteria to select the most important performance measures.
The EPA officials we interviewed told us that the agency plans to develop performance measures linked to its Plan EJ 2014 goals, but it has not done so primarily because developing these measures is challenging and resource-intensive. We acknowledge that developing performance measures requires considerable thought and, in some cases, can be resource-intensive. However, without performance measures that align with EPA’s Plan EJ 2014 goals, the agency will lack the information it needs to assess how effectively it is performing relative to its environmental justice goals and the effect of its overall environmental justice efforts on intended communities. EPA’s renewed commitment to environmental justice has led to a number of actions, including revitalizing stakeholders’ involvement and developing agencywide implementation plans. In carrying out these efforts, the agency has generally followed most of the leading practices we reviewed in federal strategic planning. However, without additional progress on these practices, EPA cannot assure itself, its stakeholders, and the public that it has established a framework to effectively guide and assess its efforts to integrate environmental justice into the fabric of the agency. In particular, EPA has not yet established a strategy for how it will address the management challenges of defining key environmental justice terms or identifying the resources needed to accomplish its environmental justice integration goals. Without a clear strategy for how the agency will define key environmental justice terms, EPA may not be able to overcome the long-standing challenge of establishing a consistent and transparent approach for identifying potential communities with environmental justice concerns.
In addition, without a clear understanding of the resources needed to integrate environmental justice considerations throughout the agency, EPA cannot ensure that its current staffing and funding resources are sufficient to meet its environmental justice goals. Moreover, without this information, EPA may find itself unable to successfully adapt to future changes in workload, which are expected as a result of a greater emphasis on environmental justice, or potential changes in future funding levels. EPA has also not articulated in its implementation plans how states will be meaningfully involved in the ongoing planning and subsequent implementation of its environmental justice integration efforts. Without articulating clearly in its plans the roles and responsibilities of states, EPA cannot ensure that states are meaningfully involved in the planning and implementation of its environmental justice integration efforts, including efforts involving permits and enforcement and compliance. Finally, EPA does not have performance measures for eight of its nine Plan EJ 2014 implementation plans. Without performance measures that align with EPA’s Plan EJ 2014 goals, the agency will lack the information that EPA managers need to effectively assess how the agency is performing relative to its environmental justice goals and the effect of its overall environmental justice efforts on intended communities. To ensure that EPA continues to make progress toward the effective integration of environmental justice considerations into the agency’s programs, policies, and activities, we recommend that the Administrator of EPA direct the appropriate offices to take the following four actions:

- Develop a clear strategy to define key environmental justice terms in order to help the agency establish a consistent and transparent approach for identifying potential communities with environmental justice concerns.
- Conduct an assessment of the resources needed under its current plans to integrate environmental justice considerations throughout the agency to help ensure that EPA’s staffing and funding resources are sufficient to meet current environmental justice goals and future changes in workload, such as the provision of training to support use of key tools and guidance, and potential changes in funding levels.
- Articulate clearly in its plans the roles and responsibilities of states and continue recently initiated outreach efforts to help ensure that states are meaningfully involved in ongoing environmental justice planning and the subsequent implementation of Plan EJ 2014.
- Develop performance measures for Plan EJ 2014 to provide EPA managers with the information necessary to assess how effectively the agency is performing relative to its environmental justice goals and the effect of its overall environmental justice efforts on intended communities.

We provided a draft copy of this report to EPA for review and comment. We received a written response from the Assistant Administrator for the Office of Enforcement and Compliance Assurance on behalf of several EPA programs that work with EPA’s Office of Environmental Justice. EPA disagreed with two recommendations, partially agreed with one recommendation, and did not directly address one other recommendation in the report. Overall, EPA agreed that additional work is needed to ensure successful and effective implementation of Plan EJ 2014, the agency’s environmental justice strategy. EPA noted that our report provides a good overview of EPA’s progress and challenges in recent years in the agency’s environmental justice efforts and that our recommendations are particularly insightful and helpful as the agency begins to implement Plan EJ 2014.
In its comments, EPA disagreed with our recommendation to develop a strategy for defining key environmental justice terms in order to provide greater consistency in how environmental justice communities are identified. Instead, EPA believes that it can better identify communities overburdened by pollution, including those that are minority and low-income, by developing a nationally consistent environmental justice screening tool. EPA noted that the tool will allow the agency to meet its responsibility for protecting public health and the environment in a manner consistent with Executive Order 12898 and the agency’s goals under Plan EJ 2014. We acknowledge EPA’s efforts to develop a nationally consistent environmental justice screening tool (EJ SCREEN). However, in the course of our review, the EPA officials responsible for developing EJ SCREEN repeatedly cautioned us that this tool would have very limited capabilities and would need to be supplemented with additional information in order to adequately identify such communities. While agency officials informed us that EJ SCREEN will ultimately contain some definitions for environmental justice terms, these definitions will be limited to the screening tool’s use and would not have agencywide application. Absent definitions of key environmental justice terms that have agencywide application, integration efforts are likely to be inconsistent across EPA’s program and regional offices. As noted earlier, the EPA Inspector General identified such inconsistencies in 2004 and noted that such differences among EPA regional offices in identifying environmental justice communities were largely due to the lack of standard definitions for basic environmental justice terms, such as minority and low-income.
We believe that defining key environmental justice terms establishes a foundation on which EPA could more consistently identify minority or low-income communities disproportionately impacted by environmental or health hazards. Without this foundation, EPA’s environmental justice efforts will rely heavily on the interpretations of individual managers rather than on a consistent agencywide approach. EPA also disagreed with our recommendation to conduct a resource assessment for the activities associated with Plan EJ 2014. EPA noted that environmental justice is the responsibility of every program office and region. EPA stated that it will proactively monitor the agency’s progress in meeting the milestones and delivering the products identified in each of the Plan EJ 2014 implementation plans and will modify the implementation plans, as necessary, to reflect the need for training and other implementation support activities. While monitoring the agency’s progress in meeting Plan EJ 2014 goals is important, accounting for the resources committed to Plan EJ 2014 is essential for effective program management. Leading practices suggest that properly accounting for program resources, including funding and staffing, enables managers to better manage existing resources and plan for future programmatic needs. Such an assessment is particularly important in times when resources are constrained or are in danger of being either reduced or eliminated. Additionally, as we mentioned in our report, the EPA IG in December 2010 found that EPA did not have the internal controls necessary to properly determine that the agency has the right number of resources to accomplish its mission. Consequently, without a clear understanding of the resources needed, the agency’s ability to achieve its environmental justice integration goals might be compromised.
EPA partially agreed with our recommendation to continue its outreach efforts to states, but did not address the portion of the recommendation that called for EPA to more clearly articulate the roles and responsibilities of states in its Plan EJ 2014 implementation plans. EPA stated that the agency believes outreach to states and their meaningful involvement is important and expects these kinds of efforts to increase as the implementation of Plan EJ 2014 progresses. EPA specifically noted that outreach to states is established in its draft Plan EJ 2014 Outreach and Communications plan and is articulated in each implementation plan, as appropriate. EPA further noted that the involvement of states will vary by the nature of the work outlined in each implementation plan. We acknowledge that EPA has made progress in engaging states in Plan EJ 2014 and its associated implementation plans. Furthermore, we encourage EPA to continue its outreach efforts to help ensure that states are meaningfully involved in the agency’s environmental justice integration efforts. While EPA’s draft Plan EJ 2014 Outreach and Communications plan does provide for state involvement, the associated implementation plans do not contain sufficient detail on how states will be involved in EPA’s environmental justice planning efforts or their subsequent implementation. Because states play an integral part in the implementation of environmental justice, particularly as it relates to permitting, it is also important that states have a clear understanding of their respective roles and responsibilities. As an acknowledged roadmap for the agency’s environmental justice efforts, Plan EJ 2014 and its related documents should clearly articulate the roles and responsibilities of all key stakeholders.
Finally, EPA did not directly address our recommendation that the agency develop performance measures; rather, EPA said that it agreed that as the agency moves forward with implementing Plan EJ 2014, it should use and strengthen performance measures and develop other ways to ensure timely and effective implementation of the plan. EPA noted that it is currently relying on milestones and deliverables to monitor progress in the implementation of Plan EJ 2014. While project milestones and deliverables can provide valuable information on the progress of Plan EJ 2014 implementation, these measures do not adequately replace performance measures. As we reported, only one of the nine Plan EJ 2014 implementation plans contained performance measures. Consequently, while EPA managers may be able to determine if Plan EJ 2014 is on track for meeting the plan’s milestones and deliverables, they cannot determine whether the plan is ultimately achieving meaningful results, which performance measures would help the agency to discern. For this reason, EPA needs to develop performance measures for each of the implementation plans and incorporate these measures, as appropriate. In its comment letter, EPA notified us that Plan EJ 2014 and its implementation plans would be finalized in September 2011. As noted, our analysis for this report was based on draft versions of EPA’s planning documents because they had not yet been finalized at the time we sent our draft to EPA for review and comment. EPA released the plans publicly on September 14, as we were preparing to issue our report. Nevertheless, we did review the final plans and confirmed that they were not substantively different from the draft versions on which we based our conclusions and recommendations. EPA’s comments are presented in appendix II of this report. EPA also provided technical comments on the draft report, which we incorporated as appropriate.
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Administrator of EPA, and other interested parties. The report also will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or yocomc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. To examine how EPA is implementing its environmental justice efforts, we analyzed key EPA documents to identify offices with environmental justice responsibilities. Based on these documents, we interviewed senior officials from EPA’s Office of Environmental Justice and Office of Enforcement and Compliance Assurance to understand the roles and responsibilities of key offices, staff positions, and councils for implementing environmental justice efforts and to understand changes that EPA has undertaken in the organizational structure of environmental justice functions under the current Administration.
To evaluate the extent to which EPA is following leading strategic planning practices in establishing a framework for integrating environmental justice in its programs, policies, and activities, we identified six leading practices in federal strategic planning by reviewing (1) practices required at the federal department/agency level under the Government Performance and Results Act (GPRA) of 1993, which we have previously reported can also serve as leading practices for planning at lower levels within federal agencies, such as individual programs or initiatives; (2) practices identified in Office of Management and Budget (OMB) guidance to federal agencies for implementing GPRA’s requirements; and (3) related leading practices that GAO’s past work has identified. We selected these six leading practices because EPA’s environmental justice efforts are in the initial planning stage and we judged these practices to be the most relevant to evaluating EPA’s environmental justice strategic planning actions. We determined that other practices we have reported on in the past overlapped, to some degree, with the six selected practices. We also did not consider all of the elements that GPRA and OMB guidance require an agency to include in its agencywide strategic plan because our focus was on EPA’s planning process and not on the structure of its planning documents. We also reviewed recommendations made by EPA’s Office of Inspector General (IG) in 2004 regarding EPA’s management of its environmental justice efforts. We compared the planning activities associated with EPA’s environmental justice framework, i.e., EPA’s Fiscal Year 2011-2015 Strategic Plan, Plan EJ 2014, and the nine Plan EJ 2014 implementation plans, to the six leading practices, as shown in table 4. We reviewed EPA’s draft Plan EJ 2014 Outreach and Communications Plan, but did not assess it as part of the leading practices analysis because this plan was still in the early stages of development.
Our analysis for this report was based primarily on draft versions of EPA’s Plan EJ 2014 and its implementation plans because these documents were not finalized until mid-September 2011, as we were preparing to issue our report. Nevertheless, we did review the final plans and confirmed that they were not substantively different from the draft versions on which we based our conclusions and recommendations. We also interviewed senior EPA officials from key offices involved with integrating environmental justice in the agency, including EPA’s Office of Enforcement and Compliance Assurance, Office of Environmental Justice, Office of Air and Radiation, Office of Water, Office of Solid Waste and Emergency Response, Office of Policy, and Office of the Chief Financial Officer, to clarify the nature and intent of the agency’s activities. We also spoke with EPA officials about the extent to which they have incorporated past EPA IG recommendations in their current environmental justice efforts. Finally, we interviewed external stakeholders about their involvement in EPA’s environmental justice planning efforts. Specifically, we interviewed select members of the National Environmental Justice Advisory Council (NEJAC) and representatives from the Environmental Council of the States, the National Association of Clean Air Agencies, and the Association of State and Territorial Solid Waste Management Officials. We also discussed EPA’s actions to address the EPA IG’s 2004 recommendations with officials from the Office of Inspector General to obtain their views on EPA’s current actions. In addition to agency interviews, we participated in several EPA outreach teleconferences and attended NEJAC public meetings held in July and November 2010. We conducted this performance audit from May 2010 through September 2011, in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Vincent P. Price, Assistant Director; Elizabeth Beardsley; Elizabeth Curda; Pamela Davidson; Brian M. Friedman; John Johnson; Benjamin T. Licht; Alison O’Neill; Kiki Theodoropoulos; Jarrod West; and Eugene Wisnoski made key contributions to this report.
The Environmental Protection Agency (EPA) is responsible for promoting environmental justice--that is, the fair treatment and meaningful involvement of all people in developing, implementing, and enforcing environmental laws, regulations, and policies. In January 2010, the EPA Administrator cited environmental justice as a top priority for the agency. GAO was asked to examine (1) how EPA is implementing its environmental justice efforts, and (2) the extent to which EPA has followed leading federal strategic planning practices in establishing a framework for these efforts. To conduct this work, GAO reviewed EPA strategy documents and interviewed agency officials and key stakeholders. In recent years, EPA has renewed its efforts to make environmental justice an important part of its mission by developing a new strategy and approach for integrating environmental justice considerations into the agency's programs, policies, and activities. Under Plan EJ 2014, the agency's 4-year environmental justice implementation plan, EPA's program and regional offices are assuming principal responsibility for integrating the agency's efforts by carrying out nine implementation plans to put Plan EJ 2014 into practice. An important aspect of Plan EJ 2014 is to obtain input on major agency environmental justice initiatives from key stakeholders, including the National Environmental Justice Advisory Council, the Federal Interagency Working Group on Environmental Justice, impacted communities, and states. In developing its environmental justice framework, which consists of agency initiatives, including Plan EJ 2014 and the implementation plans, EPA generally followed most of the six leading federal strategic planning practices that we selected for review.
For example, EPA has generally defined a mission and goals for its environmental justice efforts, ensured leadership involvement and accountability for these efforts, and coordinated with other federal agencies--all consistent with leading practices in federal strategic planning. However, EPA has not yet fully (1) established a clear strategy for how it will define key environmental justice terms or identified the resources it may need to carry out its environmental justice implementation plans, (2) articulated clearly states' roles in ongoing planning and environmental justice integration efforts, or (3) developed performance measures for eight of its nine implementation plans to track agency progress on its environmental justice goals. Without additional progress on these practices, EPA cannot assure itself, its stakeholders, and the public that it has established a framework to effectively guide and assess its efforts to integrate environmental justice across the agency. GAO is recommending that EPA develop a clear strategy to define key environmental justice terms; conduct a resource assessment; articulate clearly states' roles in ongoing planning and future implementation efforts; and develop performance measures to track the agency's progress in meeting its environmental justice goals. GAO provided a draft of this report to EPA for comment. EPA disagreed with two of GAO's recommendations, partially agreed with one recommendation, and did not directly address the remaining recommendation. GAO believes that the recommended actions will help EPA ensure clear, consistent, and measurable progress as it moves forward in implementing Plan EJ 2014.
The structure of DHS’s acquisition function creates ambiguity about who is accountable for acquisition decisions. A common theme in our work on DHS’s acquisition management has been the department’s struggle from the outset to provide adequate support for its mission components and resources for departmentwide oversight. Of the 22 components that initially joined DHS from other agencies, 7 came with their own procurement support. In January 2004, a year after the department was created, an eighth office, the Office of Procurement Operations, was created to provide support to a variety of DHS entities. To improve oversight, in December 2005, the CPO established a departmentwide acquisition oversight program, designed to provide comprehensive insight into each component’s acquisitions and disseminate successful acquisition management approaches throughout DHS. DHS has set a goal of integrating the acquisition function more broadly across the department. Prior GAO work has shown that implementing acquisition effectively across a large federal organization requires an integrated structure with standardized policies and processes, the appropriate placement of the acquisition function within the department, leadership that fosters good acquisition practices, and a general framework that delineates the key phases along the path for a major acquisition. An effective acquisition organization has in place knowledgeable personnel who work together to meet cost, quality, and timeliness goals while adhering to guidelines and standards for federal acquisition. DHS, however, relies on dual accountability and collaboration between the CPO and the heads of DHS’s components.
The October 2004 management directive for its acquisition line of business—the department’s principal guidance for leading, governing, integrating, and managing the acquisition function—allows managers from each component organization to commit resources to training, development, and certification of acquisition professionals. It also highlights the CPO’s broad authority, including management, administration, and oversight of departmentwide acquisition. However, we have reported that the directive may not achieve its goal of creating an integrated acquisition organization because it creates unclear working relationships between the CPO and the heads of DHS components. For example, some of the duties delegated to the CPO have also been given to the heads of DHS’s components, such as recruiting and selecting key acquisition officials at the components and providing appropriate resources to support the CPO’s initiatives. Accountability for acquisitions is further complicated because, according to DHS, the Coast Guard and Secret Service were exempted from its acquisition management directive because of DHS’s interpretation of the Homeland Security Act. We have questioned this exemption, and recently CPO officials told us that they are working to revise the directive to make it clear that the Coast Guard and Secret Service are not exempt. Furthermore, for major investments—those exceeding $50 million—accountability, visibility, and oversight are shared among the CPO, the Chief Financial Officer, the Chief Information Officer, and other senior management. Recently, the DHS Inspector General’s 2007 semiannual report stated that an integrated acquisition system still does not exist, but noted that the atmosphere for collaboration between DHS and its component agencies on acquisition matters has improved. In addition, our work and the work of the DHS Inspector General have found acquisition workforce challenges across the department.
In 2005, we reported on disparities in the staffing levels and workload among the component procurement offices. We recommended that DHS conduct a departmentwide assessment of the number of contracting staff and, if a workload imbalance were found, take steps to correct it by realigning resources. In 2006, DHS reported significant progress in providing staff for the component contracting offices, though much work remained to fill the positions with qualified, trained acquisition professionals. DHS has established a goal of aligning procurement staffing levels with contract spending at its various components by the last quarter of fiscal year 2009. Staffing of the CPO Office has also been a concern, but recent progress has been made. According to CPO officials, their small staff faces the competing demands of providing acquisition support for urgent needs at the component level and conducting oversight. For example, CPO staff assisted the Federal Emergency Management Agency in contracting for the response to Gulf Coast hurricanes Katrina and Rita. As a result, they needed to focus their efforts on procurement execution rather than oversight. In 2005, we recommended that the Secretary of Homeland Security provide the CPO with sufficient resources to effectively oversee the department’s acquisitions. In 2006, we reported that the Secretary had supported an increase of 25 positions for the CPO to improve acquisition management and oversight. DHS stated that these additional personnel will significantly contribute to continuing improvement in the DHS acquisition and contracting enterprise. To follow up on some of these efforts, we plan to conduct additional work on DHS acquisition workforce issues in the near future. Our prior work has shown that in a highly functioning acquisition organization, the CPO is in a position to oversee compliance by implementing strong oversight mechanisms.
Accordingly, in December 2005, the CPO established a departmentwide acquisition oversight program, designed to provide comprehensive insight into each component’s acquisition programs and disseminate successful acquisition management approaches throughout DHS. The program is based in part on elements essential to an effective, efficient, and accountable acquisition process: organizational alignment and leadership, policies and processes, financial accountability, acquisition workforce, and knowledge management and information systems. The program includes four recurring reviews, as shown in table 1. In September 2006, we reported that the CPO’s limited staff resources had delayed the oversight program’s implementation, but the program is well under way, and DHS plans to implement the full program in fiscal year 2007. Recently, the CPO has made progress in increasing staff to authorized levels, and as part of the department’s fiscal year 2008 appropriation request, the CPO is seeking three additional staff, for a total of 13 oversight positions for this program. We plan to report on the program later this month. While this program is a positive step, we have reported that the CPO lacks the authority needed to ensure the department’s components comply with its procurement policies and procedures such as the acquisition oversight program. We reported in September 2006 that the CPO’s ability to effectively oversee the department’s acquisitions and manage risks is limited, and we continue to believe that the CPO’s lack of authority to achieve the department’s acquisition goals is of concern. In 2003, DHS put in place an investment review process to help protect its major, complex investments. The investment review process is intended to reduce risk associated with these investments and increase the chances for successful outcomes in terms of cost, schedule, and performance. 
In March 2005, we reported that in establishing this process, DHS has adopted a number of acquisition best practices that, if applied consistently, could help increase the chance for successful outcomes. However, we noted that incorporating additional program reviews and knowledge deliverables into the process could better position DHS to make well-informed decisions on its major, complex investments. Specifically, we noted that the process did not include two critical management reviews that would help ensure that (1) resources match customer needs prior to beginning a major acquisition and (2) program designs perform as expected before moving to production. We also noted that the review process did not fully address how program managers are to conduct effective contractor tracking and oversight. The investment review process is still under revision, and the department’s performance and accountability report for fiscal year 2006 stated that DHS will incorporate changes to the process by the first quarter of fiscal year 2008. Our best practices work shows that successful investments reduce risk by ensuring that high levels of knowledge are achieved at these key points of development. We have found that investments that were not reviewed at the appropriate points faced problems—such as redesign—that resulted in cost increases and schedule delays. Concerns have been raised about the effectiveness of the review process for large investments at DHS. For example, in November 2006, the DHS Inspector General reported on the Customs and Border Protection’s Secure Border Initiative program, noting that the department’s existing investment oversight processes were sidelined in the urgent pursuit of SBInet’s aggressive schedule. The department’s investment review board and joint requirements council provide for deliberative processes to obtain the counsel of functional stakeholders. 
However, the DHS Inspector General reported that for SBInet, these prescribed processes were bypassed and key decisions about the scope of the program and the acquisition strategy were made without rigorous review and analysis or transparency. The department has since announced plans to complete these reviews to ensure the program is on the right track. To quickly get the department up and running and to obtain necessary expertise, DHS has relied extensively on other agencies' and its own contracts for a broad range of mission-related services and complex acquisitions. Governmentwide, increasing reliance on contractors has been a longstanding concern. Recently, in 2006, government, industry, and academic participants in a GAO forum on federal acquisition challenges and opportunities noted that many agencies rely extensively on contractors to carry out their basic missions. The growing complexity of contracting for technically difficult and sophisticated services increases challenges in terms of setting appropriate requirements and effectively monitoring contractor performance. With the increased reliance on contractors comes the need for an appropriate level of oversight and management attention to contracting for services and major systems. Our work to date has found that DHS faces challenges in managing services acquisitions through interagency contracting—a process by which agencies can use another agency's contracting services or existing contracts, often for a fee. In 2005, DHS spent over $6.5 billion on interagency contracts. We found that DHS did not systematically monitor or assess its use of interagency contracts to determine whether this method provides good outcomes for the department. Although interagency contracts can provide the advantages of timeliness and efficiency, use of these types of vehicles can also pose risks if they are not properly managed. GAO designated management of interagency contracting a governmentwide high-risk area in 2005.
A number of factors can make these types of contracts high risk, including their use by some agencies that have limited expertise with this contracting method and their contribution to a much more complex procurement environment in which accountability has not always been clearly established. In an interagency contracting arrangement, both the agency that holds the contract and the agency that makes purchases against it share responsibility for properly managing its use. However, these shared responsibilities often have not been well defined. As a result, our work and that of some agency inspectors general has found cases in which interagency contracting has not been well managed to ensure that the government is getting good value. Government agencies, including DHS components, have turned to a systems integrator in situations such as when they believe they do not have the in-house capability to design, develop, and manage complex acquisitions. This arrangement creates an inherent risk, as the contractor is given more discretion to make certain program decisions. Along with this greater discretion comes the need for more government oversight and an even greater need to develop well-defined outcomes at the outset. Our reviews of the Coast Guard's Deepwater program have found that the Coast Guard had not effectively managed the program or overseen the systems integrator. Specifically, we expressed concerns and made a number of recommendations to improve the program in three areas: program management, contractor accountability, and cost control through competition. While the Coast Guard took some actions in response to some of our concerns, it has recently announced a series of additional steps to address problems with the Deepwater program, including taking on more program management responsibilities from the systems integrator. We also have ongoing work reviewing other aspects of DHS acquisition management.
For example, we are reviewing DHS’s contracts that closely support inherently governmental functions and the level of oversight given to these contracts. Federal procurement regulation and policy contain special requirements for overseeing service contracts that have the potential for influencing the authority, accountability, and responsibilities of government officials. Agencies are required to provide greater scrutiny of these service contracts and an enhanced degree of management oversight, which includes assigning a sufficient number of qualified government employees to provide oversight, to better ensure that contractors do not perform inherently governmental functions. The risks associated with contracting for services that closely support the performance of inherently governmental functions are longstanding governmentwide concerns. We are also reviewing oversight issues related to DHS’s use of performance-based services acquisitions. If this acquisition method is not appropriately planned and structured, there is an increased risk that the government may receive products or services that are over cost estimates, delivered late, and of unacceptable quality. Since DHS was established in 2003, it has been challenged to integrate 22 separate federal agencies and organizations with multiple missions, values, and cultures into one cabinet-level department. Due to the complexity of its organization, DHS is likely to continue to face challenges in integrating the acquisition functions of its components and overseeing their acquisitions—particularly those involving large and complex investments. Given the size of DHS and the scope of its acquisitions, we are continuing to assess the department’s acquisition oversight process and procedures in ongoing work. Mr. Chairman, this concludes my statement. I would be happy to respond to any questions you or other members of the subcommittee may have at this time. 
For further information regarding this testimony, please contact John Hutton at (202) 512-4841 or huttonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this product. Other individuals making key contributions to this testimony were Amelia Shachoy, Assistant Director; Tatiana Winger; William Russell; Heddi Nieuwsma; Karen Sloan; and Sylvia Schatz. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In fiscal year 2006, the Department of Homeland Security (DHS) obligated $15.6 billion to support its broad and complex acquisition portfolio. Since it was tasked with integrating 22 separate federal agencies and organizations into one cabinet-level department, DHS has been working to create an integrated acquisition organization while addressing its ongoing mission requirements and responding to natural disasters and other emergencies. Due to the enormity of this challenge, GAO designated the establishment of the department and its transformation as high-risk in January 2003. This testimony discusses DHS's (1) challenges to creating an integrated acquisition function; (2) investment review process; and (3) reliance on contracting for critical needs. This testimony is based primarily on prior GAO reports and testimonies. The structure of DHS's acquisition function creates ambiguity about who is accountable for acquisition decisions because it depends on a system of dual accountability and collaboration between the Chief Procurement Officer (CPO) and the component heads. Further, a common theme in GAO's work on acquisition management has been DHS's struggle to provide adequate support for its mission components and resources for departmentwide oversight. In 2006, DHS reported significant progress in staffing for the components and the CPO, though much work remained to fill the positions. In addition, DHS has established an acquisition oversight program, designed to provide the CPO comprehensive insight into each component's acquisition programs and disseminate successful acquisition management approaches departmentwide. However, GAO continues to be concerned that the CPO may not have sufficient authority to effectively oversee the department's acquisitions. In 2003, DHS put in place an investment review process to help protect its major complex investments. 
In 2005, GAO reported that this process adopted many acquisition best practices that, if applied consistently, could help increase the chances for successful outcomes. However, GAO noted that incorporating additional program reviews and knowledge deliverables into the process could better position DHS to make well-informed decisions. Concerns have been raised about how the investment review process has been used to oversee its largest acquisitions, and DHS plans to revise the process. DHS has contracted extensively for a broad range of services and complex acquisitions. The growing complexity of contracting for technically difficult and sophisticated services increases challenges in terms of setting appropriate requirements and effectively monitoring contractor performance. However, DHS has been challenged to provide the appropriate level of oversight and management attention to its contracting for services and major systems.
Plutonium is a man-made, radioactive element that exists in different isotopes and physical forms. The different isotopes of plutonium have widely varying half-lives, ranging from 20 minutes to 76 million years. These isotopes are used to define the different grades of plutonium that are used in nuclear warheads and as fuel for nuclear reactors. Physically, plutonium exists in several forms—metal, which is relatively stable if packaged correctly, and other forms that are often unstable, such as oxides, solutions, residues, and scraps. During the production era, DOE recycled, purified, and converted the less stable forms of plutonium, which resulted from weapons production activities, into metal for use in nuclear warheads. Much of DOE's excess plutonium was not in a suitable form or packaged for long-term storage when weapons production ceased. As a result, some packaging and related problems have developed over time. (See app. I.) From World War II to the end of the Cold War, DOE and its predecessor agencies conducted nuclear research, produced plutonium, and manufactured and tested nuclear weapons at sites throughout the United States. No plutonium has been produced for weapons since 1988. The 99.5 metric tons of plutonium that remain in the U.S. government's inventory today are in the custody of the Department of Defense (DOD) and DOE. DOD has custody of the plutonium in warheads in the nuclear weapons stockpile, which are located at military bases around the world, and DOE manages the rest of the plutonium, which is located primarily at eight DOE sites: Argonne National Laboratory-West, Hanford Site, Idaho National Engineering and Environmental Laboratory (INEEL), Lawrence Livermore National Laboratory, Los Alamos National Laboratory, Pantex Plant, Rocky Flats Environmental Technology Site, and Savannah River Site. (See fig. 1.)
Although DOE no longer produces plutonium for weapons, some of the plutonium it produced in the past continues to present environmental, safety, and health hazards, as well as concerns about proliferation, and therefore requires careful management. The hazards and concerns associated with plutonium include the following: Plutonium is extremely toxic and can be fatal, especially when inhaled. Several kilograms of plutonium are sufficient to make a nuclear bomb. Although attempts are made to control access to nuclear materials, thefts have occurred in the former Soviet Union since the end of the Cold War, raising concerns about nuclear proliferation and international terrorism. Today, land, buildings, and equipment used in making nuclear weapons remain contaminated and present environmental hazards. To address these hazards, DOE expects to spend nearly $229 billion over the next 75 years. Although DOE does not track cleanup costs specifically for plutonium, a major portion of these costs can likely be attributed to plutonium or related activities. Additional information on the dangers of plutonium is provided in appendix I. Even though the United States no longer manufactures new nuclear weapons, some plutonium is still used in nuclear weapons and for research, development, and testing programs. DOD establishes nuclear weapons requirements, and DOE subsequently determines how much plutonium is necessary to support these requirements. The Nuclear Weapons Council (NWC) coordinates nuclear program activities between DOD and DOE and submits documents containing weapons requirements to the National Security Council and the President for approval. The nation's 99.5-metric-ton inventory of plutonium is divided into two categories—that which is allocated for national security (46.8 metric tons) and that which is designated as excess (52.7 metric tons). The national security plutonium is further allocated among several subcategories.
Although DOE could justify most of these allocations, we found that it had no technical basis for the amounts of plutonium allocated for reliability replacement warheads and for the strategic reserve. In 1995, for the first time in the history of the U.S. nuclear weapons program, the United States declared that 38.2 metric tons of weapons-grade plutonium was no longer needed for national security and was, therefore, excess. (In addition, DOE designated 14.5 metric tons of non-weapons-grade plutonium as excess.) According to DOE, this declaration was an important step in implementing the Nonproliferation and Export Control Policy, which was issued by the President in September 1993. This policy calls for the United States to eliminate, where possible, the accumulation of plutonium stockpiles and prevent the proliferation of weapons of mass destruction. According to DOE officials, DOE reviewed its existing plutonium inventory records to determine how much of its weapons-grade plutonium was needed for national security. All weapons-grade plutonium that was in the custody of DOD in the active and inactive stockpile and some of the weapons-grade plutonium assigned to and managed by DOE’s Defense Programs organization was categorized as needed for national security. This plutonium is for use in nuclear weapons; the strategic reserve; mutual defense; and research, development, and testing programs. All other plutonium that was assigned to or managed by any other DOE organizations (as well as the plutonium remaining with Defense Programs that was not required for national security) was categorized as not needed for national security. Ultimately, DOE will dispose of this excess plutonium. 
On the basis of this inventory review, DOE decided that 46.8 metric tons of weapons-grade plutonium should be held for national security and that the remaining 52.7 metric tons of plutonium—including 38.2 metric tons of weapons-grade and 14.5 metric tons of non-weapons-grade—could be declared excess to national security needs. The categorization of the current U.S. plutonium inventory is shown in table 1. Significant changes in the amounts of plutonium dedicated to national security are unlikely in the near future. According to DOE officials, the United States has no plans to formally declare additional amounts of plutonium excess to national security needs. According to one DOE official, any future declarations would depend on international agreements or political decisions, such as (1) Russia's ratification of the second Strategic Arms Reduction Treaty (START-II); (2) ratification of possible additional weapons reduction treaties, like START-III; or (3) a change in the role of nuclear weapons in the nation's defense posture. However, even these events would not necessarily result in additional declarations of excess plutonium. Instead, according to a DOE official, decreases in the active stockpile may be offset by reclassifying some of the plutonium from the active stockpile to the inactive stockpile or the strategic reserve. Therefore, even if the number of active warheads decreases, the total amount of plutonium allocated for national security will likely remain at 46.8 metric tons. The national security plutonium is allocated among four categories, and the amounts in these categories are classified. According to DOE, the allocations for the first and second categories, warheads in the active and inactive nuclear weapons stockpile, are in weapons in the custody of DOD. The remainder of the national security plutonium, managed by DOE, is allocated to the strategic reserve and to mutual defense and research and development programs.
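The categorization figures above (and in table 1) can be reconciled with a brief arithmetic sketch; the variable names are illustrative, and the tonnages are those reported in this report:

```python
# Reconciling the reported U.S. plutonium inventory (metric tons).
# Figures are as stated in this report; rounding is to one decimal place.
national_security = 46.8          # held for national security
excess_weapons_grade = 38.2       # weapons-grade plutonium declared excess in 1995
excess_non_weapons_grade = 14.5   # non-weapons-grade plutonium designated excess

excess_total = excess_weapons_grade + excess_non_weapons_grade
inventory_total = national_security + excess_total

assert round(excess_total, 1) == 52.7     # total excess plutonium
assert round(inventory_total, 1) == 99.5  # total U.S. inventory
print(f"Excess: {excess_total:.1f} t; inventory: {inventory_total:.1f} t")
```

The two reported categories sum exactly to the stated 99.5-metric-ton inventory, so no residual or unaccounted-for category is implied by the figures.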
Although DOE could justify the amounts of plutonium allocated to most of these categories, it could not provide a technical basis for the amounts allocated for reliability replacement warheads within the inactive stockpile and for the strategic reserve. Table 2 lists the allocations of national security plutonium and their principal uses and indicates whether the allocations have a technical basis. As table 2 indicates, DOE appeared to have a technical basis for most of the allocations of national security plutonium. DOE provided the following justifications for these allocations: The allocation for the active stockpile is determined through an annual process driven by DOD’s nuclear weapons requirements. DOD determines the types and numbers of weapons it wants to support national security needs, and DOE determines how much plutonium is needed for the required warheads and for their support. Augmentation warheads in the inactive stockpile are reserved to allow DOD and DOE to raise the active stockpile levels if necessary. Additional warheads in the inactive stockpile are held to replace warheads that are removed from the active stockpile and used for testing. The number of warheads needed as replacements is based on requirements of DOE’s Quality Assurance and Reliability Testing Program. The amount of plutonium held for mutual defense is based on signed agreements between the United States and its allies. The plutonium held for research and development is used by DOE’s laboratories and its amount is based on an established forecast and allotment system. 
While DOE appeared to have adequate justification for these allocations of national security plutonium, it could not justify the allocations of plutonium for reliability replacement warheads in the inactive stockpile or for the strategic reserve, which represent a significant portion of the national security plutonium: Neither DOE nor NWC officials could demonstrate a basis for the number of reliability replacement warheads being held to replace active stockpile warheads in case they develop reliability or safety problems. DOE and NWC could not demonstrate that an analysis of the failure rate for active warheads had been conducted or that a technical assessment had been done to determine the need for this level of backup support. According to DOE, the plutonium held in the strategic reserve is for rapidly building warheads to respond to unforeseen events (such as warhead failures) that are not already provided for in the inactive stockpile. However, neither DOE nor NWC officials could demonstrate that a technical analysis had been conducted to justify the amount of plutonium held for this purpose. DOE officials believe that the allocations of plutonium for reliability replacement warheads and for the strategic reserve are prudent because (1) nuclear weapons are required to deter forces hostile to the United States and its allies; (2) no new nuclear weapons are currently being designed, developed, or manufactured; (3) the United States has no active underground nuclear testing program; and (4) nuclear weapons in the stockpile are being retained beyond their original expected service life. For these reasons, DOD and DOE, in deciding how much plutonium to hold for reliability replacement warheads and for the strategic reserve, assume that all of the nuclear warheads in the active stockpile will fail.
Therefore, DOD and DOE believe that each active warhead needs to be supported either by a backup warhead in the reliability replacement category or by plutonium in the strategic reserve. While we recognize the prudence of holding some plutonium for these reasons, we question whether there is a technical basis for the amounts of plutonium being held in these two subcategories. Without a technical basis, the United States cannot be sure it is retaining the correct amount of plutonium for national security purposes. DOE estimates that it spends more than $2 billion a year, or over 12 percent of its current annual budget, to manage its plutonium inventory and perform other plutonium-related activities. Because excess plutonium is often held in unstable forms—such as oxides, solutions, residues, and scraps—it requires many management activities and is therefore costly to manage. In contrast, national security plutonium is generally stored in sealed metal weapons components, is relatively stable, and is therefore less costly to manage. However, the costs of managing excess plutonium are expected to decline after it is disposed of in a permanent repository, while the costs of managing national security plutonium are likely to continue indefinitely. From fiscal year 1995 through fiscal year 2002, DOE expects to spend about $18.8 billion on plutonium management and related activities at the eight sites responsible for managing most of its plutonium. These costs include about $10.5 billion for plutonium inventory management and about $8.3 billion for plutonium-related waste management and site cleanup. The inventory management costs include about $8.7 billion for excess plutonium and about $1.8 billion for national security plutonium.
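The cost figures above can be tied together in a short sketch (dollar amounts in billions, rounded as reported; the variable names are illustrative):

```python
# Reconciling DOE's estimated plutonium-related costs for fiscal years
# 1995 through 2002 (billions of dollars, as reported in this report).
excess_inventory_mgmt = 8.7         # managing the excess plutonium inventory
natl_security_inventory_mgmt = 1.8  # managing national security plutonium
waste_mgmt_and_cleanup = 8.3        # plutonium-related waste management and site cleanup

inventory_mgmt_total = excess_inventory_mgmt + natl_security_inventory_mgmt
overall_total = inventory_mgmt_total + waste_mgmt_and_cleanup

assert round(inventory_mgmt_total, 1) == 10.5  # total inventory management
assert round(overall_total, 1) == 18.8         # total plutonium-related spending
# Excess plutonium accounts for over 80 percent of inventory management costs.
assert excess_inventory_mgmt / inventory_mgmt_total > 0.80
```

As the final check indicates, excess plutonium drives most of the inventory management costs, consistent with the breakdown in table 3.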
The inventory management costs included in DOE’s estimate are for (1) storing and maintaining the plutonium inventories, including providing safeguards and security; (2) stabilizing, handling, and packaging the plutonium; (3) performing weapons-related activities, such as disassembling and dismantling weapons, managing the active stockpile, and conducting research and development; and (4) other activities, mainly managing DOE’s spent nuclear fuel containing plutonium. Plutonium-related waste management and site cleanup activities are generally attributable to past plutonium production or other plutonium-related activities at the sites. Thus, their associated costs cannot be linked directly to either excess or national security plutonium. Table 3 summarizes DOE’s estimates of these costs. As shown in table 3, over 80 percent ($8,686 million) of DOE’s inventory management costs are attributable to excess plutonium, while less than 20 percent ($1,793 million) are attributable to national security plutonium. The costs of managing excess plutonium are high because much of it—including some oxides, solutions, residues, and scraps—is unstable and requires costly handling, processing, packaging, and storage. At many DOE facilities, the plutonium in these forms remained in an unsafe condition after DOE stopped producing plutonium and nuclear weapons. As a result, contractors at these facilities are still stabilizing the plutonium and correcting packaging problems that remained when weapons production ceased. At Rocky Flats, for example, some of the excess plutonium is contained in highly acidic, corrosive solutions that can damage containers. Plutonium in this form creates a potential for leakage that could, in turn, expose workers to hazards or contaminate the environment. Accordingly, the plutonium in solutions must be stabilized and repackaged. 
In contrast, the costs of managing national security plutonium are relatively low because this plutonium is generally stored in sealed metal weapons components (pits), is relatively stable, and requires little near-term management, according to DOE officials. For example, at the Pantex Plant, which stores the majority of DOE’s national security plutonium in pits, the plutonium management costs are relatively low. Although the costs of managing excess plutonium are higher than those of managing national security plutonium, the excess plutonium will eventually be converted to safer forms and disposed of in a permanent underground repository. At that time, its management costs will fall. In contrast, the costs of managing national security plutonium will continue as long as the United States requires plutonium for its nuclear weapons. Given that DOE has no plans to reduce its requirements for national security plutonium or to categorize additional amounts as excess, these costs can be expected to continue into the foreseeable future. In addition to the current and near-term costs of managing plutonium from fiscal year 1995 through fiscal year 2002, DOE expects to incur long-term costs, through about 2023, for storing and converting excess plutonium to safer forms that will ultimately be disposed of in a permanent underground repository. On the basis of early conceptual design data and preliminary plans, DOE estimates that these costs will total more than $3 billion. This estimate is based on DOE’s January 1997 record of decision, which details the Department’s plan for storing and converting excess plutonium to forms that are difficult to reuse in nuclear weapons and are suitable for permanent disposal. To convert the excess plutonium to such forms, DOE has decided to pursue a dual-track strategy: burning the plutonium in reactors and immobilizing it in glass or ceramics. 
However, uncertainties surrounding both the storage and the conversion parts of DOE’s strategy have unknown cost and schedule implications. While DOE’s recent record of decision focuses on converting the nation’s excess plutonium to safer forms for disposal, DOE must store the plutonium until it can be converted and then store the converted plutonium until a repository is available for its disposal. Currently, neither facilities for converting the plutonium nor a repository for its permanent disposal is available. Until DOE has developed and built conversion facilities, it plans to store the excess plutonium at five sites. DOE estimates that this storage could cost over $1 billion from 2002 through 2019. This estimate includes approximately $140 million for constructing a new storage facility at Savannah River; about $390 million for upgrading, expanding, and operating the facilities at Pantex and Savannah River; and as much as $600 million for operating the storage facilities at Hanford, INEEL, and Los Alamos. After the plutonium is converted, DOE plans to store the canisters of immobilized plutonium and the spent fuel at the conversion facilities until a permanent repository is available for their final disposal. DOE’s dual-track strategy calls for the use of two different technologies to convert the plutonium into safer forms that meet the “spent fuel standard.” This standard requires that the plutonium be made as inaccessible and unattractive for use in nuclear weapons as the plutonium in spent fuel from commercial nuclear power reactors. One of the conversion tracks involves immobilizing plutonium in either glass or ceramic material within small containers. These containers are placed inside large stainless steel canisters, which are then filled with glass containing high-level waste to provide a radiation barrier. The other track converts plutonium into spent fuel by burning it as fuel in existing commercial reactors. 
The plutonium is first processed into plutonium dioxide, which is then mixed with uranium dioxide to make mixed oxide (MOX) fuel. The MOX fuel is then burned in a commercial reactor to generate electricity. Regardless of the conversion track used, the end product will meet the spent fuel standard and will ultimately require disposal in a permanent underground repository. Figure 2 illustrates DOE’s storage and conversion strategy. In addition to over $1 billion in storage costs, DOE estimates that implementing its dual-track conversion strategy will cost approximately $2 billion through about 2023. (See app. II for more information on DOE’s schedule estimates for conversion.) This cost estimate reflects both investment and operating costs. Investment costs cover research and development, licensing, conceptual design, start-up, engineering, capital equipment, and construction. Operating costs cover staffing, maintenance, consumables, waste management, and decontamination and decommissioning. Also, in estimating the MOX fuel costs, DOE assumed that some costs could be recovered when reactor operators acquire MOX fuel from DOE instead of purchasing conventional reactor fuel. DOE refers to these recovered costs as fuel displacement credits. Table 4 presents a breakdown of DOE’s cost estimate for the conversion strategy. Although DOE has developed a strategy for storing and converting excess plutonium, this strategy is subject to uncertainties that will affect its implementation. These uncertainties are associated with technology, facility, and nonproliferation issues. How these uncertainties are resolved will determine whether DOE uses one or both of the conversion technologies, how much plutonium will be converted through either technology, and how long the plutonium will have to be stored before and after conversion. Uncertainties are associated with developing the immobilization technology and implementing the MOX fuel technology in the United States. 
Neither technology has yet been proved effective for use in DOE’s conversion strategy, and both pose issues that must be addressed prior to implementation: Although immobilization has been used for other industrial purposes and other materials, it has never been used on an industrial scale for plutonium. Unresolved questions include how the plutonium will react in the immobilization processing, how stable and durable the immobilized material will be, how difficult recovering the plutonium from the immobilized forms will be, and what percentage of plutonium will be immobilized in glass or ceramics. MOX fuel technology is more advanced and has been used in reactors in other countries for many years. However, MOX fuel is not currently being used in reactors in the United States, no U.S. reactors are licensed to use this fuel, and no MOX fuel fabrication facilities exist in the United States. Additional uncertainties surrounding the MOX technology include the percentage of plutonium that will be used in the U.S. MOX fuel (likely to differ from the percentage used in the European MOX fuel) and the potential effects, on the fuel or reactors, of materials that were added to the plutonium used in weapons components. In addition to fully developing and implementing the two technologies and addressing these uncertainties, DOE must demonstrate the technologies’ compliance with regulatory and oversight requirements. Because both conversion technologies are relatively immature and uncertainties surround their development and implementation, DOE cannot confidently forecast how long it will have to store the excess plutonium before conversion facilities are available. Under DOE’s plans, the consolidation and storage of plutonium will be complete in about 2019 at Pantex, about 2011 at Savannah River, and as early as 2006 at the three remaining sites—Hanford, INEEL, and Los Alamos. 
Delays in conversion would extend the time the plutonium would have to be stored at some or all of the storage sites. Questions about facilities also pose uncertainties, most of which stem from the immaturity of the conversion technologies. That is, until the technologies are further developed, DOE cannot decide on the type and number of facilities it will need for immobilization. Furthermore, DOE has not yet decided where to place the facilities that will be required to process the plutonium, whether for immobilization or for use in MOX fuel. Similarly, DOE has not determined the type, number, or locations of the commercial reactors that will be needed to burn the MOX fuel. Resolving these issues will depend not only on the maturation of the conversion technologies but also on such things as contract negotiations with reactor owners, licensing requirements, and environmental reviews. Further uncertainties are associated with the underground repository where DOE plans to permanently dispose of converted plutonium. Although DOE assumes that a permanent repository will be ready to accept the converted plutonium in 2010 (12 years later than originally expected), DOE cannot be certain that a repository will open on schedule. DOE is currently assessing the Yucca Mountain site to determine its viability for a repository. In January 1997, we reported that several impediments and uncertainties about standards and licensing must be resolved in order for DOE to achieve its revised 2010 opening date. If a repository is not available, the converted plutonium will have to remain in storage at the conversion facilities and the costs of storage will increase. DOE faces uncertainties concerning nonproliferation issues. DOE’s conversion strategy was designed, in part, to support U.S. nonproliferation goals. The United States is beginning to implement the dual-track conversion strategy to set an example for Russia and encourage it to take similar actions. 
However, according to DOE, the schedule for converting the excess U.S. plutonium depends on reaching agreements with Russia concerning reductions of its stockpiles of excess plutonium. To date, no such agreements have been finalized. These agreements will also influence the extent to which DOE relies on each of the two conversion strategies. The United States has taken important steps to reduce the dangers of nuclear proliferation associated with holding excess plutonium. However, how accurately DOD and DOE determine the amount of plutonium needed for national security and how much DOE designates as excess may have important long-term implications. Without a technical basis for its categorizations, we believe that the United States cannot be certain it is retaining the correct amount of plutonium for national security purposes. Potential impacts of not holding the correct amount include the following:

DOD relies on DOE to provide enough plutonium to support the nuclear stockpile. Without a technical analysis of the amounts required for each of the national security subcategories, DOE cannot ensure that it is holding the correct amount of plutonium to provide this support. Conversely, if DOD and DOE are holding more plutonium than is needed for national security, they may not be fully implementing U.S. policies to reduce existing stockpiles of excess weapons-usable plutonium as quickly as practicable.

Within DOE, plans and budgets depend on how plutonium is categorized. DOE’s plan for the long-term storage and management of national security plutonium is based on current allocations to that category. Similarly, DOE’s plan for storing and converting excess plutonium relies on the amount categorized as excess. A change in the amount of plutonium allocated to either category could affect DOE’s projected costs and schedules for both.

We provided a draft of this report to DOE, NWC, and DOD for their review and comment. 
While NWC declined to comment on this report, DOD, as a component of NWC, provided comments on the draft. Although DOE and DOD generally agreed that the information in the report was accurate, they disagreed with our position that a technical basis is lacking for the allocations of national security plutonium for reliability replacement warheads in the inactive nuclear weapons stockpile and for the strategic reserve. In its response to our draft report, DOE noted that the requirements for reliability replacement warheads and for the strategic reserve are prescribed by DOD. DOE also expressed “high confidence the nuclear force structure, as specified by DOD, is based on solid technical analysis and is consistent with legislation, treaties, and policy decisions.” (See app. IV for DOE’s comments.) To follow up on DOE’s written comments, we asked the Director of the Office of Nuclear Weapons Management, Defense Programs, to clarify the Department’s reference to a “solid technical analysis.” While agreeing that DOE could not demonstrate that such an analysis had been conducted for the allocations of plutonium for reliability replacement warheads and for the strategic reserve, he maintained that these allocations are based on prudence and expertise. The Director clarified that the reference to a “solid technical analysis” pertained to the allocations for warheads in the active stockpile, not to the allocations for reliability replacement warheads and for the strategic reserve. As indicated on pages 8 and 9 of this report, we did not question the technical basis for the allocations of plutonium for the active stockpile. In response to our draft report, the Deputy Assistant to the Secretary of Defense (Nuclear Matters) stated that DOD disagreed with our position that the plutonium allocations for reliability replacement warheads and for the strategic reserve lack a technical basis. (See app. V for DOD’s comments.) 
DOD said that the number of nuclear warheads for reliability replacement and the quantity of plutonium for the strategic reserve are documented in the Nuclear Weapons Stockpile Memorandum and the Long Range Planning Assessment. We agree that these documents specify the amounts of plutonium allocated to these two categories, but these documents do not provide the underlying technical analysis used to determine these amounts. Throughout our review, DOE and DOD officials were unable to demonstrate an underlying technical basis, using scientific or engineering methods or data, for the allocations of plutonium for reliability replacement warheads and for the strategic reserve. These officials told us that the allocations assume a 100-percent failure rate for warheads in the active stockpile. As stated, we believe that a technical analysis is needed to support the reasonableness of this assumption. Therefore, we did not change the content of our report in response to this comment. However, both DOE and DOD provided clarifying comments, which we incorporated into our report as appropriate. To review DOE’s categorization of plutonium and cost estimates for managing plutonium, we interviewed DOE officials, reviewed DOE documents, and analyzed cost data obtained through a survey that we sent to the eight sites responsible for managing most of DOE’s plutonium inventory. We conducted our work from June 1996 through April 1997 in accordance with generally accepted government auditing standards. Detailed information about our scope and methodology appears in appendix III. Please contact me at (202) 512-3841 if you or your staff have any questions. Major contributors to this report are listed in appendix VI.

Plutonium (Pu) is primarily a man-made element, produced by irradiating uranium in nuclear reactors. It exists in various forms and grades and is used in nuclear warheads and as fuel in nuclear reactors. 
Plutonium-239 is fissile and can sustain a nuclear chain reaction, making this isotope suitable for nuclear weapons. Plutonium-240 is more radioactive and generates more heat than plutonium-239. The percentage of plutonium-240 in plutonium material determines whether it is classified as weapons grade (less than 7 percent Pu-240), fuel grade (7 to 19 percent Pu-240), or reactor grade (more than 19 percent Pu-240). Spent nuclear fuel, a by-product of power generation in nuclear reactors, also contains some plutonium but would require extensive reprocessing to be reused in a weapon or reactor. The different forms of plutonium have varying half-lives—for example, plutonium-239 has a half-life of about 24,000 years.

The plutonium that the Department of Energy (DOE) produced is held in several physical forms, including metals, oxides, solutions, residues, and scraps. Most of DOE’s plutonium is stored as a metal because, during the production era, plutonium was recycled and purified to metal form for use in nuclear warheads. Plutonium oxide, a fine powder produced when plutonium metal reacts with oxygen, was formed when weapons were manufactured or when plutonium metal was inadvertently exposed to oxygen. Containers holding acidic and corrosive plutonium solutions are vulnerable to leakage. Residues or scraps, the by-products of past weapons production activities, generally contain plutonium in concentrations of less than 10 percent. Throughout the weapons complex, the plutonium in residues and scraps is mixed with over 100 metric tons of other materials and waste.

Dangers of Plutonium

Although DOE has ceased to manufacture plutonium for use in nuclear weapons, the plutonium produced in the past continues to present hazards. Because plutonium is highly radioactive, it poses acute dangers to human health and the environment, as well as to national security, unless it is properly stored and safeguarded. 
Land, buildings, equipment, and materials contaminated with plutonium also present environmental hazards that must be cleaned up or contained. When DOE stopped producing nuclear materials, much of its plutonium was improperly stored, posing health, safety, and environmental hazards. If not safely contained and managed, plutonium can be dangerous to human health, even in extremely small quantities. Inhaling a few micrograms of plutonium creates a long-term risk of lung, liver, and bone cancer. Inhaling larger doses can cause immediate lung injuries and death. The potential for exposure occurs when containers or packaging fails to fully contain the plutonium. Leakage from corroded containers or inadvertent accumulations of plutonium dust in piping or duct work present hazards, especially in aging, poorly maintained, or obsolete facilities. After assessing the vulnerabilities associated with its storage of plutonium, DOE began stabilizing, packaging, or repackaging the more unstable forms—including oxides, solutions, residues, and scraps—so that these materials, as well as plutonium metals, can be stored properly while they await disposition. Like uranium, plutonium is a key ingredient in nuclear weapons, and several kilograms suffice to make a nuclear bomb. According to DOE, most nations and some terrorist groups would be able to produce nuclear weapons if they had access to sufficient quantities of nuclear materials. Therefore, controls on access to nuclear materials are the primary technical barrier to nuclear proliferation in the world today. Several thefts of weapons-usable nuclear materials in the former Soviet Union have been confirmed since the end of the Cold War, leading the Director of the Central Intelligence Agency to warn that these materials are more available now than ever before in history. To help reduce the risk of nuclear proliferation posed by plutonium and other nuclear materials, the United States and Russia are working towards nuclear arms reduction treaties. 
Agreements such as the Strategic Arms Reduction Treaties (START-I and START-II) require that weapons be retired from deployed status and their delivery systems be removed or destroyed. These treaties do not, however, require that the nuclear warheads be dismantled or that their parts and materials, including plutonium, be destroyed. The United States has nevertheless removed some weapons from its stockpile, dismantled their warheads, and stored or disposed of their components and key nuclear materials. In addition, through a “lead and hedge” approach, the United States is encouraging Russia to reduce both the number of nuclear warheads in its arsenal and the amount of nuclear material it maintains to support these warheads. Specifically, the United States plans to “lead” the Russians by reducing the U.S. arsenal of strategic warheads, as agreed in the START-II treaty. At the same time, it plans to “hedge” by maintaining its ability to return to the levels established under START-I, should the need for additional warheads arise. Although the U.S. Senate approved ratification of START-II in January 1996, the Russian parliament has not yet scheduled a vote on it. Because of Russia’s delay in ratifying START-II, the Department of Defense (DOD) is evaluating its ability to resume START-I levels of nuclear warheads in the active stockpile. Now that DOE is no longer producing plutonium for nuclear weapons, it is changing its focus to cleaning up the environmental contamination created by 50 years of production at its facilities. In its consolidated financial statements for fiscal year 1996, DOE estimated that it will spend nearly $229 billion over the next 75 years to clean up sites where plutonium and other nuclear materials were fabricated and used to produce nuclear weapons. DOE has not determined what portion of these costs can be attributed specifically to plutonium or plutonium-related activities. Assuming a 1997 start date, DOE estimates the conversion mission will end in 2023. 
DOE’s estimate breaks the schedule into four overlapping activities: (1) preparing the plutonium for conversion, (2) immobilizing the plutonium, (3) fabricating mixed oxide (MOX) fuel, and (4) burning the MOX fuel in reactors. Figure II.1 shows the schedule for these four activities.

Figure II.1: DOE’s Schedule for Implementing Its Dual-Track Conversion Strategy (timeline covering fiscal years 1997 through 2026)

Preoperational activities include research and development and engineering; licensing, permitting and siting; modifications; and selecting a utility or utilities to operate the reactor(s) that will burn the MOX fuel. The last MOX fuel assembly will achieve the spent fuel standard in about 2020, although irradiation of the fuel will continue into 2023.

Our objectives for this assignment were to (1) review how much plutonium the United States allocated for national security, how much was designated as excess, and how DOE determined these amounts; (2) review DOE’s estimates of the current and near-term costs for managing plutonium; and (3) review DOE’s estimates of the long-term costs for managing plutonium. To review DOE’s and the Nuclear Weapons Council’s (NWC) categorization of plutonium and any changes that have occurred or are projected for the future, we interviewed DOE and NWC officials and gathered and analyzed information from both organizations. As agreed with the requester’s office, our study did not include DOD’s roles and activities except to the extent that DOD participates in NWC. Therefore, although DOD manages the plutonium contained in active nuclear warheads, we did not include the cost of managing this plutonium. To determine the current and near-term costs of managing DOE’s plutonium, we interviewed officials and gathered and analyzed data from DOE sites and headquarters. 
We conducted a survey of the eight DOE sites that, according to DOE’s 1996 report Plutonium: The First 50 Years, maintain the majority of DOE’s plutonium inventory. These sites are Argonne National Laboratory-West, Hanford Site, Idaho National Engineering and Environmental Laboratory, Lawrence Livermore National Laboratory, Los Alamos National Laboratory, Pantex Plant, Rocky Flats Environmental Technology Site, and Savannah River Site. The survey asked each site to identify its (1) actual costs for fiscal years 1995 and 1996, (2) budget estimates for fiscal year 1997, and (3) projected cost estimates for fiscal year 1998 through fiscal year 2002. All cost estimates were adjusted to constant 1996 dollars. We also included each site’s share of the program oversight costs incurred by DOE headquarters and operations offices, applying DOE’s own standard formula (4.3 percent plus local adjustments) to the cost estimate provided by each site. DOE’s budget and accounting systems do not separately collect or report plutonium-specific costs. Therefore, DOE provided its “best estimates” of plutonium-related costs, based on available cost information as well as officials’ technical expertise and professional judgment. We could not readily verify the data’s accuracy, as we would have done had the data been derived from a budget and accounting system. However, we discussed our data-gathering approach with cognizant DOE officials, coordinated our request for data through the Office of the Chief Financial Officer, and provided our summarized cost data to DOE officials, who agreed that the data-gathering approach was reasonable and that the data provided by the field sites were probably the best that could be obtained under the circumstances. Similarly, officials from the Congressional Research Service and Congressional Budget Office reviewed the cost data and suggested no changes. 
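The oversight-cost adjustment described above can be sketched as a simple calculation. This is an illustrative reconstruction, not DOE's actual accounting: the 4.3-percent rate comes from the report, but the function name and the example dollar figure are hypothetical, and the unquantified "local adjustments" default to zero.

```python
# Hypothetical sketch of the cost-aggregation step described above: each
# site's reported estimate is increased by a 4.3-percent program oversight
# share, plus any site-specific local adjustment (not quantified in the
# report, so it defaults to zero here). Dollar figures are invented.
OVERSIGHT_RATE = 0.043

def site_total(reported_cost_millions: float, local_adjustment: float = 0.0) -> float:
    """Return a site's cost estimate including its oversight share."""
    return reported_cost_millions * (1 + OVERSIGHT_RATE) + local_adjustment

# A site reporting $100 million would carry about $4.3 million in oversight.
print(round(site_total(100.0), 1))  # prints 104.3
```

The same multiplier would be applied to each of the eight surveyed sites before summing to the program total.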
To obtain information on the long-term costs of managing plutonium, we interviewed DOE officials and examined various DOE documents, including the Record of Decision for the Storage and Disposition of Weapons-Usable Fissile Materials Final Programmatic Environmental Impact Statement and documents prepared to support it. In addition, we reviewed DOE’s consolidated financial statements for fiscal year 1996. We conducted our review between June 1996 and April 1997 in accordance with generally accepted government auditing standards.

Department of Energy Contract Management (GAO/HR-97-13, Feb. 1997).
Nuclear Waste: Impediments to Completing the Yucca Mountain Repository Project (GAO/RCED-97-30, Jan. 17, 1997).
Nuclear Waste: Uncertainties About Opening Waste Isolation Pilot Plant (GAO/RCED-96-146, July 16, 1996).
Nuclear Waste: Greater Use of Removal Actions Could Cut Time and Cost for Cleanups (GAO/RCED-96-124, May 23, 1996).
Energy Downsizing: While DOE is Achieving Budget Cuts, It Is Too Soon to Gauge Effects (GAO/RCED-96-154, May 13, 1996).
Nuclear Weapons: Status of DOE’s Nuclear Stockpile Surveillance Program (GAO/T-RCED-96-100, Mar. 13, 1996).
Nuclear Nonproliferation: U.S. Efforts to Help Newly Independent States Improve Their Nuclear Material Controls (GAO/T-NSIAD/RCED-96-118, Mar. 13, 1996).
Nuclear Nonproliferation: Status of U.S. Efforts to Improve Nuclear Material Controls in Newly Independent States (GAO/NSIAD/RCED-96-89, Mar. 8, 1996).
Nuclear Nonproliferation: Concerns With the U.S. International Nuclear Materials Tracking System (GAO/T-RCED/AIMD-96-91, Feb. 28, 1996).
Nuclear Waste: Management and Technical Problems Continue to Delay Characterizing Hanford’s Tank Waste (GAO/RCED-96-56, Jan. 26, 1996).
Nuclear Safety: Concerns With Nuclear Facilities and Other Sources of Radiation in the Former Soviet Union (GAO/RCED-96-4, Nov. 7, 1995).
Nuclear Waste: Issues Affecting the Opening of DOE’s Waste Isolation Pilot Plant (GAO/T-RCED-95-254, July 21, 1995). 
Department of Energy: Savings From Deactivating Facilities Can Be Better Estimated (GAO/RCED-95-183, July 7, 1995).
Nuclear Nonproliferation: Information on Nuclear Exports Controlled by U.S.-EURATOM Agreement (GAO/RCED-95-168, June 16, 1995).
Nuclear Facility Cleanup: Centralized Contracting of Laboratory Analysis Would Produce Budgetary Savings (GAO/RCED-95-118, May 8, 1995).
Nuclear Materials: Plutonium Storage at DOE’s Rocky Flats Plant (GAO/RCED-95-49, Dec. 29, 1994).
Nuclear Waste: Change in Test Strategy Sound, but DOE Overstated Savings (GAO/RCED-95-44, Dec. 27, 1994).
Nuclear Waste: DOE’s Management and Organization of the Nevada Repository Project (GAO/RCED-95-27, Dec. 23, 1994).
Nuclear Waste: Comprehensive Review of the Disposal Program Is Needed (GAO/RCED-94-299, Sept. 27, 1994).
Nuclear Waste: Yucca Mountain Project Behind Schedule and Facing Major Scientific Uncertainties (GAO/RCED-93-124, May 21, 1993).
Pursuant to a congressional request, GAO reviewed the Department of Energy's (DOE) management of its plutonium inventory, focusing on: (1) how much plutonium the United States allocated for national security needs, how much it designated as excess, and how DOE determined these amounts; (2) DOE's estimates of the current and near-term costs for managing plutonium; and (3) DOE's estimates of the long-term costs for managing plutonium. GAO noted that: (1) the United States allocated 46.8 metric tons of its 99.5 metric-ton plutonium inventory for national security purposes and designated the remaining 52.7 metric tons as excess; (2) to determine how much plutonium was needed for national security, DOE reviewed its plutonium inventory database; (3) in general, the plutonium in the custody of the Department of Defense and some of the plutonium managed by DOE's Defense Programs (the organization responsible for supporting the nation's nuclear weapons) was categorized as needed for national security purposes; (4) the remaining plutonium managed by Defense Programs and other DOE organizations was categorized as excess to national security needs and will ultimately be disposed of; (5) the national security plutonium is further divided into several subcategories; (6) DOE has a technical basis to support the need for the amounts of plutonium it holds in most of its national security needs, but not for the plutonium it holds for reliability replacement warheads and strategic reserves; (7) from fiscal year (FY) 1995 through FY 2002, DOE expects to spend about $18.8 billion on plutonium management and related activities; (8) these costs consist of about $10.5 billion for plutonium inventory management activities, including approximately $1.8 billion for national security plutonium and $8.7 billion for excess plutonium; (9) DOE expects to spend another $8.3 billion for plutonium-related waste management and site cleanup activities; (10) the costs of managing excess plutonium are 
about four times greater than the costs of managing national security plutonium because much of the excess plutonium is held in unstable forms and requires special management activities, such as handling, processing, and packaging; (11) national security plutonium is generally contained in more stable forms, such as metals and weapons components, and therefore requires less management; (12) DOE also expects to spend over $3 billion for longer-term plutonium storage and conversion activities through about 2023; (13) this estimate is based on DOE's plans for storing the excess plutonium and converting it to forms that will make it more difficult to reuse in nuclear weapons; and (14) however, DOE's cost and schedule estimates are subject to many uncertainties, a number of which stem from the relative immaturity of the planned conversion technologies.
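As a quick arithmetic check, the inventory and cost figures summarized above are internally consistent. The snippet below simply re-adds the report's numbers (metric tons and billions of dollars) and is illustrative only; the variable names are not DOE terminology.

```python
# Re-adding the report's figures as a consistency check.
national_security_mt, excess_mt = 46.8, 52.7        # metric tons of plutonium
print(round(national_security_mt + excess_mt, 1))    # prints 99.5 (total inventory)

# $ billions, FY 1995-2002: inventory management = national security + excess
inventory_mgmt = round(1.8 + 8.7, 1)
waste_and_cleanup = 8.3
print(inventory_mgmt, round(inventory_mgmt + waste_and_cleanup, 1))  # prints 10.5 18.8
```

The separate $3 billion estimate for longer-term storage and conversion through about 2023 sits outside this $18.8 billion near-term total.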
FDA is responsible for overseeing the safety and effectiveness of medical devices that are marketed in the United States, whether manufactured in domestic or foreign establishments. All establishments that manufacture medical devices for marketing in the United States must register with FDA. As part of its efforts to ensure the safety, effectiveness, and quality of medical devices, FDA is responsible for inspecting certain domestic and foreign establishments to ensure that they meet manufacturing standards established in FDA’s quality system regulation. FDA does not have authority to require foreign establishments to allow the agency to inspect their facilities. However, FDA has the authority to prevent the importation of products manufactured at establishments that refuse to allow an FDA inspection. Unlike food, for which FDA primarily relies on inspections at the border, physical inspection of manufacturing establishments is a critical mechanism in FDA’s process to ensure that medical devices and drugs are safe and effective and that manufacturers adhere to good manufacturing practices. Within FDA, the Center for Devices and Radiological Health (CDRH) assures the safety and effectiveness of medical devices. Among other things, CDRH works with the Office of Regulatory Affairs (ORA), which conducts inspections of both domestic and foreign establishments to ensure that devices are produced in conformance with federal statutes and regulations, including the quality system regulation. FDA may conduct inspections before and after medical devices are approved or otherwise cleared to be marketed in the United States. Premarket inspections are conducted before FDA will approve U.S. marketing of a new medical device that is not substantially equivalent to one that is already on the market. Premarket inspections primarily assess manufacturing facilities, methods, and controls and may verify pertinent records. 
Postmarket inspections are conducted after a medical device has been approved or otherwise cleared to be marketed in the United States and include several types of inspections: (1) Quality system inspections are conducted to assess compliance with applicable FDA regulations, including the quality system regulation to ensure good manufacturing practices and the regulation requiring reporting of adverse events. These inspections may be comprehensive or abbreviated, which differ in the scope of inspectional activity. Comprehensive postmarket inspections assess multiple aspects of the manufacturer’s quality system, including management controls, design controls, corrective and preventative actions, and production and process controls. Abbreviated postmarket inspections assess only some of these aspects, but always assess corrective and preventative actions. (2) For-cause and compliance follow-up inspections are initiated in response to specific information that raises questions or problems associated with a particular establishment. (3) Postmarket audit inspections are conducted within 8 to 12 months of a premarket application’s approval to examine any changes in the design, manufacturing process, or quality assurance systems. FDA determines which establishments to inspect using a risk-based strategy. High priority inspections include premarket approval inspections for class III devices, for-cause inspections, inspections of establishments that have had a high frequency of device recalls, and other devices and manufacturers FDA considers high risk. The establishment’s inspection history may also be considered. A provision in FDAAA may assist FDA in making decisions about which establishments to inspect because it authorizes the agency to accept voluntary submissions of audit reports addressing manufacturers’ conformance with internationally established standards for the purpose of setting risk-based inspectional priorities. 
FDA’s programs for domestic and foreign inspections by accredited third parties provide an alternative to the traditional FDA-conducted comprehensive postmarket quality system inspection for eligible manufacturers of class II and III medical devices. MDUFMA required FDA to accredit third persons—which are organizations—to conduct inspections of certain establishments. In describing this requirement, the House of Representatives Committee on Energy and Commerce noted that some manufacturers have faced an increase in the number of inspections required by foreign countries, and that the number of inspections could be reduced if the manufacturers could contract with a third-party organization to conduct a single inspection that would satisfy the requirements of both FDA and foreign countries. Manufacturers that meet eligibility requirements may request a postmarket inspection by an FDA-accredited organization. The eligibility criteria for requesting an inspection of an establishment by an accredited organization include that the manufacturer markets (or intends to market) a medical device in a foreign country and the establishment to be inspected must not have received warnings for significant deviations from compliance requirements on its last inspection. MDUFMA also established minimum requirements for organizations to be accredited to conduct third-party inspections, including protecting against financial conflicts of interest and ensuring the competence of the organization to conduct inspections. FDA developed a training program for inspectors from accredited organizations that involves both formal classroom training and completion of three joint training inspections with FDA. Each individual inspector from an accredited organization must complete all training requirements successfully before being cleared to conduct independent inspections. 
FDA relies on manufacturers to volunteer to host these joint inspections, which count as FDA postmarket quality system inspections. A manufacturer that is cleared to have an inspection by an accredited third party enters an agreement with the approved accredited organization and schedules an inspection. Once the accredited organization completes its inspection, it prepares a report and submits it to FDA, which makes the final assessment of compliance with applicable requirements. FDAAA added a requirement that accredited organizations notify FDA of any withdrawal, suspension, restriction, or expiration of certificate of conformance with quality systems standards (such as those established by the International Organization for Standardization) for establishments they inspected for FDA. In addition to the Accredited Persons Inspection Program, FDA has a second program for accredited third-party inspections of medical device establishments. On September 7, 2006, FDA and Health Canada announced the establishment of PMAP. This pilot program was designed to allow qualified third-party organizations to perform a single inspection that would meet the regulatory requirements of both the United States and Canada. The third-party organizations eligible to conduct inspections through PMAP are those that FDA accredited for its Accredited Persons Inspection Program (and that completed all required training for that program) and that are also authorized to conduct inspections of medical device establishments for Health Canada. To be eligible to have a third-party inspection through PMAP, manufacturers must meet all criteria established for the Accredited Persons Inspection Program. As with the Accredited Persons Inspection Program, manufacturers must apply to participate and be willing to pay an accredited organization to conduct the inspection. FDA relies on multiple databases to manage its program for inspecting medical device manufacturing establishments. 
DRLS contains information on domestic and foreign medical device establishments that have registered with FDA. Establishments that are involved in the manufacture of medical devices intended for commercial distribution in the United States are required to register annually with FDA. These establishments provide information to FDA, such as establishment name and address and the medical devices they manufacture. As of October 1, 2007, establishments are required to register electronically through FDA’s Unified Registration and Listing System and certain medical device establishments pay an annual establishment registration fee, which in fiscal year 2008 is $1,706. OASIS contains information on medical devices and other FDA-regulated products imported into the United States, including information on the establishment that manufactured the medical device. The information in OASIS is automatically generated from data managed by U.S. Customs and Border Protection, which are originally entered by customs brokers based on the information available from the importer. FACTS contains information on FDA’s inspections, including those of domestic and foreign medical device establishments. FDA investigators enter information into FACTS following completion of an inspection. According to FDA data, more than 23,600 establishments that manufacture medical devices were registered as of September 2007, of which 10,600 reported that they manufacture class II or III medical devices. More than half—about 5,600—of these establishments were located in the United States. As of September 2007, there were more registered establishments in China and Germany reporting that they manufacture class II or III medical devices than in any other foreign countries. Canada, Taiwan, and the United Kingdom also had a large number of registered establishments. (See fig. 1.) Registered foreign establishments reported that they manufacture a variety of class II and III medical devices for the U.S. market. 
For example, common class III medical devices included coronary stents, pacemakers, and contact lenses. FDA has not met the statutory requirement to inspect domestic establishments manufacturing class II or III medical devices every 2 years. The agency conducted relatively few inspections of foreign establishments. The databases that provide FDA with data about the number of foreign establishments manufacturing medical devices for the U.S. market contain inaccuracies. In addition, inspections of foreign medical device manufacturing establishments pose unique challenges to FDA—both in human resources and logistics. From fiscal year 2002 through fiscal year 2007, FDA primarily inspected establishments located in the United States, where more than half of the 10,600 registered establishments that reported manufacturing class II or III medical devices are located. In contrast, FDA inspected relatively few foreign medical device establishments. During this period, FDA conducted an average of 1,494 domestic and 247 foreign establishment inspections each year. This suggests that each year FDA inspects about 27 percent of registered domestic establishments that reported manufacturing class II or class III medical devices and about 5 percent of such foreign establishments. The inspected establishments were in the United States and 44 foreign countries. Of the foreign inspections, more than two-thirds were in 10 countries. Most of the countries with the highest number of inspections were also among those with the largest number of registered establishments that reported manufacturing class II or III medical devices. The lowest rate of inspections in these 10 countries was in China, where 64 inspections were conducted in this 6-year period and almost 700 establishments were registered. (See table 1.) 
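The approximate inspection rates cited above follow directly from the annual averages and the registration counts reported in this statement. A minimal sketch; note that using the September 2007 registration counts (5,616 domestic and 4,983 foreign establishments) as constant denominators for the fiscal year 2002 through 2007 averages is a simplifying assumption:

```python
# Average annual inspections, FY 2002 through FY 2007, divided by the
# number of registered establishments reporting class II or III device
# manufacture. Holding the September 2007 registration counts constant
# across the period is a simplifying assumption.
domestic_inspections_per_year = 1494
foreign_inspections_per_year = 247
registered_domestic = 5616
registered_foreign = 4983

domestic_rate = domestic_inspections_per_year / registered_domestic
foreign_rate = foreign_inspections_per_year / registered_foreign

print(f"domestic: {domestic_rate:.0%}")  # about 27 percent
print(f"foreign: {foreign_rate:.0%}")    # about 5 percent
```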
Despite its focus on domestic inspections, FDA has not met the statutory requirement to inspect domestic establishments manufacturing class II or III medical devices every 2 years. For domestic establishments, FDA officials estimated that, on average, the agency inspects class II manufacturers every 5 years and class III manufacturers every 3 years. For foreign establishments—for which there is no comparable inspection requirement—FDA officials estimated that the agency inspects class II manufacturers every 27 years and class III manufacturers every 6 years. FDA’s inspections of medical device establishments, both domestic and foreign, are primarily postmarket inspections. While premarket inspections are generally FDA’s highest priority, relatively few have to be performed in any given year. Therefore, FDA focuses its resources on postmarket inspections. From fiscal year 2002 through fiscal year 2007, 95 percent of the 8,962 domestic establishment inspections and 89 percent of the 1,481 foreign establishment inspections were for postmarket purposes. (See fig. 2.) FDA’s databases on registration and imported products provide divergent estimates regarding the number of foreign medical device manufacturing establishments. DRLS provides FDA with information about domestic and foreign medical device establishments and the products they manufacture for the U.S. market. According to DRLS, as of September 2007, 5,616 domestic and 4,983 foreign establishments that reported manufacturing a class II or III medical device for the U.S. market had registered with FDA. However, these data contain inaccuracies because establishments may register with FDA but not actually manufacture a medical device or may manufacture a medical device that is not marketed in the United States. FDA officials told us that their more frequent inspections of domestic establishments allow them to more easily update information about whether a domestic establishment is subject to inspection. 
In addition to DRLS, FDA obtains information on foreign establishments from OASIS, which tracks the import of medical devices. While not intended to provide a count of establishments, OASIS does contain information about the medical devices actually being imported into the United States and the establishments manufacturing them. However, inaccuracies in OASIS prevent FDA from using it to develop a list of establishments subject to inspection. OASIS contains duplicate records for a single establishment because of inaccurate data entry by customs brokers at the border. According to OASIS, in fiscal year 2007, there were as many as 22,008 foreign establishments that manufactured class II medical devices for the U.S. market and 3,575 foreign establishments that manufactured class III medical devices for the U.S. market. Despite the divergent estimates of foreign establishments generated by DRLS and OASIS, FDA does not routinely verify the data within each database. Although comparing information from these two databases could help FDA determine the number of foreign establishments marketing medical devices in the United States, the databases cannot exchange information to be compared electronically and any comparisons are done manually. Efforts are underway that could improve FDA’s databases. FDA officials suggested that, because manufacturers are now required to pay an annual establishment registration fee, manufacturers may be more concerned about the accuracy of the registration data they submit. They also told us that, because of the registration fee, manufacturers may be less likely to register if they do not actually manufacture a medical device for the U.S. market. In addition, FDA officials stated that the agency is pursuing various initiatives to try to address the inaccuracies in OASIS, such as providing a unique identifier for each foreign establishment to reduce duplicate entries for individual establishments. 
Inspections of foreign establishments pose unique challenges to FDA—both in human resources and logistics. FDA does not have a dedicated cadre of investigators who conduct only foreign medical device establishment inspections; those staff who inspect foreign establishments also inspect domestic establishments. Among those qualified to inspect foreign establishments, FDA relies on staff to volunteer to conduct inspections. FDA officials told us that it is difficult to recruit investigators to voluntarily travel to certain countries. However, they added that if the agency could not find an individual to volunteer for a foreign inspection trip, it would mandate the travel. Logistically, foreign medical device establishment inspections are difficult to extend even if problems are identified because the trips are scheduled in advance. Foreign medical device establishment inspections are also logistically challenging because investigators do not receive independent translation support from FDA or the State Department and may rely on English-speaking employees of the inspected establishment or the establishment’s U.S. agent to translate during an inspection. Few inspections of medical device manufacturing establishments have been conducted through FDA’s two accredited third-party inspection programs—the Accredited Persons Inspection Program and PMAP. FDAAA specified several changes to the requirements for inspections by accredited third parties that could result in increased participation by manufacturers. Few inspections have been conducted through FDA’s Accredited Persons Inspection Program since March 11, 2004—the date when FDA first cleared an accredited organization to conduct independent inspections. Through January 11, 2008, five inspections had been conducted independently by accredited organizations (two inspections of domestic establishments and three inspections of foreign establishments), an increase of three since we reported on this program one year ago. 
As of January 11, 2008, 16 third-party organizations were accredited, and individuals from 8 of these organizations had completed FDA’s training requirements and been cleared to conduct independent inspections. As of January 8, 2008, FDA and accredited organizations had conducted 44 joint training inspections. Fewer manufacturers volunteered to host training inspections than were needed for all of the accredited organizations to complete their training. Moreover, scheduling these joint training inspections has been difficult. FDA officials told us that, when appropriate, staff are instructed to ask manufacturers to host a joint training inspection at the time they notify the manufacturers of a pending inspection. FDA schedules inspections a relatively short time prior to an actual inspection, and as we reported in January 2007, some accredited organizations have not been able to participate because they had prior commitments. As we reported in January 2007, manufacturers’ decisions to request an inspection by an accredited organization might be influenced by both potential incentives and disincentives. According to FDA officials and representatives of affected entities, potential incentives to participation include the opportunity to reduce the number of inspections conducted to meet FDA and other countries’ requirements. For example, one inspection conducted by an accredited organization was a single inspection designed to meet the requirements of FDA, the European Union, and Canada. Another potential incentive mentioned by FDA officials and representatives of affected entities is the opportunity to control the scheduling of the inspection by working with the accredited organization. FDA officials and representatives of affected entities also mentioned potential disincentives to having an inspection by an accredited organization. 
These potential disincentives include bearing the cost for the inspection, doubts about whether accredited organizations can cover multiple requirements in a single inspection, and uncertainty about the potential consequences of an inspection that otherwise may not occur in the near future—consequences that could involve regulatory action. Changes specified by FDAAA have the potential to eliminate certain obstacles to manufacturers’ participation in FDA’s programs for inspections by accredited third parties that were associated with manufacturers’ eligibility. For example, an eligibility requirement that foreign establishments be periodically inspected by FDA was eliminated. Representatives of the two organizations that represent medical device manufacturers with whom we spoke about FDAAA told us that the changes in eligibility requirements could eliminate certain obstacles and therefore potentially increase their participation. These representatives also noted that key incentives and disincentives to manufacturers’ participation remain. FDA officials told us that they are currently revising their guidance to industry in light of FDAAA and expect to issue the revised guidance during fiscal year 2008. It is too soon to tell what impact these changes will have on manufacturers’ participation. FDA officials acknowledged that manufacturers’ participation in the Accredited Persons Inspection Program has been limited. In December 2007, FDA established a working group to assess the successes and failures of this program and to identify ways to increase participation. Representatives of the two organizations that represent medical device manufacturers with whom we recently spoke stated that they believe manufacturers remain interested in the Accredited Persons Inspection Program. 
The representative of one large, global manufacturer of medical devices told us that it is in the process of arranging to have 20 of its domestic and foreign device manufacturing establishments inspected by accredited third parties. As of January 11, 2008, two inspections, both of domestic establishments, had been conducted through PMAP, FDA’s second program for inspections by accredited third parties. Although it is too soon to tell what the benefits of PMAP will be, the program is more limited than the Accredited Persons Inspection Program and may pose additional disincentives to participation by both manufacturers and accredited organizations. Specifically, inspections through PMAP would be designed to meet the requirements of the United States and Canada, whereas inspections conducted through the Accredited Persons Inspection Program could be designed to meet the requirements of other countries. In addition, two of the five representatives of affected entities noted that in contrast to inspections conducted through the Accredited Persons Inspection Program, inspections conducted through PMAP could undergo additional review by Health Canada. Health Canada will review inspection reports submitted through this pilot program to ensure they meet its standards. This extra review poses a greater risk of unexpected outcomes for the manufacturer and the accredited organization, which could be a disincentive to participation in PMAP that is not present with the Accredited Persons Inspection Program. Americans depend on FDA to ensure the safety and effectiveness of medical products, including medical devices, manufactured throughout the world. However, our findings regarding inspections of medical device manufacturers indicate weaknesses that mirror those presented in our November 2007 testimony regarding inspections of foreign drug manufacturers. 
In addition, they are consistent with the FDA Science Board’s findings that FDA’s ability to fulfill its regulatory responsibilities is jeopardized, in part, by information technology and human resources challenges. We recognize that FDA has expressed the intention to improve its data management, but it is too early to tell whether the intended changes will ultimately enhance the agency’s ability to manage its inspection programs. We and others have suggested that the use of accredited third parties could improve FDA’s ability to meet its inspection responsibilities. However, the implementation of its programs for inspecting medical device manufacturers has resulted in little progress. To date, its programs for inspections by accredited third parties have not assisted FDA in meeting its regulatory responsibilities, nor have they provided a rapid or substantial increase in the number of inspections performed by these organizations, as originally intended. Although recent statutory changes to the requirements for inspections by accredited third parties may encourage greater participation in these programs, the lack of meaningful progress raises questions about the practicality and effectiveness of establishing similar programs that rely on third parties to quickly help FDA fulfill other responsibilities. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or the other Members of the subcommittee may have at this time. For further information about this testimony, please contact Marcia Crosse at (202) 512-7114 or crossem@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Geraldine Redican-Bigott, Assistant Director; Kristen Joan Anderson; Katherine Clark; Robert Copeland; William Hadley; Cathy Hamann; Mollie Hertel; Julian Klazkin; Lisa Motley; Daniel Ries; and Suzanne Worth made key contributions to this testimony. 
In congressional testimony in November 2007, we presented our preliminary findings on the Food and Drug Administration’s (FDA) program for inspecting foreign drug manufacturers. We found that (1) FDA’s effectiveness in managing the foreign drug inspection program continued to be hindered by weaknesses in its databases; (2) FDA inspected relatively few foreign establishments; and (3) the foreign inspection process involved unique circumstances that were not encountered domestically. Our preliminary findings indicated that more than 9 years after we issued our last report on FDA’s foreign drug inspection program, FDA’s effectiveness in managing this program continued to be hindered by weaknesses in its databases. FDA did not know how many foreign establishments were subject to inspection. Instead of maintaining a list of such establishments, FDA relied on information from several databases that were not designed for this purpose. One of these databases contained information on foreign establishments that had registered to market drugs in the United States, while another contained information on drugs imported into the United States. One database indicated about 3,000 foreign establishments could have been subject to inspection in fiscal year 2007, while another indicated that about 6,800 foreign establishments could have been subject to inspection in that year. Despite the divergent estimates of foreign establishments subject to inspection generated by these two databases, FDA did not verify the data within each database. For example, the agency did not routinely confirm that a registered establishment actually manufactured a drug for the U.S. market. However, FDA used these data to generate a list of 3,249 foreign establishments from which it prioritized establishments for inspection. 
Because FDA was not certain how many foreign drug establishments were actually subject to inspection, the percentage of such establishments that had been inspected could not be calculated with certainty. We found that FDA inspected relatively few foreign drug establishments, as shown in table 2. Using the list of 3,249 foreign drug establishments from which FDA prioritized establishments for inspection, we found that the agency may inspect about 7 percent of foreign drug establishments in a given year. At this rate, it would take FDA more than 13 years to inspect each foreign drug establishment on this list once, assuming that no additional establishments are subject to inspection. FDA’s data indicated that some foreign drug manufacturers had not received an inspection, but FDA could not provide the exact number of foreign drug establishments that had never been inspected. Most of the foreign drug inspections were conducted as part of processing a new drug application or an abbreviated new drug application, rather than as current good manufacturing practices (GMP) surveillance inspections, which are used to monitor the quality of marketed drugs. FDA used a risk-based process, based in part on data from its registration and import databases, to develop a prioritized list of foreign drug establishments for GMP surveillance inspections in fiscal year 2007. According to FDA, about 30 such inspections were completed in fiscal year 2007, and at least 50 were targeted for inspection in fiscal year 2008. Further, inaccuracies in the data on which this risk-based process depended limited its effectiveness. Finally, the very nature of the foreign drug inspection process involved unique circumstances that were not encountered domestically. For example, FDA did not have a dedicated staff to conduct foreign drug inspections and relied on those inspecting domestic establishments to volunteer for foreign inspections. 
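The "more than 13 years" estimate above is simple arithmetic on the roughly 7 percent annual inspection rate. A sketch under the stated assumptions that no additional establishments become subject to inspection and no establishment on the list is inspected twice:

```python
# At a constant annual rate r (expressed as a share of the fixed list
# of establishments), covering every establishment once takes about
# 1/r years.
listed_establishments = 3249
annual_rate = 0.07  # about 7 percent of the list inspected per year

years_to_inspect_all = 1 / annual_rate
print(f"{years_to_inspect_all:.1f} years")  # about 14.3, i.e., more than 13
```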
While FDA may conduct unannounced GMP inspections of domestic establishments, it did not arrive unannounced at foreign establishments. It also lacked the flexibility to easily extend foreign inspections if problems were encountered due to the need to adhere to an itinerary that typically involved multiple inspections in the same country. Finally, language barriers can make foreign inspections more difficult to conduct than domestic ones. FDA did not generally provide translators to its inspection teams. Instead, they may have had to rely on an English-speaking representative of the foreign establishment being inspected, rather than an independent translator. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
As part of the Food and Drug Administration's (FDA) oversight of the safety and effectiveness of medical devices marketed in the United States, it inspects domestic and foreign establishments where these devices are manufactured. To help FDA address shortcomings in its inspection program, the Medical Device User Fee and Modernization Act of 2002 required FDA to accredit third parties to inspect certain establishments. In response, FDA has implemented two such voluntary programs. GAO previously reported on the status of one of these programs, citing concerns regarding its implementation and factors that may influence manufacturers' participation. (Medical Devices: Status of FDA's Program for Inspections by Accredited Organizations, GAO-07-157, January 2007.) This statement (1) assesses FDA's management of inspections of establishments--particularly those in foreign countries--manufacturing devices for the U.S. market, and (2) provides the status of FDA's programs for third-party inspections of medical device manufacturing establishments. GAO interviewed FDA officials; reviewed pertinent statutes, regulations, guidance, and reports; and analyzed information from FDA databases. GAO also updated its previous work on FDA's programs for inspections by accredited third parties. FDA has not met the statutory requirement to inspect certain domestic establishments manufacturing medical devices every 2 years, and the agency faces challenges inspecting foreign establishments. FDA primarily inspected establishments located in the United States. The agency has not met the biennial inspection requirement for domestic establishments manufacturing medical devices that FDA has classified as high risk, such as pacemakers, or medium risk, such as hearing aids. FDA officials estimated that the agency has inspected these establishments every 3 years (for high risk devices) or 5 years (for medium risk devices). 
There is no comparable requirement to inspect foreign establishments, and agency officials estimate that these establishments have been inspected every 6 years (for high risk devices) or 27 years (for medium risk devices). FDA faces challenges in managing its inspections of foreign medical device establishments. Two databases that provide FDA with information about foreign medical device establishments and the products they manufacture for the U.S. market contain inaccuracies that create disparate estimates of establishments subject to FDA inspection. Although comparing information from these two databases could help FDA determine the number of foreign establishments marketing medical devices in the United States, these databases cannot exchange information and any comparisons must be done manually. Finally, inspections of foreign medical device manufacturing establishments pose unique challenges to FDA in human resources and logistics. Few inspections of medical device manufacturing establishments have been conducted through FDA's two accredited third-party inspection programs--the Accredited Persons Inspection Program and the Pilot Multi-purpose Audit Program (PMAP). From March 11, 2004--the date when FDA first cleared an accredited organization to conduct independent inspections--through January 11, 2008, five inspections have been conducted by accredited organizations through FDA's Accredited Persons Inspection Program. An incentive to participation in the program is the opportunity to reduce the number of inspections conducted to meet FDA and other countries' requirements. Disincentives include bearing the cost for the inspection, particularly when the consequences of an inspection that otherwise might not occur in the near future could involve regulatory action. The Food and Drug Administration Amendments Act of 2007 made several changes to program eligibility requirements that could result in increased participation by manufacturers. 
PMAP was established on September 7, 2006, and as of January 11, 2008, two inspections had been conducted by an accredited organization through this program, which is more limited than the Accredited Persons Inspection Program. The small number of inspections completed to date by accredited third-party organizations raises questions about the practicality and effectiveness of establishing similar programs that rely on third parties to quickly help FDA fulfill its responsibilities.
IRS founded the Problem Resolution Program (PRP) in 1976 to provide an independent means of ensuring that taxpayers’ unresolved problems were promptly and properly handled. Initially, PRP units were established in IRS district offices and, in 1979, PRP was expanded to include the service centers. In late 1979, IRS created the position of Taxpayer Ombudsman to head PRP. In 1996, Congress replaced the Ombudsman’s position with what is now the National Taxpayer Advocate. The goals of PRP are consistent with IRS’ mission of providing quality service to taxpayers by helping them meet their tax responsibilities and by applying the tax laws fairly. PRP’s first goal is to assist taxpayers who cannot get their problems resolved through normal IRS channels or who are suffering significant hardships. For example, local advocate offices can expedite tax refunds or stop enforcement actions for taxpayers experiencing significant hardships. During fiscal year 1998, PRP closed more than 300,000 cases, of which about 10 percent involved potential hardships. The second goal of PRP is to determine the causes of taxpayer problems so that systemic causes can be identified and corrected and to propose legislative changes that might help alleviate taxpayer problems. IRS commonly refers to this process as advocacy. The third goal of PRP is to represent the taxpayers’ interests in the formulation of IRS’ policies and procedures. IRS has a taxpayer advocate in each of its 4 regional offices and has local advocates in its 33 district offices, 30 former district offices, and 10 service centers. The National Taxpayer Advocate has responsibility for the overall management of PRP, and regional and local advocates have responsibility for managing PRP at their respective levels. The Office of the Taxpayer Advocate funds the advocate positions; the staff in advocate offices at all levels; and other resources for advocate offices. 
PRP assistance to taxpayers who cannot get their problems resolved through normal IRS channels is done by employees called caseworkers, who are not part of the Advocate’s Office. They are in IRS’ functional units—mainly customer service, collection, and examination—in the district offices and service centers. Most PRP resources, including caseworkers, are funded by the functions, and about 80 percent of the caseworkers report to functional managers, not local advocates. Some offices, however, had a centralized structure in which PRP casework was done by employees who were funded by the functions but reported to the local taxpayer advocate. Formerly, regional and local advocates were selected by and reported to the director of the regional, district, or service center office where they worked. However, in response to a requirement in the IRS Restructuring and Reform Act of 1998, regional advocates are now selected by and report to the National Taxpayer Advocate or his or her designee; and local advocates are now selected by and report to regional advocates. Additionally, last October, IRS began moving to a more centralized reporting structure for the caseworkers—in which they would report to local advocates instead of functional management. IRS officially assigned those caseworkers who were already reporting to local advocates—about 20 percent of the caseworkers—to local advocate offices. In addition, IRS is developing an implementation plan to have the remaining 80 percent of the caseworker positions assigned to local advocate offices this year. IRS plans to submit budget requests that reflect these staffing changes by transferring funds for caseworkers to the Advocate’s Office. During fiscal year 1998, the staffing level of the Advocate’s Office increased from 428 to 584 authorized positions. Our survey showed that, as of June 1, 1998, the Advocate’s Office had 508 on-board staff. 
At the same time, there were about 1,500 functional employees doing PRP casework in IRS’ field offices. Advocate staff worked on, among other things, sensitive cases; cases involving taxpayer hardship; and advocacy work, such as identifying IRS procedures that cause recurring taxpayer problems. Caseworkers worked on resolving individual taxpayer problems as well as participating in some advocacy efforts. During times of high casework levels, many Advocate’s Office staff are required to do casework in addition to their other duties. The first challenge facing IRS and the National Taxpayer Advocate is the need to address staffing and operational issues while ensuring the independence of the Advocate’s Office. Staffing and operational issues, such as resource allocation, training, and staff selection, are commonplace in most organizations. However, dealing with these issues could prove more challenging for the Advocate’s Office because of the need for PRP to be independent from the IRS operations that have been unsuccessful in resolving taxpayers’ problems. Independence—actual and apparent—is important because, among other things, it helps promote taxpayer confidence in PRP. A key staffing and operational issue is developing an implementation plan for bringing all caseworkers into the Advocate’s Office that includes operational mechanisms that will give PRP the potential benefits of both a reliance on the functions and a separate operation. According to IRS officials, having the caseworkers in the functions may have facilitated caseworker training and the handling of workload fluctuations; however, this arrangement may also have led to the perception that PRP was not an independent program. In addition, as we will discuss later, this organizational arrangement may have contributed to some of the other PRP staffing and operational issues. 
Another, but related, staffing and operational issue is capturing information about resource usage that advocates need to manage PRP. Some local advocates told us that the lack of control over PRP resources, including staff, made it difficult to manage PRP operations. Advocates do not know the full staffing levels or the total cost of resources devoted to PRP, because IRS does not have a standard system to track PRP resources. Instead, each function tracks its resources differently. The absence of this type of management information yields an incomplete picture of program operations, places limitations on decision-making, and hinders the identification of matters requiring management attention. In addition, having this basic program information would improve the National Taxpayer Advocate’s ability to estimate the resources needed in the restructured Advocate’s Office. Providing appropriate training is also an issue. It is important that caseworkers and other staff receive adequate training if they are going to be able to help taxpayers resolve their problems and effectively work on advocacy efforts. Our survey of IRS staff who were doing advocate office work showed that training has been inconsistent throughout the Advocate’s Office and among PRP caseworkers. For example, as of June 1, 1998, more than half of the PRP caseworkers had not completed a formal PRP training course for their current position. Caseworkers should be trained in both functional responsibilities and PRP operations. Functional training, such as training in tax law changes, is important because resolving taxpayer problems requires that caseworkers understand the tax law affecting a particular case. Historically, because caseworkers were usually functional employees, they routinely received training in functional matters. The National Taxpayer Advocate is faced with ensuring that caseworkers continue to receive needed functional training even if they are no longer functional employees. 
In this regard, the National Taxpayer Advocate is considering whether to implement a cross-functional training program for caseworkers that would provide training in multiple IRS functions. IRS officials told us that this would broaden caseworker skills and might provide faster and more accurate service to taxpayers.

Acquiring qualified PRP caseworkers has been an issue. In the past, the quality of caseworkers depended on the office and the function that assigned the caseworkers to PRP. Local advocates told us that they had no assurance that the functions would provide PRP with qualified staff. It is important for the Advocate’s Office to develop mechanisms to ensure that qualified caseworkers are selected so that program goals are met. Once the Advocate’s Office is no longer dependent upon the functions for its staff, it can implement a competitive selection process for PRP caseworkers that should help ensure that it gets the staff it needs.

As IRS restructures the Advocate’s Office, it must consider how best to handle workload fluctuations. Over the past 18 months, the Advocate’s Office and PRP’s workloads have increased. Factors that have affected and could continue to affect workload include increased media attention, the introduction of a toll-free telephone number for taxpayers to call PRP, and Problem Solving Days. Historically, PRP has relied on the functions to provide additional staff to cover workload increases. However, as the office is moving toward a structure that would place all caseworkers in the Advocate’s Office, this source of additional caseworkers may no longer be available. Many local advocates told us that it would be difficult to handle workload fluctuations without the traditional ability to obtain additional caseworkers from functional units. Workload increases may also make it necessary for the Advocate’s Office to decide which cases to address with PRP resources.
That is, some taxpayers who seek help from PRP may have to be referred to other IRS offices. Local advocates told us that workload increases could compromise PRP’s ability to help taxpayers. For example, an increase in the number of PRP cases could negatively affect the timeliness and quality of PRP casework.

IRS has three criteria for deciding what qualifies as a PRP case. The first two criteria are specific—(1) any contact by a taxpayer on the same issue at least 30 days after the initial contact and (2) no response to the taxpayer by a promised date. However, the third criterion—any contact that indicates established systems have failed to resolve the taxpayer problem, or when it is in the best interest of the taxpayer or IRS to resolve the problem in PRP—is broad enough to encompass virtually any taxpayer contact. We understand why the Advocate’s Office would not want to turn away any taxpayer. However, if PRP accepts cases that could be handled elsewhere in IRS, the program could be overburdened, potentially reducing PRP’s ability to help taxpayers who have nowhere else to go to resolve their problems.

The second challenge facing IRS and the National Taxpayer Advocate is to strengthen advocacy efforts within the Advocate’s Office. Advocacy efforts are key to the success of the Advocate’s Office because the improvements they generate can reduce the number of taxpayers who ultimately require help from PRP. Ideas for advocacy efforts are generated at the national, regional, and local levels. These efforts are aimed at eliminating deficiencies in IRS’ processes and procedures that cause recurring problems. Through advocacy efforts, the National Taxpayer Advocate can recommend changes to the Commissioner, IRS functions, and Congress to improve IRS operations and address provisions in law that may be causing undue burden to taxpayers.
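The three PRP case criteria above amount to a simple decision rule. The following is a minimal sketch of that rule, not IRS code; the function and parameter names are hypothetical, and it exists only to illustrate how broad the third criterion is relative to the first two.

```python
from datetime import date, timedelta

def qualifies_for_prp(contact_date, initial_contact_date, same_issue,
                      promised_response_date, response_sent,
                      systems_failed_or_best_interest):
    """Illustrative sketch (not IRS code) of the three PRP case criteria."""
    # Criterion 1: repeat contact on the same issue at least 30 days
    # after the initial contact.
    if same_issue and (contact_date - initial_contact_date) >= timedelta(days=30):
        return True
    # Criterion 2: no response to the taxpayer by a promised date.
    if (promised_response_date is not None and not response_sent
            and contact_date > promised_response_date):
        return True
    # Criterion 3: established systems failed, or resolving the problem in
    # PRP is in the best interest of the taxpayer or IRS. As the report
    # notes, this flag can be set for virtually any contact.
    return systems_failed_or_best_interest
```

Because the first two criteria are objective dates while the third is a judgment call, almost any contact can qualify by the third branch alone, which is the overburdening risk the report describes.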
The Advocate’s Office has taken steps to promote advocacy, such as implementing regional advocacy councils and identifying strategies to increase awareness of advocacy within IRS. The Advocate’s Office has encouraged the functions to play a greater role in assisting taxpayers and improving procedures to reduce taxpayer compliance burden. For example, the Advocate’s Office is working with functional management through an executive-level group—called the Taxpayer Equity Task Force—to develop ways to strengthen equity and fairness in tax administration. The Task Force consists of a cross section of executives from IRS’ functions and staff from the Advocate’s Office. It was established to “fast-track” potential administrative changes and legislative proposals recommended to the National Taxpayer Advocate.

However, Advocate’s Office staff and PRP caseworkers told us that they were spending only a minimal amount of time on advocacy. In that regard, our survey showed that as of June 1, 1998, advocates and their staffs were spending about 10 percent of their time on advocacy, and PRP caseworkers were spending less than 1 percent of their time on advocacy. Advocate’s Office staff and PRP caseworkers told us that increased casework limited the time they could spend on advocacy. We understand the need to give priority to casework over advocacy when there is not enough time to do both. The National Taxpayer Advocate’s ability to deal with these competing priorities is hampered, however, by the absence of (1) a systematic and coordinated approach for conducting advocacy efforts and (2) data with which to prioritize potential advocacy work.

To provide information on advocacy to field offices, the Advocate’s Office has developed a list of ongoing advocacy projects. However, the list includes only national-level projects; there is no corresponding list of local efforts, even though those efforts could be addressing issues with agencywide implications.
Advocacy staff told us that because there is no system for sharing information on local advocacy efforts, there is some duplication of effort among field offices. Additionally, field staff told us that there is no system that provides feedback on the status of advocacy recommendations. For example, in one district, staff told us that they forwarded the same recommendations to the Advocate’s Office over the course of several years but never received feedback on what actions, if any, were taken on those recommendations. The Advocate’s Office also has not identified its top advocacy priorities, and it has no way to determine the actual impact of its advocacy efforts. Without such information, the National Taxpayer Advocate does not know which advocacy efforts have the greatest potential to reduce taxpayers’ compliance burden.

The third challenge facing IRS and the National Taxpayer Advocate is to develop performance measures to be used in managing operations and assessing the effectiveness of the Office of the Taxpayer Advocate and PRP. Developing measures of effectiveness is a difficult undertaking for any organization because it requires that management shift its focus away from descriptive information on staffing, activity levels, and tasks completed. Instead, management must focus on the impact its programs have on its customers.

Currently, the Advocate’s Office uses four program measures, but they do not produce all of the information needed to assess program effectiveness. The first two measures—the average length of time it takes to process a PRP case and the currency of PRP inventory—describe program activity. While these two measures are useful for some program management decisions, such as the number of staff needed at a specific office, they do not provide information on how effectively PRP is operating.
The third measure, PRP case identification and tracking, attempts to determine if potential PRP cases are properly identified from incoming service center correspondence and subsequently worked by PRP. This measure is an important tool to help the National Taxpayer Advocate know whether PRP actually serves those taxpayers who need and qualify for help from the program. However, a recent review of this measure by IRS’ Office of Internal Audit found, among other things, that inconsistent data collection for the measure could affect the integrity and reliability of the measure’s results. Also, the measure is designed for use only at service centers; there is no similar measure for use at district offices, resulting in an incomplete picture of whether taxpayers are being properly identified and subsequently referred to PRP. PRP’s fourth measure—designed to determine the quality of PRP casework—provides some data on program effectiveness. This measure is based on a statistically valid sample of PRP cases and provides the National Taxpayer Advocate with data on timeliness and the technical accuracy of PRP cases. Among other things, selected PRP cases are checked to determine whether the caseworker contacted the taxpayer by a promised date, whether copies of any correspondence with the taxpayer appeared to communicate issues clearly, and whether the taxpayer’s problem appeared to be completely resolved. Caseworkers and advocate staff in the field told us that the quality measure was helpful because the elements that are reviewed provide a checklist for working PRP cases. According to staff, this helps ensure that most cases are worked in a similar manner in accordance with standard elements. The quality measure, however, does not have a customer satisfaction component. The Advocate’s Office is piloting a method for collecting customer satisfaction data, but the results of this effort are unknown. 
Because IRS does not collect customer satisfaction data from taxpayers who contacted PRP, the National Taxpayer Advocate does not know if taxpayers are satisfied with PRP services or whether taxpayers considered their problems solved. The National Taxpayer Advocate has the formidable task of developing measures that will provide useful data for improving program performance, increasing accountability, and supporting decisionmaking. To be comprehensive, these measures should cover the full range of Advocate Office operations, including taxpayer satisfaction with PRP services and the effectiveness of advocacy efforts in reducing taxpayer compliance burden.
Pursuant to a congressional request, GAO discussed the challenges facing the Internal Revenue Service's (IRS) Office of the Taxpayer Advocate, focusing on IRS' need to: (1) address complex staffing and operational issues within the Advocate's Office; (2) strengthen efforts within the Advocate's Office to determine the causes of taxpayer problems; and (3) develop performance measures that the National Taxpayer Advocate needs to manage operations and measure effectiveness. GAO noted that: (1) IRS and the National Taxpayer Advocate need to address staffing and operational issues while ensuring the independence of the Advocate's Office; (2) a key staffing and operational issue is developing an implementation plan for bringing all caseworkers into the Advocate's Office that includes operational mechanisms that will give the Problem Resolution Program (PRP) the potential benefits of both a reliance on the functions and a separate operation; (3) another staffing and operational issue is capturing information about resource usage that advocates need to manage PRP; (4) providing appropriate training is also an issue; (5) it is important that caseworkers and other staff receive adequate training; (6) caseworkers should be trained in both functional responsibilities and PRP operations; (7) it is important for the Advocate's Office to develop mechanisms to ensure that qualified caseworkers are selected so that the program goals are met; (8) as IRS restructures the Advocate's Office, it must consider how best to handle workload fluctuations; (9) IRS and the National Taxpayer Advocate need to strengthen advocacy efforts within the Advocate's Office; (10) the Advocate's Office has taken steps to promote advocacy, such as implementing regional advocacy councils and identifying strategies to increase awareness of advocacy within IRS; (11) the Advocate's Office has encouraged the functions to play a greater role in assisting taxpayers and improving procedures to reduce taxpayer 
compliance burden; (12) IRS and the National Taxpayer Advocate need to develop performance measures to be used in managing operations and assessing the effectiveness of the Taxpayer Advocate and PRP; (13) management must focus on the impact its programs have on its customers; (14) the National Taxpayer Advocate has the formidable task of developing measures that will provide useful data for improving program performance, increasing accountability, and supporting decisionmaking; and (15) to be comprehensive, these measures should cover the full range of Advocate Office operations, including taxpayer satisfaction with PRP services and the effectiveness of advocacy efforts in reducing taxpayer compliance burden.
Federal agencies conduct a variety of procurements that are reserved for small business participation (through small business set-aside and sole-source opportunities, hereafter called set-asides). The set-asides can be for small businesses in general or be specific to small businesses meeting additional eligibility requirements in the Service-Disabled Veteran-Owned Small Business Concern (SDVOSBC), Historically Underutilized Business Zone (HUBZone), 8(a) Business Development, and Women-Owned Small Business (WOSB) programs.

The WOSB program, which started operating in 2011, has requirements that pertain to the sectors in which set-asides can be offered as well as eligibility requirements for businesses. That is, set-aside contracts under the WOSB program can only be made in certain industries in which WOSBs were substantially underrepresented and economically disadvantaged WOSBs (EDWOSB) underrepresented, according to the program regulation. Additionally, only certain businesses are eligible to participate in the WOSB program. The business must be at least 51 percent owned and controlled by one or more women. The owner must provide documents demonstrating that the business meets program requirements, including submitting a document in which the owner attests to the business’s status as a WOSB or EDWOSB. The program’s authorizing statute directs that each business either be certified by a third party or self-certified by the business owner. SBA’s final rule includes these two methods. Self-certification is free; businesses pay a fee for third-party certification.

A third-party certifier is a federal agency, state government, or national certifying entity approved by SBA to provide certifications of WOSBs or EDWOSBs. To be approved as certifiers, interested organizations submit an application to SBA that contains information on the organization’s structure and staff, policies and procedures for certification, and attestations that they will adhere to program requirements.
SBA has approved four organizations to act as third-party certifiers: El Paso Hispanic Chamber of Commerce; National Women Business Owners Corporation; U.S. Women’s Chamber of Commerce; and Women’s Business Enterprise National Council. The most active certifier is the Women’s Business Enterprise National Council (WBENC), which completed about 76 percent of all WOSB third-party certifications performed from August 2011 through May 2014. To conduct the certifications, WBENC uses 14 regional partner organizations. The fees for certification vary depending on a WOSB’s gross annual sales, membership status in the certifying organization, and geographic location (see table 1). Businesses that seek a WOSB program certification through WBENC’s partner organizations and pay for a Women’s Business Enterprise certification (used for private-sector or some local, state, and federal procurement, but not for the WOSB program) can receive WOSB program certifications at no additional cost. We discuss the WOSB certification process in greater detail later in this report.

SBA’s Office of Government Contracting administers the WOSB program by publishing regulations for the program, conducting eligibility examinations of businesses that received contracts under the WOSB or EDWOSB set-aside, deciding protests related to eligibility for a WOSB program contract award, conducting studies to determine eligible industries, and working with other federal agencies in assisting WOSBs and EDWOSBs. According to SBA officials, the agency also works at the regional and local levels with its Small Business Development Centers, district offices, and other organizations (such as Procurement Technical Assistance Centers) to assist WOSBs and EDWOSBs to receive contracts with federal agencies.
The services SBA coordinates with these offices and organizations include training, counseling, mentoring, access to information about federal contracting opportunities, and business financing.

According to the program regulation, businesses may use self- or third-party certification to demonstrate they are eligible for WOSB or EDWOSB status. Both certification processes require signed representations by businesses about their WOSB or EDWOSB eligibility. For this reason, SBA has described all participants in the program as self-certified. When using the self-certification option, businesses must provide documents supporting their status to the online document repository for the WOSB program that SBA maintains. Required submissions include copies of citizenship papers (birth or naturalization certificates or passports) and, depending on business type, items including copies of partnership agreements or articles of incorporation. Businesses must submit a signed certification on which the owners attest that the documents and information provided are true and accurate. Moreover, businesses must register and attest to being a WOSB in the System for Award Management (SAM), the primary database of vendors doing business with the federal government. Businesses also must make representations about their status in SAM before submitting an offer on a WOSB or EDWOSB solicitation.

For third-party certification, businesses submit documentation to approved certifiers. According to third-party certifiers we interviewed, they review documents (and some may conduct site visits to businesses) and make determinations of eligibility. If approved, businesses will receive a document showing receipt of third-party certification. Businesses then can upload the certificate to the WOSB program repository along with documents supporting their EDWOSB or WOSB status.
SBA does not track the number of businesses that self-certify and could not provide information on how many self-certified businesses obtained contracts under the WOSB program. While SBA can look at an individual business profile in the repository—which lists the documents the business has uploaded to support its eligibility—to determine if a certificate from a third-party certifier is present, it has no corresponding mechanism to determine if a business lacking such a certificate was self-certified. That is, there are no data fields for certification type in any of the systems used in the program, and SBA cannot generate reports to isolate information on certification type by business. According to SBA officials, such information on certification type is not needed because both certification options are treated equally under the program and because all businesses make an attestation of status as a WOSB whether or not the business uses a third-party certifier. Therefore, SBA considers this a self-certification program.

Contracting officers obtain a solicitation and conduct market research to identify businesses potentially capable of filling contract requirements. Once a contracting officer has determined that a solicitation can be set aside under the WOSB program, the officer obtains bids and selects an awardee for the contract. Only after selecting an awardee does the agency obtain access to the business’s profile in the WOSB program repository, which lists the documents the business has uploaded to support its eligibility (the business must grant the contracting agency access). SBA’s Contracting Officer’s Guide to the WOSB Program states that contracting officers must determine that specified documents have been uploaded by the business to the program repository, but the guide does not require contracting officers to assess the validity of those documents.
Only after viewing the uploaded documents would the contracting officer be able to determine whether the business was likely self-certified or had a certificate from a third-party certifier.

Two groups we interviewed that represent the interests of WOSBs said that contracting officers prefer third-party-certified over self-certified businesses when selecting an awardee. A representative of one organization thought that contracting officers tended to select businesses with third-party certifications because they did not have to review as many documents in the program repository as for self-certified businesses. However, the certification method does not appear to influence contract awards. According to officials from all contracting agencies with whom we spoke and SBA officials, contracting staff are unaware of the certification method used by a business until after an awardee is selected.

SBA generally has not overseen third-party certifiers and lacks reasonable assurance that only eligible businesses receive WOSB set-aside contracts. SBA has not put in place formal policies to review the performance of third-party certifiers, including their compliance with a requirement to inform businesses of the no-cost, self-certification option. The agency has not developed formal policies and procedures for reviewing required monthly reports submitted to SBA by certifiers or standardized reporting formats for the certifiers, or addressed most issues raised in the reports. Although SBA examinations have found high rates of ineligibility among a sample of businesses that previously received set-aside contracts, SBA has not determined the causes of ineligibility or made changes to its oversight of certifications to better ensure that only eligible businesses participate in the program. To date, SBA generally has not conducted performance reviews of third-party certifiers and does not have procedures in place for such reviews.
According to federal standards for internal control, agencies should conduct control activities such as performance reviews and clearly document internal controls. Third-party certifiers agree to be subject to performance reviews by SBA at any time to ensure that they meet the requirements of the agreement with SBA and program certification regulations—including requirements related to the certification process, obtaining supporting documents, informing businesses about the no-cost option for WOSB program certification, and reporting to SBA on certifier activities. Before beginning the certification process, SBA requires third-party certifiers to inform businesses in writing (on an SBA-developed form) that they can self-certify under the program at no cost.

Certifiers, a WOSB advocacy group, and WOSBs offered differing perspectives on fees for third-party certification. Representatives of all three certifiers with whom we spoke stated that the fees their organizations charged for certifications were reasonable and affordable for a small business. Staff from one WOSB advocacy organization told us that such fees could deter some businesses from participating in the program, but owners of WOSBs with whom we spoke generally did not concur with this view. Certifiers with whom we spoke told us that they inform businesses about their option to self-certify, but SBA does not have a method in place to help ensure that certifiers are providing this information to businesses, and agency officials told us that they do not monitor whether certifiers fulfilled the requirement. SBA officials said that they believe that the no-cost option mitigates the risk of excessive fees charged to businesses or the risk that fees would deter program participation, and that because all certifiers must provide national coverage, businesses can seek lower fees. Officials also told us that they believed that businesses and advocacy groups would inform the agency if certifiers were not providing this information.
However, they were not able to describe how SBA would learn from businesses that certifiers had failed to provide this information. The requirement is part of SBA’s agreement with third-party certifiers, but SBA has not described the requirement on the program website or made it part of informational materials for businesses. Thus, businesses may not know of this requirement without being informed by the certifier or know to inform SBA if the certifier had not fulfilled the requirement.

The largest certifier, WBENC, has delegated the majority of certification activity to other entities that SBA also has not reviewed. WBENC conducted about 76 percent of third-party certifications through May 2014. However, WBENC delegates WOSB certification responsibilities to 14 regional partner organizations. SBA neither maintains nor reviews information about standards and procedures at WBENC, including a compliance review process that WBENC told SBA it uses for each of its 14 partner organizations. SBA officials told us that they rely on information available on public websites to determine the fee structures set by WBENC’s partner organizations. SBA also does not have copies of the compliance reviews that WBENC told SBA it conducts annually for each partner organization. SBA requested documents from WBENC, which included information about WBENC’s oversight of its 14 partner organizations. WBENC’s response was incomplete; WBENC referenced but did not provide its standards and procedures for overseeing partner organizations. SBA told us it recognized that WBENC’s response was incomplete and indicated it had not followed up on WBENC’s response. Without this information, SBA cannot determine how WBENC has been overseeing the 14 entities to which it has delegated certification responsibilities.

Although SBA has not developed or conducted formal performance reviews of certifiers, officials described activities they consider to be certifier oversight.
For example, when a business is denied third-party certification but wishes to self-certify, it must subject itself to an eligibility examination by SBA before doing so. In this case, or during a bid protest, SBA conducts its own review of documentation the business submitted to the certifier. SBA officials stated that these reviews were not intended as a form of certifier oversight but described them as de facto reviews of third-party certifier performance. However, such reviews do not involve a comprehensive assessment of certifiers’ activity or performance over time. An SBA official acknowledged that the agency could do more to oversee certifiers. SBA plans to develop written procedures for certifier oversight to be included in the standard operating procedure (SOP) for the program, which remains under development. But SBA has not yet estimated when it would complete written procedures for certifier oversight or the SOP. Without ongoing monitoring and oversight of the activities and performance of third-party certifiers, SBA cannot reasonably ensure that certifiers have fulfilled the performance requirements of their agreement with SBA—including informing businesses about no-cost certification. SBA has not yet developed written procedures to review required monthly reports from certifiers and does not have a consistent format for reports. In SBA’s agreement with third-party certifiers, the agency requires each certifier to submit monthly reports that must include the number of WOSB and EDWOSB applications received, approved, and denied; identifying information for each certified business, such as the business name; concerns about fraud, waste, and abuse; and a description of any changes to the procedures the organizations used to certify businesses as WOSBs and EDWOSBs. 
Internal control should include documented procedures and monitoring or review activities that help ensure that review findings and deficiencies are brought to the attention of management and resolved promptly. Our review of each monthly report submitted from August 2011 through May 2014 (135 in total) found that not all reports contained consistent information. Some monthly reports were missing the owner names and contact information for businesses that had applied for certification. One certifier regularly identified potential fraud among businesses to which it had denied certification—about one or two per month for 16 of the 34 reporting months included in our review. This certifier provided detailed narrative information in its reports to SBA about its concerns.

The reporting format and level of detail reported also varied among certifiers. One certifier listed detailed information on its activities in a spreadsheet. Another described its activities using narrative text and an attached list of applicants for certification. One certifier included dates for certification, recertification, and the expiration of a certification, while other certifiers did not include this information.

According to SBA officials, the agency did not have consistent procedures for reviewing monthly reports, including procedures to identify and resolve discrepancies in reports or oversee how certifiers collect and compile information transmitted to the agency. SBA officials said that one official, who recently retired, was responsible for reviewing all certifier monthly reports. Current officials and staff were not able to tell us what process this official used to assess the reports. Finally, with one person responsible for reviewing monthly reports until recently, SBA generally has not followed up on issues raised in reports.
Agency officials told us that early in the program they found problems with the monthly report of one of the certifiers that indicated that the certifier did not understand program requirements, and they contacted the certifier to address the issue. We found additional issues that would appear to warrant follow-up from SBA. For example, two businesses were denied certification by one third-party certifier and approved shortly after by another. SBA stated that it had not identified these potential discrepancies but that it was possible for businesses to be deemed ineligible, resolve the issue preventing certification, and become eligible soon after. However, according to the program regulation, if a business was denied third-party certification and the owner believed the business eligible, the owner would have to request that SBA conduct an examination to verify its eligibility to represent the business as a WOSB. According to SBA officials, the agency was unaware of these businesses or their certifications.

And, as discussed previously, one certifier regularly identified potential fraud among businesses to which it had denied certification. SBA officials told us that they had not identified or investigated this certifier’s concerns about potential fraud. When we asked SBA officials how the agency addressed such concerns, an official responded that fraudulently entering into a set-aside contract was illegal and the business would be subject to prosecution. However, without SBA following up on these types of issues, it is unclear how businesses committing fraud in the program would be prosecuted.

According to an SBA official, the agency has been developing written procedures to review the monthly reports but has not yet estimated when the procedures would be completed. The procedures will be included in SBA’s SOP for the program, which also remains under development. As noted earlier, SBA could not estimate when it would complete the SOP.
Without procedures in place to consistently review monthly reports and respond to problems identified in those reports, SBA lacks information about the activities and performance of third-party certifiers and leaves concerns raised by certifiers unaddressed.

SBA’s methods to verify the eligibility of businesses in its WOSB program repository include annual examinations of businesses that received set-aside contracts. SBA’s program responsibilities include conducting eligibility examinations of WOSBs and EDWOSBs, according to SBA’s compliance guide for the WOSB program and its regulation. Section 8(m) of the Small Business Act sets forth the eligibility criteria businesses must meet to receive a contract under the WOSB program set-aside. SBA examines a sample of businesses with a current attestation in SAM that received a contract during SBA’s examination year. SBA does not include in its sample businesses that had not yet obtained a WOSB program contract. According to SBA officials, staff conducting the eligibility examination review the documents each business owner uploaded to the WOSB program repository to support the representation in SAM of eligibility for WOSB or EDWOSB status. For example, agency officials said that reviewers ensure that all required documents have been uploaded and review the contents of the documents to ensure that a business is eligible. SBA said staff conducting the examination then determine either that the business has met the requirements to document its status as a WOSB, or that information is missing or not consistent with the program requirements and the business is not eligible at the time of SBA’s review to certify itself as a WOSB. SBA officials said the agency also uses the same process to investigate the eligibility of businesses on an ad hoc basis in response to referrals from contracting agencies or other parties, such as other businesses, that question the eligibility of a business.
If a business has not sufficiently documented its eligibility representation, SBA sends a letter directing the business to enter required information or documents into the repository or remove its attestation of program eligibility in SAM within 15 days. If SBA receives no response after 15 days, it sends a second letter instructing the business to remove its WOSB attestation in SAM within 5 days. If the business does not do so, it may be subject to enforcement actions including suspension or debarment from federal contracting or criminal penalties, according to SBA officials. In 2012 and 2013, SBA sent final 5-day letters to 44 businesses identified through annual examinations or examinations following a referral. An SBA official said that the agency is unaware of any such enforcement actions as part of the WOSB program. SBA also decides protests from contracting agency staff or any other interested parties relating to a business’s eligibility. SBA considers protests if there is sufficient, credible evidence to show that the business may not be at least 51 percent owned and controlled by one or more women, or if the business has failed to provide documents required to establish eligibility for the program. Once SBA has received a protest, it examines documents submitted in the case, makes a determination of program eligibility based on the content of these documents, and notifies relevant parties—typically, the contracting officer, protester (if not the same), and the business—of the determination. If the business is eligible for the set-aside, the contracting officer may make an award to it. Otherwise, the contracting officer may not award a contract to the business in question. From program implementation in April 2011 through July 2, 2014, SBA responded to 27 protests, and in 7 protests the businesses involved were found to be ineligible for the WOSB program.
In the remaining protests, the businesses were found eligible, the party that filed the protest withdrew it, or SBA dismissed the protest. As described earlier in the report, contracting officers check for the presence of documents in the repository when making a WOSB program award. This could be considered part of SBA’s framework to oversee certifications, but the requirement for contracting officers to review documents is limited to ensuring that businesses have uploaded the documents listed in the regulation. Representatives from some of the contracting offices we interviewed believed that they had to assess the validity of the documents or did not think they had the necessary qualifications to do so. However, program guidance does not require contracting officers to assess the validity of these documents, and SBA officials told us contracting officers are not expected to evaluate the eligibility of businesses. SBA activities relating to eligibility verifications, particularly examinations, have several weaknesses. For instance, SBA has not yet developed procedures to conduct annual eligibility examinations, although such efforts are in process, according to officials; has not evaluated the results of the eligibility examinations in the context of how the actions of businesses, contracting agencies, and third-party certifiers may have contributed to the high levels of incomplete and inaccurate documentation found in examinations; and has not assessed its internal controls or made procedural changes in response to the findings of its eligibility examinations. According to federal standards for internal control, agencies should have documented procedures, conduct monitoring, and ensure that any review findings and deficiencies are brought to the attention of management and are resolved promptly. Corrective action is to be taken or improvements made within established time frames to resolve the matters brought to management’s attention.
Also, management needs to comprehensively identify risks the agency faces from both internal and external sources, and management should consider all significant interactions between the agency and all other parties. SBA conducted annual eligibility examinations in 2012 and 2013 on a sample of businesses that received contracts under the WOSB program and found that 42 percent of businesses in the 2012 sample were ineligible for WOSB program contract awards on the date of its review, and 43 percent in the 2013 sample were ineligible. According to SBA officials, both self- and third-party certified businesses were found ineligible at the time of review. SBA staff reviewed the documents that each business in its sample had posted to the program repository to ensure the businesses had sufficiently supported their attestations as required in program regulations. However, SBA could not provide documentation of a consistent procedure to examine each business. SBA staff reviewing documentation in the repository did not have guidelines describing how to conduct each review. SBA officials told us that they have been developing written procedures to conduct annual eligibility examinations, but the agency missed an earlier estimated completion date and does not currently have an estimate for when the procedures will be completed. SBA officials explained that they determined the eligibility of businesses on a given date after the business received a contract. According to SBA officials, a finding of ineligibility does not mean the business was ineligible at the time of contract award because the status of the business might have changed. Although SBA officials did not know whether businesses examined were eligible at the time of award, the high rates of ineligibility found raise questions about whether contracts may have been awarded to ineligible businesses.
According to SBA officials, information in its repository constantly changes, and SBA has yet to determine how or whether a business was eligible when it received a WOSB set-aside contract. SBA officials told us that they believe they may be able to make such a determination but could not describe exactly how they would conduct the review or confirm that the business was an eligible WOSB or EDWOSB at the time of award. As part of its annual examination, SBA examines businesses only at some point after they have received a contract; the examination is therefore limited in its ability to identify potentially ineligible businesses prior to a contract award. SBA officials said that after the annual examinations they did not institute new controls to guard against ineligible businesses receiving program contracts because they viewed the examinations and their results as a method to gain insight about the program—specifically, that WOSBs may lack understanding of program eligibility requirements—and not a basis for changing oversight procedures. According to SBA officials, the levels of ineligibility found during the examinations were similar to those found in examinations of its other socioeconomic programs. SBA officials said businesses were deemed ineligible because they did not understand the documentation requirements for establishing eligibility and also attributed the ineligibility of third-party certified businesses to improper uploading of documents by the businesses themselves. SBA officials said they needed to make additional efforts to train businesses to properly document their eligibility. However, SBA officials could not explain how they had determined that lack of understanding was the cause of ineligibility among businesses and have not made efforts to confirm that this was the cause. As a result, they have missed opportunities to obtain meaningful insights into the program. SBA regarded the bid protest process as a means of identifying ineligibility.
SBA officials referred to the program as a self-policing program because of the bid protest function, through which competing businesses, contracting officers, or SBA can protest a business’s claim to be a WOSB or EDWOSB and eligible for contract awards under the program. In addition, an SBA official stated that business owners affirm their status when awarded a contract and are subject to prosecution if they are later found to have been ineligible at the time of contract award—which the official considered a program safeguard. However, without (1) developing program eligibility controls that include procedures for conducting annual eligibility examinations; (2) analyzing the results of the examinations to understand the underlying causes of ineligibility; (3) developing new procedures for examinations, including expanding the sample of businesses to be examined to include those that did not receive contracts; and (4) investigating businesses based on examination results, SBA may continue to find high rates of ineligibility among businesses registered in the WOSB program repository. In turn, this would continue to expose the program to the risk that ineligible businesses may receive set-aside contracts. Also, by reviewing the eligibility of businesses that have not received program contracts, SBA may improve the quality of the pool of potential program award recipients. Set-asides under the WOSB program to date have had a minimal effect on overall contracting obligations to WOSBs and attainment of WOSB contracting goals. WOSB program set-aside obligations increased from fiscal year 2012 to fiscal year 2013. The Department of Defense (DOD), the Department of Homeland Security (DHS), and the General Services Administration (GSA) accounted for the majority of these obligations. The WOSB program set-asides represented less than 1 percent of total federal awards to women-owned small businesses.
Contracting officers, WOSBs, and others with whom we spoke suggested a number of program changes that might increase use of the WOSB program, including increasing awareness, allowing for sole-source awards, and expanding the list of eligible industries for the set-aside program. WOSB program set-aside obligations increased from fiscal year 2012 to fiscal year 2013. Obligations to WOSBs under the WOSB set-aside program increased from $33.3 million in 2012 to $39.9 million in 2013, and obligations to EDWOSBs increased from $39.2 million in 2012 to $60.0 million in 2013. The National Defense Authorization Act for Fiscal Year 2013 removed the dollar cap on contract awards eligible under the WOSB set-aside program, which may account for some of the increase in obligations from 2012 to 2013. SBA officials told us that they expect increased use of the program in the future as a result of this change. As shown in table 2, three federal agencies—DOD, DHS, and GSA— collectively accounted for the majority of the obligations awarded under the set-aside program. DOD (Air Force, Army, Navy, and all other defense agencies) accounted for 62.2 percent of obligations, DHS for 10.7 percent, and GSA for 4.0 percent of obligations. No other individual agency accounted for more than 3.4 percent of obligations awarded under the program. From April 2011 through May 2014, WOSB program set-asides constituted a very small percentage (0.44 percent) of all the contracting obligations awarded to WOSBs (see fig. 1). The majority of obligations awarded to WOSBs were made under other, longer-established set-aside programs. For example, if eligible, a WOSB could receive a contracting award under the 8(a), HUBZone, or SDVOSBC programs, or through a general small business set-aside. WOSBs also can obtain federal contracts without set-asides (through open competition). 
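The 0.44 percent share and the year-over-year increase cited in this section can be reproduced with simple arithmetic. The sketch below uses only the obligation amounts reported here; it is an illustrative check, not GAO’s actual analysis code.

```python
# Illustrative check of the obligation figures cited in this section.
# Amounts are those reported in the text; this is not GAO's analysis code.

wosb_set_aside_total = 228.9e6  # WOSB program set-aside obligations, Apr 2011-May 2014
all_wosb_obligations = 52.6e9   # all contract obligations to WOSBs, same period

share = wosb_set_aside_total / all_wosb_obligations * 100
print(f"Set-aside share of all WOSB obligations: {share:.2f}%")  # about 0.44%

# Year-over-year growth in set-aside obligations (millions of dollars)
wosb_2012, wosb_2013 = 33.3, 39.9
edwosb_2012, edwosb_2013 = 39.2, 60.0
growth = (wosb_2013 + edwosb_2013) - (wosb_2012 + edwosb_2012)
print(f"Combined fiscal year 2012 to 2013 increase: ${growth:.1f} million")
```

The small share relative to the $52.6 billion base is what underlies the finding that the set-aside had little effect on goal attainment.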
Based on our analysis of FPDS-NG data of federal contracting agencies, contract obligations awarded through the WOSB set-aside totaled $228.9 million, or 0.44 percent, of the $52.6 billion in contract obligations awarded to WOSBs from April 2011 through May 2014. Additionally, the WOSB set-aside has had relatively little impact on federal agency achievement of goals for contracting to WOSBs, because the program set-asides represent a very small percentage of all contracting awards to WOSBs. Since 2011, the overall percentage of contracting obligations awarded to WOSBs (through any program or open competition) has remained below the government-wide goal of 5 percent (see table 3). Goal achievement by the three contracting agencies with the highest amount of obligations through the set-aside program varied. For example, DOD did not meet its 5 percent goal for contracting obligations to WOSBs in any of the 3 years. DHS and GSA met their goals in all 3 years. Excluding obligations made by DOD, about 5.7 percent of total federal contracting obligations to small businesses included in SBA’s fiscal year 2013 Small Business Goaling Report were awarded to WOSBs. For the 24 agencies subject to the Chief Financial Officers Act listed in SBA’s scorecards, 19 met their WOSB contracting goal in fiscal year 2012 and 20 met their goal in fiscal year 2013. One agency missed its goal in fiscal year 2012 but met its goal in fiscal year 2013. Four agencies (the same four each year) did not meet their goal for either year. Selected federal contracting officials, businesses that received a WOSB or EDWOSB set-aside, third-party certifiers, and a WOSB advocacy organization with which we spoke gave their perspectives on existing challenges and possible changes to increase program usage. Complexity and burdensome requirements. Contracting officers described challenges to using the WOSB set-aside. 
Some contracting officers noted that generally, all contracts awarded to WOSBs count for the purposes of meeting agencies’ 5 percent goal and that from their perspective it does not matter whether a contract is awarded to a WOSB using the WOSB program, another set-aside program, or open competition. Some contracting officers said that WOSB program requirements were burdensome or complex relative to other SBA programs with set-asides. Unlike the other programs, the WOSB program requires the use of a separate electronic repository, maintained by SBA, to collect and store certification documents. One contracting officer noted that the contracting process slowed when officials had to seek information from the repository. Another contracting officer told us the role of the contracting officer included confirming that businesses had uploaded required documents in the SBA repository based on a list of required documents in the program regulation—but noted this task was not required under other contracting programs. Lack of awareness and agency commitment. Representatives from advocacy groups also identified awareness of and commitment to the program as another area for improvement. An advocacy group representative told us that some of their member WOSBs had encountered confusion and reluctance on the part of contracting officers to use the program. Another advocacy group said that SBA should engender more commitment to the program among contracting officers and agencies. Another representative noted that there are no consequences for agency leaders for failure to meet contracting goals for WOSBs or use the set-aside program. SBA officials described to us consequences that included a low rating in the publicly available SBA contracting scorecard, which may draw negative attention to the agency. 
Also, the National Defense Authorization Act for Fiscal Year 2013 includes the extent to which agencies meet contracting goals as a competency by which members of the Senior Executive Service are rated. All of the businesses we interviewed that received WOSB program contracts cited the need for increased agency outreach or awareness of the program. For example, one participant advocated increasing contracting officer awareness and understanding of how an agency could benefit from using the WOSB set-aside program. Changes to increase use of program. Contracting officers also identified changes they believe could increase use of the WOSB set-aside. For example, some noted that allowing sole-source contracts could increase program use. Currently, contracting officers can establish a set-aside only if there is a reasonable expectation that at least two eligible WOSBs will submit a bid for the contract. Some contracting officers suggested expanding the list of North American Industry Classification System (NAICS) codes eligible for use under the WOSB set-aside. For example, one contracting office said that the NAICS codes designated for the set-aside program did not meet their procurement needs. One representative pointed out that SBA had designated some NAICS codes just for EDWOSBs and others for WOSBs. SBA officials told us the agency does not have the authority to change the list of industry sectors eligible for program set-asides without conducting a study of industries in which WOSBs were underrepresented or substantially underrepresented. Representatives from all of the WOSB advocacy groups, three of which are also third-party certifiers, said that expanding the NAICS codes would improve the program. For example, one advocacy group said that certain WOSBs would like to obtain WOSB or EDWOSB set-asides but did not have NAICS codes that were listed as eligible. Another said that they would not limit the number of eligible industries under the program.
Finally, the businesses we interviewed also believed that allowing sole-source awards or adding more NAICS codes would increase program use. Six participants commented on the limitations for awarding sole-source contracts through the WOSB set-aside. Five participants felt that the NAICS codes under the program were limited. One program participant mentioned that she felt that limiting set-asides for the WOSB program to certain NAICS codes was inconsistent with other SBA programs with set-asides, such as 8(a), HUBZone, and SDVOSBC. She gave an example of an agency that issued a draft solicitation that sought to award two contracts each to WOSB set-aside, HUBZone, and SDVOSBC businesses. However, when it became clear that the contract was not in an eligible NAICS code for the WOSB program, the agency converted the two contracts intended for the WOSB set-aside to a general small business category. Some program participants also mentioned positive aspects of the program. Five participants believed that the program provided greater opportunities for their businesses and WOSBs in general. Furthermore, five of the six businesses with whom we spoke that received only one or two contracts felt that the program improved their ability to compete for a federal contract. For example, one participant noted that while she has not seen many set-aside solicitations for the NAICS code under which her business primarily operates, the existence of the program prompted her to bid on set-asides under other NAICS codes. As the only federal procurement set-aside specifically for women-owned businesses, the WOSB program could play an important role in limiting competition for certain federal contracts to WOSBs and EDWOSBs that are underrepresented in their industries. However, weaknesses in multiple areas of SBA’s management of the program hinder effective oversight of the WOSB program.
Specifically, SBA has limited information about the performance of its certifiers and does not use what information is available to help ensure certifiers adhere to program requirements, a deficiency exacerbated by the delegation of duties to 14 partner organizations by the highest-volume certifier, which accounts for about 76 percent of third-party certifications. An incomplete response to SBA’s request for information on WBENC’s certification process demonstrates the need for an oversight framework to ensure that certifiers adhere to agreements with SBA. SBA did not follow up on the incomplete response from WBENC, which raises questions about SBA’s commitment to oversight of the certifiers. Furthermore, the lack of procedures for review and analysis of monthly certifier reports means that SBA has forgone opportunities to oversee certifiers and pursue concerns about potential fraud by individual businesses that one certifier identified. According to federal standards for internal control, agencies should conduct control activities such as performance reviews and clearly document internal controls. Formalizing existing ad hoc processes (by developing procedures) will help SBA obtain the information necessary to better ensure that third-party certifiers fulfill the requirements of their agreements with SBA—an effort SBA said it plans to undertake, although it has not estimated a completion date. Additionally, SBA could use results and insights from reviews of certifier reports—which are to include concerns about businesses—to inform its processes for eligibility verification, particularly examinations. Weaknesses related to SBA’s examination of program participants and approach to enforcement mean that the agency cannot offer reasonable assurance that only eligible businesses participate in the program.
Although the agency’s examinations found high rates of ineligibility, SBA has not yet formalized examination guidance for staff or followed up on examination results to determine the status of ineligible businesses at the time of contract award. SBA also has not focused on identifying factors that may be causing businesses to be found ineligible; rather, the agency appears to have determined that more training for businesses about eligibility requirements could address the issue. However, training alone would be a limited response to examination results, and SBA officials could not say what analysis determined training to be the relevant response. Additionally, the sample of businesses that SBA examines includes only those businesses that received WOSB set-aside contracts. All these factors limit SBA’s ability to understand the eligibility of businesses before they apply for and are awarded contracts. Rather than gather and regularly analyze information related to program eligibility, SBA relies on other parties to identify potential misrepresentation of WOSB status (through bid-protest filings and less formal mechanisms)—a reactive and limited approach to oversight. Federal standards for internal control state that agencies should have documented procedures, conduct monitoring, and ensure that any review findings and deficiencies are brought to the attention of management and are resolved promptly. Additionally, the standards state that management needs to comprehensively identify risks the agency faces from both internal and external sources. By expanding its examination of firms and analyzing and following up on the results, SBA could advance the key program goal of restricting competition for set-aside contracts to WOSBs and EDWOSBs. We make the following recommendations to improve management and oversight of the WOSB program.
To help ensure the effective oversight of third-party certifiers, the Administrator of SBA should establish and implement comprehensive procedures to monitor and assess performance of certifiers in accord with the requirements of the third-party certifier agreement and program regulations. To provide reasonable assurance that only eligible businesses obtain WOSB set-aside contracts, the Administrator of SBA should enhance examination of businesses that register to participate in the WOSB program, including actions such as: promptly completing the development of procedures to conduct annual eligibility examinations and implementing such procedures; analyzing examination results and individual businesses found to be ineligible to better understand the cause of the high rate of ineligibility in annual reviews, and determine what actions are needed to address the causes; and implementing ongoing reviews of a sample of all businesses that have represented their eligibility to participate in the program. We provided a draft of this report to SBA, DHS, DOD, and GSA for review and comment. SBA provided written comments that are described below and reprinted in appendix II. The other agencies—DHS, DOD, and GSA—did not provide comments on this report. SBA generally agreed with our recommendations and said that the agency is already in the process of implementing many of our recommendations. While SBA generally agreed with our recommendations, the agency stated that the report could be clearer about the program examination process. Specifically, SBA stated that the agency has authority to conduct eligibility examinations at any time for any firm asserting eligibility to receive WOSB program contracts. We have added information to the draft to clarify this point. The draft report we sent to SBA for comment discussed the agency’s process of conducting annual eligibility examinations and provided a description of SBA’s current process. 
SBA also stated that “the report recommends that SBA conduct ongoing annual eligibility examinations and implement such procedures.” However, our report recommends that SBA complete the development of procedures to conduct annual eligibility examinations (which SBA has conducted for the past 2 years) and implement such procedures. We separately recommend implementing ongoing reviews of a sample of all businesses that have represented their eligibility to participate in the program. We do not specify that these eligibility reviews, which are eligibility examinations, should be annual. SBA could choose to conduct these reviews more frequently if deemed appropriate. Whether SBA conducts eligibility examinations annually or more frequently, examinations should be consistently conducted by following written procedures and the results assessed to determine the causes of ineligibility. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to appropriate congressional committees and members, the Secretary of DOD, the Secretary of DHS, the Administrator of GSA, the Administrator of SBA, and other interested parties. This report will also be available at no charge on our website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-8678 or shearw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. This report examines the Women-Owned Small Business (WOSB) program of the Small Business Administration (SBA). 
More specifically, the report (1) describes how WOSBs and economically disadvantaged WOSBs (EDWOSBs) are certified as eligible for the program, (2) examines the extent to which SBA has implemented internal control and oversight procedures for WOSB program certifications, and (3) discusses the effect the program has had on federal contracting opportunities available to WOSBs or EDWOSBs. To describe how businesses are certified as eligible for the program, we reviewed SBA policies and procedures to establish program eligibility, including the responsibilities of businesses, third-party certifiers, contracting officers, and SBA. We interviewed SBA officials from the Office of Government Contracting. To evaluate how certification procedures may affect program participation, we obtained from SBA the monthly reports (September 2011 through May 2014) submitted by each of the four third-party certifiers. We took steps to develop a dataset we could use for our analyses, including creating and merging monthly spreadsheets, identifying missing business names, and clearing the list of duplicate entries. We compared this dataset with Federal Procurement Data System-Next Generation (FPDS-NG) data for businesses that received a WOSB program set-aside contract. We determined that the data on how many third-party certified businesses received contracts as part of the WOSB program were sufficiently reliable for our purposes by corroborating a sample of businesses we identified as third-party certified with documentation for the businesses in the WOSB program repository. We were not able to determine how many self-certified businesses obtained contracts under the program, because the format of the documentation maintained in the SBA repository does not include a record of documents that were present at the time of contract award.
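The dataset-construction steps described above (merging the monthly certifier spreadsheets, clearing duplicate entries, and matching against FPDS-NG contract data) can be sketched in plain Python. The field names and sample records below are hypothetical, chosen only to illustrate the sequence of steps, not the actual report data.

```python
# Hypothetical sketch of the dataset-construction steps described above:
# merge monthly certifier reports, drop duplicate entries, and match
# certified businesses against set-aside contract awardees. All records
# and field names here are illustrative.

monthly_reports = [
    [{"business": "Acme LLC", "action": "approved"},
     {"business": "Beta Corp", "action": "denied"}],
    [{"business": "Acme LLC", "action": "approved"},   # duplicate entry
     {"business": "Gamma Inc", "action": "approved"}],
]

# 1. Merge the monthly spreadsheets into one list.
merged = [row for report in monthly_reports for row in report]

# 2. Clear duplicate entries, keyed on business name and action.
seen, deduped = set(), []
for row in merged:
    key = (row["business"].lower(), row["action"])
    if key not in seen:
        seen.add(key)
        deduped.append(row)

# 3. Match third-party certified businesses against a (hypothetical)
#    set of FPDS-NG set-aside contract awardees.
fpds_awardees = {"acme llc", "delta co"}
certified = {r["business"].lower() for r in deduped if r["action"] == "approved"}
awarded_and_certified = certified & fpds_awardees
print(sorted(awarded_and_certified))  # ['acme llc']
```

In the actual analysis, the matching step is what allowed a count of third-party certified businesses that received program contracts; as noted above, an equivalent count for self-certified businesses was not possible from the repository documentation.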
We also interviewed a sample of contracting officers from selected components in the Department of Defense (DOD), Department of Homeland Security (DHS), and the General Services Administration (GSA). We selected these three agencies to represent a range of program participation based on the number and total obligation amounts of active set-aside contracts awarded in 2011 through 2013. Within DOD and DHS, we selected two components from each that demonstrated high- and mid-level program participation (based on number of contracts and obligation amounts). For DOD, we selected the U.S. Army and the Defense Logistics Agency. For DHS, we selected the U.S. Coast Guard and Customs and Border Protection. Within each of the components and GSA, we compared FPDS-NG data on program activity by obligation amount, contract number, and North American Industry Classification System (NAICS) codes for 2011 through 2013. For each, we selected two contracting offices using the same criteria we used to select agencies, which included identifying high- and mid-level program obligation amounts and offices with multiple contracts under multiple NAICS codes. We excluded one Customs and Border Protection office because only one office awarded multiple contracts under multiple NAICS codes. We also interviewed three of the four SBA-approved third-party certifiers (the El Paso Hispanic Chamber of Commerce, the National Women Business Owners Corporation, and the U.S. Women’s Chamber of Commerce). We were unable to interview the Women’s Business Enterprise National Council (WBENC). SBA requested documentation of WBENC’s oversight procedures for the certification activity and fee structures of its regional partner organizations. WBENC provided a written response to SBA that was not fully responsive to the request, as discussed in the report. We conducted semi-structured interviews with a sample of 10 businesses that were certified for the program, 9 of which had received a set-aside contract.
To evaluate SBA’s oversight of certification, we reviewed the program regulation and program documents, agreements with third-party certifiers, 135 monthly reports submitted by all four third-party certifiers, and letters SBA sends to inform businesses when their WOSB or EDWOSB status is in question, among other documents. We discussed the agency’s procedures to monitor certifiers and ensure participant eligibility with SBA officials from the Office of Government Contracting. We compared officials’ descriptions of their oversight activities with federal internal control standards. We inquired about documentation and eligibility examinations conducted in 2012 and 2013, and a planned examination for 2014, and reviewed reports of the 2012 and 2013 examination results. We also inquired about ongoing plans to develop a standard operating procedure and future plans to evaluate the program. To determine what effect, if any, the WOSB program has had on federal contracting opportunities available to WOSBs, we analyzed set-aside contract obligations in FPDS-NG from April 2011 through May 2014 to identify trends in program participation by contracting agencies included in both FPDS-NG and SBA goaling reports. Using a review of FPDS-NG documentation and electronic edit checks, we deemed these data sufficiently reliable for our purposes. We also analyzed SBA goaling reports from 2011 through 2013 to describe progress made toward meeting the 5 percent goal for federal contracting to WOSBs. We conducted semi-structured interviews with a sample of 10 businesses that were certified for the program, 9 of which had received a set-aside contract. We selected this nongeneralizable sample of businesses to reflect whether they had been certified by a third-party entity or had self-certified. While the results of these interviews could not be generalized to all WOSB program participants, they provided insight into the benefits and challenges of the program.
We interviewed SBA officials and contracting agency officials about the extent to which the program has met its statutory purpose of increasing contracting opportunities for WOSBs. Finally, we interviewed industry advocates, including three of the four third-party certifiers (the El Paso Hispanic Chamber of Commerce, the National Women Business Owners Corporation, and U.S. Women’s Chamber of Commerce) and one other industry advocate (Women Impacting Public Policy) actively involved in promoting the program with WOSBs. We conducted this performance audit from August 2013 to October 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Andrew Pauline (Assistant Director), Julie Trinder-Clements (analyst-in-charge), Pamela Davidson, Daniel Kaneshiro, Julia Kennon, Barbara Roesmann, Jessica Sandler, and Jena Sinkfield made key contributions to this report.
In 2000, Congress authorized the WOSB program to increase contracting opportunities for WOSBs by allowing contracting officers to set aside procurements to such businesses. SBA, which administers the program, issued implementing regulations that became effective in 2011. GAO was asked to review the WOSB program. This report examines (1) how businesses are certified as eligible for the WOSB program, (2) SBA's oversight of certifications, and (3) the effect the program has had on federal contracting opportunities available to WOSBs or EDWOSBs. GAO reviewed relevant laws, regulations, and program documents; analyzed federal contracting data from April 2011 through May 2014; and interviewed SBA officials, officials from contracting agencies selected to obtain a range of experience with the WOSB program, third-party certifiers, WOSBs, and organizations that represent their interests. Businesses have two options to certify their eligibility for the women-owned small business (WOSB) program. Whether self-certifying at no cost or using the fee-based services of an approved third-party certifier, businesses must attest that they are a WOSB or an economically disadvantaged WOSB (EDWOSB). Businesses also must submit documents supporting their attestation to a repository the Small Business Administration (SBA) maintains (required documents vary depending on certification type) and, if they obtain a third-party certification, to the certifier. SBA performs minimal oversight of third-party certifiers and has yet to develop procedures that provide reasonable assurance that only eligible businesses obtain WOSB set-aside contracts. For example, SBA generally has not reviewed certifier performance or developed or implemented procedures for such reviews, including determining whether certifiers inform businesses of the no-cost self-certification option, a requirement in the agency's agreement with certifiers. 
SBA also has not completed or implemented procedures to review the monthly reports that third-party certifiers must submit. Without ongoing monitoring and oversight of the activities and performance of third-party certifiers, SBA cannot reasonably assure that certifiers fulfill the requirements of the agreement. Moreover, in 2012 and 2013, SBA found that more than 40 percent of businesses (that previously received contracts) it examined for program eligibility should not have attested they were WOSBs or EDWOSBs at the time of SBA's review. SBA officials speculated about possible reasons for the results, including businesses not providing adequate documentation or becoming ineligible after contracts were awarded, but SBA has not assessed the results of the examinations to determine the actual reasons for the high numbers of businesses found ineligible. SBA also has not completed or implemented procedures to conduct eligibility examinations. According to federal standards for internal control, agencies should have documented procedures, conduct monitoring, and ensure that any review findings and deficiencies are resolved promptly. As a result of inadequate monitoring and controls, potentially ineligible businesses may continue to incorrectly certify themselves as WOSBs, increasing the risk that they may receive contracts for which they are not eligible. The WOSB program has had a limited effect on federal contracting opportunities available to WOSBs. Set-aside contracts under the program represent less than 1 percent of all federal contract obligations to women-owned small businesses. The Departments of Defense and Homeland Security and the General Services Administration collectively accounted for the majority of the $228.9 million in set-aside obligations awarded under the program between April 2011 and May 2014. 
Contracting officers, business owners, and industry advocates with whom GAO spoke identified challenges to program use and suggested potential changes that might increase program use, including allowing sole-source contracts rather than requiring at least two businesses to compete and expanding the list of 330 industries in which WOSBs and EDWOSBs were eligible for a set-aside. GAO recommends that SBA, among other things, establish and implement procedures to monitor certifiers and improve annual eligibility examinations, including by analyzing examination results. SBA generally agreed with GAO's recommendations.
GPS is a global positioning, navigation, and timing network consisting of space, ground control, and user equipment segments that support the broadcasts of military and civil GPS signals. These signals each include positioning and timing information, which enables users with GPS receivers to determine their position, velocity, and time, 24 hours a day, in all weather, worldwide. GPS is used by all branches of the military to guide troop movements, support integrated logistics and battlespace situational awareness, and synchronize communications networks. In addition, bombs and missiles are guided to their targets by GPS signals, and GPS is used to locate military personnel in distress. Early in the development of GPS, the scope was expanded to include complementary civil capabilities. Over time, GPS has become a ubiquitous infrastructure underpinning major sectors of the economy, including telecommunications, electrical power distribution, banking and finance, transportation, environmental and natural resources management, agriculture, and emergency services, in addition to the array of military operations it serves. For instance, civil agencies, commercial firms, and individuals use GPS to accurately navigate from one point to another. Commercial firms use GPS to route their vehicles, as do maritime industries and mass transit systems. In addition to navigation, civil departments and agencies and commercial firms use GPS and GPS augmentations to provide high-accuracy, three-dimensional positioning information in real time for use in surveying and mapping. The aviation community worldwide uses GPS and GPS augmentations to increase the safety and efficiency of flight. GPS is also used in the agricultural community for precision farming, including farm planning, field mapping, soil sampling, tractor guidance, and crop scouting. 
GPS helps companies and governments place satellites in precise orbits, and at correct altitudes, and helps monitor satellite constellation orbits. The precise time that GPS broadcasts is crucial to economic activities worldwide, including communication systems, electrical power grids, and financial networks. GPS operations consist of three segments—the space segment, the ground control segment, and the user equipment segment. All segments are needed to take full advantage of GPS capabilities. The GPS space segment consists of a constellation of satellites that move in six orbital planes approximately 20,200 kilometers above the earth. GPS satellites broadcast encrypted military signals and civil signals. In recent years, because numerous satellites have exceeded their design life, the constellation has grown to 31 active satellites of various generations. However, DOD predicts that over the next several years many of the older satellites in the constellation will reach the end of their operational life faster than they will be replenished, thus decreasing the size of the constellation from its current level and potentially reducing the accuracy of the GPS service. The GPS ground control segment is comprised of a Master Control Station at Schriever Air Force Base, Colorado; an Alternate Master Control Station at Vandenberg Air Force Base, California; 6 Air Force and 11 National Geospatial-Intelligence Agency monitoring stations; and four ground antennas with uplink capabilities. Information from the monitoring stations is processed at the Master Control Station to determine satellite clock and orbit status. The Master Control Station operates the satellites and regularly updates the navigation messages on the satellites. Information from the Master Control Station is transmitted to the satellites via the ground antennas. The GPS user equipment segment includes military and commercial GPS receivers. 
These receivers determine a user’s position by calculating the distance from four or more satellites using the navigation message on the satellites to triangulate its location. Military GPS receivers are designed to utilize the encrypted military GPS signals that are only available to authorized users, including military and allied forces and some authorized civil agencies. Commercial receivers use the civil GPS signal, which is publicly available worldwide. In 2000, DOD began an effort to modernize the space, ground control, and user equipment segments of GPS to enhance the system’s performance, accuracy, and integrity. Table 1 shows the modernization efforts for the space and ground control segment. Full use of the military and civil GPS signals requires a ground control system that can manage these signals. Newer software will upgrade the ground control to a service oriented—or “plug and play”—architecture that can connect to broader networks. In order to utilize the modernized military signal from the ground, military users require new user equipment with this capability, which will be provided by the military GPS user equipment program. The 2004 U.S. Space-Based Positioning, Navigation and Timing (PNT) policy established a management structure to bring civil and military departments and agencies together to form an interagency, multiuse approach to program planning, resource allocation, system development, and operations. The policy also encourages cooperation with foreign governments to promote the use of civil aspects of GPS and its augmentation services and standards with foreign governments and other international organizations. As part of the management structure, an executive committee advises and coordinates among U.S. government departments and agencies on maintaining and improving U.S. space-based PNT infrastructures, including GPS and related systems. 
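The position calculation that receivers perform, described earlier, can be sketched as an iterative least-squares solve for three position coordinates plus the receiver clock bias. The satellite coordinates and receiver state below are made-up illustrative values; a real receiver would also model measurement noise, atmospheric delays, and satellite clock corrections.

```python
import numpy as np

# Hypothetical satellite positions (meters, earth-centered frame) and a
# made-up true receiver state; real ephemerides come from the broadcast
# navigation message.
sats = np.array([
    [15600e3,  7540e3, 20140e3],
    [18760e3,  2750e3, 18610e3],
    [17610e3, 14630e3, 13480e3],
    [19170e3,   610e3, 18390e3],
])
c = 299_792_458.0                       # speed of light (m/s)
true_pos = np.array([1917e3, 6029e3, 1470e3])
true_bias = 1e-4                        # receiver clock error (s)

# Pseudoranges: geometric range plus the clock-bias term (noise omitted).
rho = np.linalg.norm(sats - true_pos, axis=1) + c * true_bias

# Gauss-Newton iteration: linearize the range equations and solve for
# corrections to (x, y, z, clock bias).
x = np.zeros(4)                         # start at earth's center, zero bias
for _ in range(20):
    pos, bias = x[:3], x[3]
    ranges = np.linalg.norm(sats - pos, axis=1)
    residual = rho - (ranges + c * bias)
    # Jacobian: unit vectors from satellites toward the receiver, plus
    # the clock-bias column.
    H = np.hstack([(pos - sats) / ranges[:, None], c * np.ones((4, 1))])
    dx, *_ = np.linalg.lstsq(H, residual, rcond=None)
    x += dx

print("position (m):", np.round(x[:3], 1))
print("clock bias (s):", x[3])
```

With exactly four satellites the system is fully determined; additional satellites overdetermine it, and the same least-squares step then averages out measurement noise.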
The executive committee is co-chaired by the Deputy Secretaries of the Department of Defense and the Department of Transportation, and includes members at the equivalent level from the Departments of State, Commerce, Homeland Security, Interior, Agriculture, the Joint Chiefs of Staff, and the National Aeronautics and Space Administration (NASA). Figure 2 describes the National Space-Based PNT organization structure. The departments and agencies have various assigned roles and responsibilities. For example, DOD is responsible for the overall development, acquisition, operation, security, and continued modernization of GPS. It has delegated acquisition responsibility to the Air Force, though other DOD components and military services are responsible for oversight, some aspects of user equipment development, and for funding some parts of the program. The Department of Transportation has the lead responsibility for the coordination of civil requirements from all civil department and agencies. The Department of State leads negotiations with foreign governments and international organizations on GPS positioning, navigation, and timing matters or regarding the planning, operations, management, and/or use of GPS. (See app. III). The Air Force’s GPS IIF acquisition initially was not well executed, and currently poses technical problems. The Air Force is implementing lessons learned from the GPS IIF effort as it starts the GPS IIIA program. However, based on our analysis, the GPS IIIA program faces a compressed schedule along with new challenges to deliver the satellites on time. A slip in the launch of the GPS IIIA satellites could increase the likelihood that the GPS constellation will fall below the number of satellites required to provide the level of GPS service the U.S. government has committed to provide. 
This would not only have implications for military users but also for the larger community of GPS users, who may be less aware of, and less equipped to deal with, gaps in coverage. However, the Air Force is evaluating different approaches that could potentially reduce the risk of degrading the GPS service. The GPS IIF contract was awarded during an era of acquisition reform that centered on an approach called Total System Performance Responsibility (TSPR). TSPR gave a contractor total responsibility for the integration of an entire weapon system and for meeting DOD’s requirements. This approach was intended to facilitate acquisition reform and enable DOD to streamline a cumbersome acquisition process and leverage innovation and management expertise from the private sector. However, DOD later found that TSPR magnified problems on a number of satellite acquisition programs because it was implemented in a manner that enabled requirements creep and poor contractor performance. For GPS IIF, the TSPR approach resulted in relaxed specifications and inspections of the contractor, loss of quality in the manufacturing process, and poor-quality parts that caused test failures, unexpected redesigns, and the late delivery of parts. The contractor did not provide data on design drawings, and statistical process control techniques were not used to monitor production. According to GPS program officials, the GPS IIF program was also negatively affected by multiple contractor mergers, acquisitions, and moves. In 1996, shortly after Rockwell won the IIF contract, the company’s aerospace and defense units, including the Seal Beach, California, facility where the IIF satellites were to be manufactured, were acquired by Boeing. In December 1997, Boeing merged with McDonnell Douglas and took over its Delta launch vehicle unit in Huntington Beach, California, and subsequently GPS work was moved to that facility. 
In October 2000, Boeing acquired Hughes Electronics Corporation’s space and communications business and related operations. Boeing took over the Hughes facility in El Segundo, California, and once again, GPS work was moved to another facility. As these events occurred, the prime contractor consolidated development facilities to remain competitive. In addition, the prime contractor lost valuable workers and knowledge, causing inefficiencies in the program. Shortly after the IIF contract was awarded in 1996, the Air Force also added requirements. For example, the government decided to accelerate the fielding of new civil and military GPS signals. Flexible power capabilities were added to IIF several years later. These new requirements drove design changes and resulted in technical issues and cost overruns that impacted the schedule. According to a GPS IIF program official, the combination of significant requirements additions, loss of engineering expertise, parts obsolescence, and fundamental design changes together caused the contractor to “lose the recipe” for the IIF space vehicle. In essence, by the completion of the design phase, the IIF space vehicle was to be built in a third location, by different people, in a way that was not initially anticipated. In addition, the program suffered from a lack of management continuity. Since the program’s inception, the IIF program has had seven different program managers, the first five of whom served only 1 year each. According to a former deputy program director of the GPS program office, past GPS programs seemed to operate well for a number of reasons. The programs (1) never added major modifications to ongoing programs and (2) had no qualms about terminating contractors whose work did not meet standards, business practices, or major milestones. Furthermore, the GPS program performed more on-site contract management to increase communications. 
This approach eliminated surprises like cost and schedule overruns and held the contractor to a high level of performance. Lastly, the former deputy director noted that it was important to balance the responsibility assigned to program managers with the authority they needed to properly implement the program. Prior GAO reviews have identified all of these practices as essential to program execution. The Air Force has since taken action to improve the IIF program. In 2006, the program office increased its personnel at the contractor’s facility to observe operations and to verify that corrective measures were being taken to address deficiencies in the contractor’s cost and schedule reporting system (also known as earned value management). The Air Force increased the number of personnel working on the contractor site, which included military and civilian personnel, as well as Defense Contract Management Agency personnel and system engineering contractors. Greater presence at the contractor’s factory has enabled the government to find out about problems as they happen and to work with the contractor to develop solutions and resolve issues more quickly, according to GPS program officials. Nonetheless, the program has experienced more technical problems. For example, last year, during the first phase of thermal vacuum testing (a critical test to determine space-worthiness that subjects the satellite to space-like operating conditions), one transmitter used to send the navigation message to users failed. The program suspended testing in August 2008 to allow time for the contractor to identify the causes of the problem and take corrective actions. The program also had difficulty maintaining the proper propellant fuel-line temperature; this, in addition to power failures on the satellite, delayed final integration testing. 
In addition, the satellite’s reaction wheels, used for pointing accuracy, were redesigned because on-orbit failures on similar reaction wheels were occurring on other satellite programs—this added about $10 million to the program’s cost. As a result of these problems, the IIF program experienced cost increases and schedule delays. The launch of the first IIF satellite has been delayed until November 2009—almost 3 years late. According to the program office, the cost to complete GPS IIF will be about $1.6 billion—about $870 million over the original cost estimate of $729 million. In addition, in 2006 we testified that diffuse leadership over military space acquisitions was another factor contributing to late delivery of capability and cost growth. We noted that the diverse array of officials and organizations involved with a space program has made it difficult to pare back and control requirements. GPS was one example we cited. According to the Air Force, in 1998 the government decided to accelerate the fielding of new civil and military GPS signals and added requirements for these signals to the IIR and IIF GPS satellites. These new requirements drove design changes and resulted in technical issues, cost overruns, and program delays. The problems experienced on the IIF program are not unlike those experienced in other DOD space system acquisitions. We have previously reported that the majority of major acquisition programs in DOD’s space portfolio have experienced problems during the past two decades that have driven up costs, caused delays in schedules, and increased technical risk. DOD has restructured several programs in the face of delays and cost growth. At times, cost growth has come close to or exceeded 100 percent, causing DOD to nearly double its investment without realizing a better return on investment. 
Along with the increases, many programs are experiencing significant schedule delays—as much as 7 years—postponing delivery of promised capabilities to the warfighter. Outcomes have been so disappointing in some cases that DOD has gone back to the drawing board to consider new ways to achieve the same, or less, capability. Our work has identified a variety of reasons for the cost growth, many of which surfaced in GPS IIF. Generally, we have found that DOD starts its space programs too early, that is before it has assurance that the capabilities it is pursuing can be achieved within resources and time constraints. We have also tied acquisition problems in space to inadequate contracting strategies; contract and program management weaknesses; the loss of technical expertise; capability gaps in the industrial base; tensions between labs that develop technologies for the future and current acquisition programs; divergent needs in users of space systems; and other issues that have been well documented. We also noted that short tenures for top leadership and program managers within the Air Force and the Office of the Secretary of Defense have lessened the sense of accountability for acquisition problems and further encouraged a view of short-term success. Several other studies have raised similar issues. In 2003, a study conducted for the Defense Science Board, for example, found that government capabilities to lead and manage the space acquisition process have seriously eroded, particularly within program management ranks. A 2005 Defense Science Board study focused specifically on the future of GPS found that the program was hampered by sometimes overlapping, sometimes disconnected roles of Office of the Secretary of Defense staff components, the Joint Staff, and the Air Force. 
More recently, a commission formed pursuant to the John Warner National Defense Authorization Act for Fiscal Year 2007 concluded in 2008 that there is currently no single authority responsible for national security space—which includes GPS—below the President, and that within DOD authorities are spread among a variety of organizations, including the Office of the Secretary of Defense, the Air Force, the other military services, the Missile Defense Agency, and the National Reconnaissance Office, with no effective mechanism to arrive at a unified budget and set priorities. A study chartered by the House Select Committee on Intelligence also recently found leadership for space acquisitions to be too diffused at higher levels and that there are critical shortages in skilled program managers. While recent studies have made recommendations for strengthening leadership for space acquisitions, no major changes to the leadership structure have been made in recent years. In fact, an “executive agent” position within the Air Force, designated in 2001 to provide leadership, has not been filled since the last executive resigned in 2005. GPS IIF acquisition problems have not been as extreme as those experienced on other efforts such as the Space Based Infrared System (SBIRS) and the National Polar-orbiting Operational Environmental Satellite System (NPOESS). At the same time, however, the program was not as technically complex or ambitious as these efforts. The Air Force is taking measures to prevent the problems experienced on the GPS IIF program from recurring on the GPS IIIA program. However, the Air Force will still be challenged to deliver IIIA on time because the satellite development schedule is compressed. 
The Air Force is taking the following measures:

- using incremental or block development, where the program would follow an evolutionary path toward meeting needs rather than attempting to satisfy all needs in a single step;
- using military standards for satellite quality;
- conducting multiple design reviews, with the contractor being held to military standards and deliverables during each review;
- exercising more government oversight and interaction with the contractor and spending more time at the contractor’s site; and
- using an improved risk management process, where the government is an integral part of the process.

In addition, the Under Secretary of Defense for Acquisition, Technology, and Logistics specified additional guidance for the GPS IIIA program. This includes:

- reevaluating the contractor incentive/award fee approach;
- providing a commitment from the Air Force to fully fund GPS IIIA in Program Objectives Memorandum 2010;
- funding and executing recommended mitigation measures to address the next generation operational control segment and the GPS IIIA satellites;
- combining the existing and new ground control segment levels of effort into a single level of effort, giving the Air Force greater flexibility to manage these efforts;
- not allowing the program manager to adjust the GPS IIIA program scope to meet increased or accelerated technical specifications, system requirements, or system performance; and
- conducting an independent technology readiness assessment of the contractor design once the preliminary design review is complete.

Table 2 below highlights the major differences in the framework between the GPS IIF and GPS III programs. While these measures should put the GPS IIIA program on sounder footing, the program is facing serious obstacles—primarily in terms of its ability to deliver satellites on schedule. At present, the GPS IIIA program is on schedule and program officials contend that there is no reason to assume that a delay is likely to occur. 
They point out that the Air Force is implementing an incremental development approach and GPS IIIA, the first increment of GPS III, is not expected to be as technically challenging as other space programs. In addition, program officials point out that the Air Force began risk reduction activities in 1998, and has made a concerted effort to exert more oversight over its contractors and ensure key decisions are backed by sufficient knowledge about technologies, design, and production. We recognize that these steps offer the best course for GPS to be completed on time. However, we believe there is still considerable risk that the schedule may not be met for the following reasons. First, the GPS IIIA program got off to a late start. The program was originally scheduled to begin development in August 2007. However, according to GPS program officials, the Air Force shifted funds from GPS III to other commitments in its space portfolio and to address problems in other programs. The Defense Space Acquisition Board approved formal initiation of the GPS IIIA acquisition in May 2008. Second, when compared to other DOD satellite programs, the GPS IIIA program schedule appears highly compressed. The Air Force is planning to launch the first GPS IIIA satellite in 2014 to sustain the GPS constellation. To launch in 2014, the Air Force has scheduled 72 months from contract award to first satellite launch. This schedule is 3 years shorter than the schedule the Air Force has so far achieved under its IIF program. In fact, the time period between contract award and first launch for GPS IIIA is shorter than most other major space programs we have reviewed (see fig. 3). Moreover, GPS IIIA is not simply a matter of replicating the IIF program. Though the contractor has had previous experience with GPS, it is likely that the knowledge base will need to be revitalized. The contractor is also being asked to develop a larger satellite bus to accommodate future GPS increments IIIB and IIIC. 
In addition, the contractor is being asked to increase the power of a new military signal by a factor of 10. In our opinion, there is little room in the schedule to accommodate difficulties the contractor may have in meeting either challenge. In addition, the GPS III program office still has not been able to fill critical contracting and engineering positions needed to assist in satellite design and contractor oversight—both of which are to receive more emphasis on this program than in the past. Consequently, the concerns that GPS IIIA could experience a delay are not unreasonable. However, according to DOD officials, the incremental approach to GPS acquisition should significantly lower the risk of schedule delays. Nonetheless, no major satellite program undertaken in the past decade has met its scheduled goals. Third, we compared the Air Force’s GPS IIIA schedule to best practices associated with effective schedule estimating. Past GAO work has identified nine practices associated with effective schedule estimating. We analyzed the Air Force’s GPS IIIA schedule according to these practices and found that one was met, one was not met, and the other seven practices were only partially met. The practices deal with how well the schedule identifies key development activities, the times to complete these activities, and the amount of float time associated with each of these activities—float time is the amount of time a task can slip before affecting the critical path. Further, the practices assess how well activities have been integrated with other tasks and whether reserve times have been allocated to high-risk activities. The primary purpose of all scheduling activities is to establish a credible critical path. The best practices have been designed to support that goal. Because the GPS IIIA schedule does not follow all of the best practices, the reliability of the critical path identified in the schedule is diminished. 
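The scheduling concepts described above, float time and the critical path, can be computed mechanically from a task network with the standard critical path method. Below is a minimal sketch using invented task names and durations, not the actual GPS IIIA schedule:

```python
# Tasks: (duration in months, list of predecessor tasks). All names and
# durations are invented for illustration.
tasks = {
    "design":      (18, []),
    "bus_build":   (14, ["design"]),
    "payload":     (20, ["design"]),
    "integration": (10, ["bus_build", "payload"]),
    "test":        (8,  ["integration"]),
    "launch":      (2,  ["test"]),
}

# Forward pass: earliest start (es) and earliest finish (ef) per task.
es, ef = {}, {}
for name in tasks:  # dict insertion order is already topological here
    dur, preds = tasks[name]
    es[name] = max((ef[p] for p in preds), default=0)
    ef[name] = es[name] + dur

project_end = max(ef.values())

# Backward pass: latest finish (lf) and latest start (ls) per task.
lf, ls = {}, {}
for name in reversed(list(tasks)):
    succs = [s for s, (_, ps) in tasks.items() if name in ps]
    lf[name] = min((ls[s] for s in succs), default=project_end)
    ls[name] = lf[name] - tasks[name][0]

# Float is how far a task can slip without delaying the project; the
# critical path is the chain of zero-float tasks.
float_time = {n: ls[n] - es[n] for n in tasks}
critical_path = [n for n in tasks if float_time[n] == 0]

print("project length (months):", project_end)
print("float:", float_time)
print("critical path:", critical_path)
```

In this invented network, only "bus_build" has slack; every other task sits on the critical path, so any slip in those tasks delays the launch date one for one.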
Delays in the launch of the GPS IIIA satellites will increase the risk that the GPS constellation will decrease in size to a level where it will not meet some users’ needs. If the GPS constellation falls below the number of satellites required to provide the level of GPS service that the U.S. government has committed to providing, some military and civilian operations could be affected. DOD is evaluating different approaches that could potentially mitigate the gap. However, procurement of additional GPS IIF satellites does not appear to be feasible. The performance standards for both (1) the standard positioning service provided to civil and commercial GPS users and (2) the precise positioning service provided to military GPS users commit the U.S. government to at least a 95 percent probability of maintaining a constellation of 24 operational GPS satellites. Because there are currently 31 operational GPS satellites of various blocks, the near-term probability of maintaining a constellation of at least 24 operational satellites remains well above 95 percent. However, DOD predicts that over the next several years many of the older satellites in the constellation will reach the end of their operational life faster than they will be replenished, and that the constellation will, in all likelihood, decrease in size. Based on the most recent satellite reliability and launch schedule data approved in March 2009, the estimated long-term probability of maintaining a constellation of at least 24 operational satellites falls below 95 percent during fiscal year 2010 and remains below 95 percent until the end of fiscal year 2014, at times falling to about 80 percent. See figure 4 for details. 
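The kind of constellation-availability estimate shown in figure 4 can be approximated with a Monte Carlo sketch. The lifetime distributions, launch cadence, and counts below are deliberately simplified assumptions for illustration; the report's analysis uses satellite-specific reliability curves and the actual launch schedule.

```python
import numpy as np

rng = np.random.default_rng(0)
trials = 20_000
years = np.linspace(0.0, 10.0, 41)   # years from "now"

# Simplified assumptions (not the actual reliability curves): 31 on-orbit
# satellites with exponentially distributed remaining life, plus one
# replenishment launch every six months with an 11.5-year mean design life.
n_on_orbit, mean_remaining = 31, 4.0
launches = np.arange(0.5, 10.0, 0.5)
design_life = 11.5

# Random failure times for current satellites and for each future launch.
deaths = rng.exponential(mean_remaining, (trials, n_on_orbit))
launch_deaths = launches + rng.exponential(design_life, (trials, len(launches)))

# Count operational satellites at each point in time, per trial.
alive_old = (deaths[:, :, None] > years).sum(axis=1)
alive_new = ((launches[None, :, None] <= years) &
             (launch_deaths[:, :, None] > years)).sum(axis=1)

p24 = ((alive_old + alive_new) >= 24).mean(axis=0)
for y, p in zip(years[::8], p24[::8]):
    print(f"year {y:4.1f}: P(constellation >= 24) = {p:.2f}")
```

Even this crude model reproduces the qualitative shape of figure 4: near-certain availability while the current satellites are young, a dip as they age out faster than replacements arrive, and sensitivity of the dip's depth to the assumed launch cadence.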
The probability curve in figure 4 was generated using unique reliability curves for each operational satellite in the current on-orbit GPS constellation, and block-specific reliability curves for each production (unlaunched) GPS satellite, including IIR-M, IIF, IIIA, IIIB, and IIIC satellites. (See app. I for a more complete description of the approach used to generate this probability curve.) Because the reliability curves associated with new blocks of GPS satellites are based solely on engineering and design analysis instead of actual on-orbit performance, this estimated long-term probability of maintaining a 24-satellite constellation could change once actual on-orbit performance data become available. For example, while the block IIA satellites were designed to last only 7.5 years on average, they have actually lasted about twice as long. If GPS IIF satellites were to last twice as long as their currently estimated mean life expectancy of 11.5 years, the probability of maintaining a larger constellation would increase, but the long-term probability of maintaining the 24-satellite constellation would not improve significantly. Moreover, program officials provided no evidence to suggest that the current mean life expectancy for IIF satellites is overly conservative. A delay in the production and launch of GPS III satellites could severely impact the U.S. government’s ability to meet its commitment to maintain a 24-satellite GPS constellation. The severity of the impact would depend upon the length of the delay. For example, a 2-year delay in the production and launch of the first and all subsequent GPS III satellites would reduce the probability of maintaining a 24-satellite constellation to about 10 percent by around fiscal year 2018. This significant gap in service would persist for about 2 years before the constellation began to recover. 
Moreover, this recovery—that is, the return to a high probability of maintaining a 24-satellite constellation—would take an additional 2 to 3 years. Consequently, a 2-year delay in the production and launch of GPS III satellites would most likely result in a period of roughly 5 years when the U.S. government would be operating a GPS constellation of fewer than 24 satellites, and a 12-year period during which the government would not meet its commitment to maintaining a constellation of 24 operational GPS satellites with a probability of 95 percent or better. For example, the delay in GPS III would reduce the probability of maintaining a 21-satellite constellation to between 50 and 80 percent for the period from fiscal year 2018 through fiscal year 2020. Moreover, while the probability of maintaining an 18-satellite constellation would remain relatively high, it would still fall below 95 percent for about a year over this period. See figure 5 for details.

The impacts of a smaller constellation on both military and civil users are difficult to predict precisely. For example, a nominal 24-satellite constellation with 21 of its satellites broadcasting a healthy standard positioning service signal would continue to satisfy the availability standard for good user-to-constellation geometry articulated in the standard positioning service performance standard. However, because the GPS constellation has been operating above the committed performance standard for so long, military and civil users have come to expect a higher level of service, even though this service is not committed to them. Consequently, some users may sense an operational impact even if the constellation were to perform at or near its committed standards. In general, users with more demanding requirements for precise location solutions will likely be more affected than other users.
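The kind of constellation-availability estimate described above can be approximated with a simple Monte Carlo sketch. The numbers below, and the exponential-lifetime assumption, are illustrative stand-ins for the satellite-specific and block-specific reliability curves that the actual DOD analysis uses:

```python
import random

def constellation_probability(n_on_orbit, mean_life, launch_years,
                              horizon, threshold=24, trials=5_000, seed=1):
    """Estimate, for each year in the horizon, the probability that at
    least `threshold` satellites are operational.

    n_on_orbit   : satellites currently operating
    mean_life    : assumed mean remaining/design life in years (a single
                   value here; the real analysis uses per-satellite curves)
    launch_years : year offsets at which replacement satellites launch
    """
    rng = random.Random(seed)
    hits = [0] * horizon
    for _ in range(trials):
        # Each satellite is (available_from, fails_at); exponential draws
        # are a crude stand-in for the reliability curves.
        sats = [(0.0, rng.expovariate(1.0 / mean_life))
                for _ in range(n_on_orbit)]
        sats += [(ly, ly + rng.expovariate(1.0 / mean_life))
                 for ly in launch_years]
        for year in range(horizon):
            up = sum(1 for start, end in sats if start <= year < end)
            if up >= threshold:
                hits[year] += 1
    return [h / trials for h in hits]

# 31 satellites on orbit today, one replacement launch per year (hypothetical).
probs = constellation_probability(n_on_orbit=31, mean_life=11.5,
                                  launch_years=list(range(1, 11)), horizon=10)
print([round(p, 2) for p in probs])
```

Delaying the entries in `launch_years` by 2 in this sketch reproduces the qualitative effect discussed above: the probability of holding a 24-satellite constellation drops sharply in the out-years, then recovers as launches resume.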
During our interviews with military, civil, and commercial representatives, several possible impacts of a smaller GPS constellation were discussed. The accuracy of precision-guided munitions that rely upon GPS to strike their targets could decrease; to achieve the same level of mission success, military forces would need to use larger munitions or more munitions on the same target, and the risks of collateral damage could also increase. Intercontinental commercial flights, which rely on predicted satellite geometry over their planned navigation routes, may have to be delayed, canceled, or rerouted. Enhanced-911 services, which rely upon GPS to precisely locate callers, could lose accuracy, particularly when operating in “urban canyons” or mountainous terrain. Another important consideration is that both the standard positioning service and precise positioning service performance standards assume that users have unobstructed visibility of nearly the entire sky, an assumption that does not hold for the large number of users operating in moderately mountainous terrain, in the “urban canyons” of large cities, or under forest foliage.

The Air Force is aware that there is some risk that the number of satellites in the GPS constellation could fall below its required 24 satellites, and that this risk would grow significantly if the development and launch of GPS IIIA satellites were delayed. Consequently, an Air Force Space Command representative informed us that the command has established an independent review team to examine the risks and consequences of a smaller constellation on military and civil users. However, at this time, Air Force representatives believe that the best approach to mitigating the risk is to take all reasonable steps to ensure that the current schedule for GPS IIF and III is maintained.
Those steps include a commitment from the Air Force to fully fund GPS IIIA in the fiscal year 2010 Program Objectives Memorandum, and use of an incremental development approach toward meeting needs. This incremental approach would place a premium on controlling schedule risk by, among other things, deferring consideration of civil requirements for subsystems like the Distress Alerting Satellite System (DASS) and the Satellite Laser Ranging (SLR) payloads to the GPS IIIB or GPS IIIC satellite blocks.

Options for developing lower-cost alternatives to current GPS satellites appear to be very limited. For example, in 2007 the Air Force Scientific Advisory Board examined whether small satellites—which can be developed more quickly and at relatively low cost—might help meet some PNT mission requirements. The board concluded that small satellites may eventually have operational utility in augmenting GPS III capabilities, with emphasis on enhancing the utility of the GPS M-code signal's capabilities against jamming. However, the need for an extensive control segment infrastructure to monitor and control these small satellite augmentations, combined with the need to develop, produce, and install user equipment, would make it very challenging to field a near-term small satellite augmentation for PNT. With respect to providing basic PNT services, the board noted that studies of PNT satellite constellations, performed at different times and by different organizations in the United States and elsewhere, demonstrate that a robust constellation of relatively powerful satellites operating at medium earth orbit is the best way to provide continuous worldwide PNT services; this is a performance set that small satellites currently cannot provide. According to Air Force representatives, the procurement of additional IIF satellites is not feasible, and initiating development of an alternative full-scale, satellite-based PNT system appears to be impractical.
Such a system would likely be very expensive and would compete with GPS III development for funding, making it harder for the Air Force to meet its commitment to fully fund GPS IIIA development. Moreover, the GPS III system development contract was awarded in accordance with an approved GPS III acquisition strategy, which selected one alternative from two competing contractors' designs; an alternative system development would be, in effect, a significant deviation from that approved strategy. Finally, it seems unlikely that the award of a separate system development contract with another contractor would have any real impact on reducing the risk of delivering GPS IIIA requirements on the current schedule.

In the event that this strategy proves unsuccessful and the schedule for GPS III slips, additional measures could be considered. For example, excluding random failures, the operational life of a GPS satellite tends to be limited by the amount of power that its solar arrays can produce. This power level declines over time as the solar arrays degrade in the space environment, until eventually they cannot produce enough power to maintain all of the satellite's subsystems. However, according to Air Force representatives, the effects of this power loss can be mitigated somewhat by actively managing satellite subsystems—shutting them down when not needed—thereby reducing the satellite's overall consumption of power. It would also be possible to significantly reduce the satellite's consumption of power by shutting off a secondary GPS payload. This would buy additional time for the navigation mission of the satellite at the expense of the mission supported by the secondary payload. The 2004 U.S. Space-Based Positioning, Navigation and Timing (PNT) policy affirmed PNT as the primary mission for the GPS constellation, and stated that no secondary payload may adversely affect the performance, schedule, or cost of GPS, its signals, or services.
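The power-management tradeoff described above can be illustrated with a simple geometric-degradation model. All of the wattages and the degradation rate here are hypothetical; actual array output, degradation behavior, and subsystem loads are satellite-specific:

```python
import math

def years_until_shortfall(p0_watts, annual_degradation, load_watts):
    """Years until array output p0 * (1 - d)**t falls below the required load."""
    if p0_watts <= load_watts:
        return 0.0
    return math.log(load_watts / p0_watts) / math.log(1.0 - annual_degradation)

# Hypothetical: 1,100 W arrays degrading 2 percent per year.
t_full = years_until_shortfall(1100.0, 0.02, 900.0)     # all subsystems powered
t_reduced = years_until_shortfall(1100.0, 0.02, 800.0)  # secondary payload off
extra_years = t_reduced - t_full
```

Under these assumed numbers, shedding a 100 W secondary payload extends the navigation mission by several years; the point is the shape of the tradeoff described in the text, not the specific values.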
Nevertheless, at this time the Air Force has no intention of shutting off the secondary GPS payload. Moreover, until there is a more immediate risk that the constellation will fall below its required size, there is no reason to take this step.

Military and civil users might also take steps in response to a smaller GPS constellation. While a smaller GPS constellation could result in a significant reduction in positioning and navigation accuracy at certain times and locations, these times and locations are usually predictable in near-real time. Consequently, military users, who must rely upon GPS's precise positioning service, could need to change their approach to mission planning to ensure that operations are conducted at times when GPS accuracy is relatively high, or change the tactics employed during a mission. For example, military users could utilize a larger number of (or more powerful) munitions to achieve an equivalent level of mission effectiveness. For civil and commercial users, one possible impact of a smaller GPS constellation could be an increased use of other positioning, navigation, and timing services, including those expected to be offered through Europe's Galileo system by the middle of the next decade. U.S. government officials at the various civil agencies and departments clearly understand what the government has committed to through GPS, and all have designed their programs to function within this limit, supplemented by augmentations.

To maximize the benefit of GPS, the deployment of its space, ground control, and user equipment capabilities must be synchronized so that the full spectrum of military assets—weapons, aircraft, and ships, for example—and individual users can take advantage of new capabilities such as added protection from jamming. However, because of funding shifts and diffuse leadership, the Air Force has not been successful in synchronizing the space, ground control, and user equipment segments.
As a result of the poor synchronization, new GPS capabilities may be delivered in space for years before military users can take advantage of these capabilities.

The Air Force used funding set aside for the ground control segment to resolve GPS IIF development problems, causing a delay in the delivery of new ground control capabilities. The GPS ground control segment has evolved over time from the Operational Control Segment (OCS) to the current Architecture Evolution Plan (AEP). GPS IIIA satellites are to be controlled by a future ground control system called Next Generation Control Segment, or OCX. OCS was supposed to control and exploit GPS IIF space capabilities. However, because of the addition of new requirements and technical issues on the IIF program, funding was diverted from OCS to GPS IIF satellite development efforts. As a result, the delivery of new ground control capabilities will occur later than originally planned. Table 3 below illustrates satellite functions and capabilities that have yet to be made operational through the ground control segment. For example, in 2005 the Air Force began launching its GPS IIR-M satellites that broadcast a second civil signal (the L2C). Unfortunately, the ground control segment will not be able to make the second civil signal operational until late 2012 or 2013. By delaying the delivery of ground control capabilities, the Air Force has created an imbalance between the capabilities offered by GPS satellites and the ability to exploit and make operational these capabilities through the ground control segment.

GPS satellites that will broadcast the modernized military signal require military user equipment capable of receiving and processing the signal so that military users can take advantage of the improved military capabilities. Before the modernized military signal can be considered initially operational, it must be broadcast from at least 18 satellites, which is expected to occur in 2013.
For full operational capability, it must be broadcast from 24 satellites, which is expected to occur in 2015. Consequently, the new military signal will be made operational by the GPS satellites and ground control system in about 2013, but the warfighter will not be able to take full advantage of this new signal until about 2025, when the modernized user equipment is completely fielded. See figure 6 for our analysis of the gap between when the modernized military signal will be available on the GPS satellites and when the military services will be able to take advantage of it.

The Air Force will spend the next several years developing prototype cards and production-ready receiver hardware for selected platforms within the space, air, ground, and maritime environments. Even after this is done, the services will still need to add the new user equipment to other platforms, which could take 10 or more years. This is because the integration and installation of the new user equipment on the remaining platforms has to be coordinated with existing upgrade schedules for those platforms. As a result, the services' ability to achieve a joint military navigation warfare capability, an essential element in conducting future military operations, may not be realized until 2025, based on user equipment delivery schedules.

Funding issues are a contributing factor in the delay in fielding new user equipment. According to Air Force officials, the GPS program office focused on developing the satellites, particularly when technical problems arose. Funding was diverted from the user equipment program to the GPS satellite program to fix problems, which resulted in delays in the development and acquisition of the user equipment. Diffuse leadership has been particularly problematic in terms of DOD's ability to synchronize delivery of space, ground control, and user equipment assets.
The responsibility for developing and acquiring GPS satellite and associated ground control segments and for acquiring and producing user equipment for selected platforms for space, air, ground, and maritime environments falls under the Air Force's Space and Missile Systems Center. On the other hand, responsibility for acquiring and producing user equipment for all other platforms falls on the military services. Figure 7 illustrates how the responsibilities for developing, acquiring, and producing GPS user equipment are divided among the services. Because different military services are involved in developing user equipment for the weapon systems they own and operate, there are separate budget, management, oversight, and leadership structures over the space and ground control and the user equipment segments. As such, there is no single authority responsible for synchronizing all the procurements and fielding related to GPS.

A 2008 U.S. Strategic Command Functional Solutions Analysis, conducted to provide recommendations for solutions to positioning, navigation, and timing gaps, noted that the Air Force is responsible for developing and integrating military GPS user equipment for select platforms, and that integration and testing of these platforms is required to be complete so that the user equipment is available for procurement when the military signal becomes operational. However, this analysis showed no military service program office commitment of resources for procuring military GPS user equipment in service programming documents. Furthermore, DOD's management attention has been focused on delivering space capabilities. Only recently has DOD begun to shift its focus by recognizing that the user equipment segment needs to play an equal role in the overall GPS synchronization effort.

There have been various recommendations to accelerate the fielding of modernized military user equipment, though there are obstacles in the way of implementation.
In October 2005, the Defense Science Board recommended that DOD initiate an aggressive program to introduce antijam enhancements as soon as possible.

In August 2006, OSD issued a GPS User Equipment Development and Procurement Policy, which mandated that certain equipment categories have the modernized GPS user equipment by the time the 24th military code satellite is declared operational.

In June 2007, representatives from the Combatant Commands, U.S. Strategic Command, and U.S. Joint Forces Command requested that an aggressive schedule be established for all GPS segments to achieve military code initial operational capability by fiscal year 2013.

In March 2008, the Joint Requirements Oversight Council recommended that the Air Force adjust the development and acquisition of the modernized GPS user equipment to ensure that warfighters can use space-based capabilities. Recommendations included amending programmatic schedules and funding profiles to incorporate military code capabilities at or before the initial operational capability date.

To accelerate the delivery of the new user equipment, the Air Force increased the user equipment budget by $272 million for fiscal years 2009 through 2011. In the conference reports accompanying the Department of Defense Appropriation Act for Fiscal Year 2008 and the National Defense Authorization Act for Fiscal Year 2008, conferees recommended an additional $63.2 million in funding for GPS user equipment. However, the additional funds will not speed up development of the new user equipment to a large extent, because the program office is experiencing technical issues in developing the prototype cards. The major technical issue is the difficulty of moving to a new security architecture, Protection of Navigation, which will provide information assurance. According to a GPS program office official, OSD, the Air Staff, U.S.
Strategic Command, Air Force Space Command, and the GPS program office are looking at ways to get some of the modernized military user equipment to the field sooner. However, there are challenges with this approach, particularly because certain security requirements—antispoof, antijam, and antitamper—should be met before user equipment can be fielded in conflict situations. According to an official at the GPS program office, meeting these security requirements is proving to be technically challenging, and attempting this at an accelerated rate is risky.

GPS has produced dramatic economic and security improvements both for the United States and globally. Ensuring that it can continue to do so is extremely challenging given competing interests, the span of government and commercial organizations involved with GPS, and the criticality of GPS to national and homeland security and the economy. On the one hand, DOD must ensure military requirements receive top priority and the program stays executable. In doing so, it must ensure that the program is not encumbered by requirements that could disrupt development, design, and production of satellites. On the other hand, there are clearly other enhancements that could be made to GPS satellites that could serve a variety of vital missions—particularly because of the coverage GPS satellites provide—and there is an expressed desire for GPS to serve as the world's preeminent positioning, navigation, and timing system. In addition, while the United States is challenged to deliver GPS on a tight schedule, other countries are designing and developing systems that provide the same or enhanced capabilities. Ensuring that these capabilities can be leveraged without compromising national security or the preeminence of GPS is also a delicate balancing act that requires close cooperation between DOD, the Department of State, and other institutions.
Because of the scale and number of organizations involved in maximizing GPS, we did not undertake a full-scale review of requirements and coordination processes. However, we reviewed documents supporting these processes and interviewed a variety of officials to obtain views on their effectiveness. While there is a consensus that DOD and other federal organizations involved with GPS have taken prudent steps to manage requirements and optimize GPS use, we also identified challenges in ensuring that civilian requirements can be met and that GPS is compatible with other new, potentially competing global space-based positioning, navigation, and timing systems.

The 2004 U.S. Space-Based Positioning, Navigation and Timing (PNT) policy provides guidance for civil involvement in the development of requirements for the modernization of GPS capabilities, and the requirements process includes an entry point for civil requirements. This entry point is the Interagency Forum for Operational Requirements (IFOR), whose working groups consist of a civil panel and a military panel. The IFOR receives proposed GPS requirements from civil agencies and assists in developing and validating them. From this point, the proposed requirement follows a DOD and civil path to validation with involvement from various interagency boards and councils. Figure 8 illustrates this formal process for submitting, considering, and validating civil GPS requirements.

While the process for approving civil requirements on GPS has existed since 2001, DOD and civil agencies consider it rigorous but relatively untested because no civil-unique requirements have completed the initial step in the process. Civil agencies have submitted two proposed requirements to the process; however, these requirements are not directly related to the GPS mission. Instead, they would add hardware to the GPS satellites and thus are considered secondary mission requirements.
However, according to civil agencies, the analyses and documentation called for under the process are confusing and time-consuming. While GPS remains critical to national security and military operations, government policy calls for GPS planning to consider integration of civil requirements for the civilian infrastructure. The process for considering civil GPS requirements is intended to maintain fiscal discipline by ensuring only critical needs are funded and developed. Specifically, the process requires that civil agencies internally identify and validate their proposed requirements, and conduct cost, risk, and performance analyses. Our past work has shown that requirements add-ons are a major source of acquisition instability. In this case, the formal process also requires that the agency proposing the requirement pay the costs associated with adding it to the GPS III satellites, thereby forcing agencies to separate their wants from needs.

According to the civil agencies that have proposed GPS requirements, the formal requirements approval process is confusing and time-consuming. Specifically, they stated that DOD's documentary and analysis standards are new to civil agencies and therefore difficult and time-consuming for them to manage. Some agencies have reported that it is costly for them to pay for the more detailed supporting analyses requested by DOD. For example, one civil agency had to withdraw and resubmit a proposal for new GPS requirements because it lacked necessary information, including a cost-benefit analysis. Furthermore, civil agencies' submitted requirements have necessitated that DOD perform further studies on compatibility and integration issues to ensure that the proposed requirements will not adversely affect the primary GPS mission.

The two civil requirements that have entered the requirements process are the Distress Alerting Satellite System (DASS) and the geodetic requirement implemented by Satellite Laser Ranging (SLR).
Both are joint civil and military mission requirements and would be potential secondary payloads on GPS. DASS is an electronic unit that will receive beacon signals identifying a distressed individual's location and transmit this location data to emergency responders. The SLR laser retroreflector, which weighs less than 7 pounds, is being considered for inclusion starting with increment IIIB satellites. Scientists would aim a laser at the reflector to more precisely determine the satellite's position, ultimately allowing for more precise measurements on the ground. This SLR capability would support users who need to make very accurate measurements for scientific applications.

Distress Alerting Satellite System: The Coast Guard submitted the DASS requirement to the IFOR in 2003. Early in the review process, a debate on whether DASS was a civil or military requirement ensued. The IFOR decided to have military and civil panels review the requirement and resubmit it through the Joint Capabilities Integration and Development System (JCIDS) process. It took a total of 5 years to resolve the debate and prepare and resubmit the package. In July 2008, the civil agencies submitted DASS requirements and an analysis of alternatives to the IFOR for review. To date, a decision has not yet been made as to if and when the capability will be inserted on GPS satellites.

Satellite Laser Ranging: In April 2007, NASA submitted the SLR requirements package along with an analysis of alternatives to the IFOR. The IFOR officially accepted the SLR package into the IFOR process in August of that year. However, in June 2008, DOD opposed implementation of the SLR capability due to integration and compatibility concerns with the GPS satellites. A joint Air Force and NASA working group was established to resolve the integration and compatibility issues and report back to the IFOR by June 2009, prior to moving the requirement from the IFOR into the JCIDS process.
DASS supporters have stated that the GPS constellation is the ideal platform for search and rescue capabilities. The current search and rescue capability is expected to degrade by 2017 and completely fail by 2020. More urgently, supporters say that the Canadian government's offer to provide DASS hardware at a $90 million cost savings to the United States must be acted upon by August 2009 or Canada may provide this component to a developing foreign satellite navigation system. The SLR capability, until recently, existed on two GPS satellites. One of these satellites has been decommissioned, and according to NASA the remaining capability does not meet its or other civil agencies' needs for scientific and geodetic applications. According to NASA, the SLR would need to be implemented on most of the GPS constellation to meet geodetic requirements for science and other user requirements. If DOD does not include DASS and SLR on GPS satellites, U.S. users of these capabilities may become dependent on foreign systems, which already include, or have plans to include, both DASS-like and SLR capabilities in their satellite navigation systems.

The U.S. government—specifically the State Department—is faced with challenges in ensuring GPS is compatible and interoperable with other new, potentially competing global space-based positioning, navigation, and timing systems. While the U.S. government has engaged a number of other countries and international organizations in cooperative discussions, only one legally binding agreement has been established. Furthermore, some U.S. manufacturers of GPS receivers stated that European Union manufacturers may have a competitive advantage over U.S. companies with respect to the manufacture and sale of Galileo-capable receivers, though officials with the European Commission disagree. In addition, Department of State officials have expressed concerns over the limited number of technical experts available to support activities under these cooperative arrangements.
Without these resources, officials are concerned that it may be difficult to continue to ensure the compatibility and interoperability of foreign systems. The United States has made joint statements of cooperation with Australia, India, Japan, and Russia to promote compatibility and interoperability and mutual interests regarding the civil use of GPS and its augmentations, and has established an executive agreement with the European Community (see table 4 for a list of types of cooperative arrangements with other countries). The joint statements and executive agreement were sought to avoid interference with each other's systems and to facilitate the pursuit of common civil signals.

Under the national space-based PNT policy, it is the Department of State's role to promote the civil aspects of GPS and its augmentation services and standards with foreign governments and other international organizations. The Department of State leads negotiations with foreign governments and international organizations regarding civil and, as appropriate, military space-based PNT matters including, but not limited to, coordinating interagency review of international agreements with foreign governments and international organizations regarding the planning, operation, management, and/or use of GPS and its augmentations. While most of the cooperative arrangements are joint statements that express the parties' intent to cooperate on GPS-related activities, the United States and the European Commission have established an executive agreement that is considered binding under international law.
According to the executive agreement with the European Community, subject to applicable export controls, the United States and the European Commission are to make sufficient information concerning their respective civil satellite-based signals and augmentations publicly available on a nondiscriminatory basis, to ensure equal opportunity for persons who seek to use these signals, manufacture equipment to use these signals, or provide value-added services that use these signals. In 2006, the European Commission publicly released draft technical specifications for its open service. The draft document requests that manufacturers obtain a commercial license from the European Commission to sell and import products designed to work with the European satellite navigation system, Galileo. While this licensing requirement applies to all manufacturers, some U.S. companies stated that some foreign user equipment manufacturers who are members of the Galileo consortia may have an unfair advantage over U.S. companies. This is because the Galileo consortia currently have access to testing hardware and may be able to introduce their products more quickly into the marketplace once they are granted a commercial license.

Officials with the European Commission told us that they do not believe the license restrictions or the knowledge gained from testing the Galileo systems are discriminatory. They further stated that the restrictions in obtaining a commercial license to sell user equipment apply to all companies, not just U.S. companies, and that they have not yet issued licenses to any company. In the meantime, a U.S. and European Commission working group on trade and civil applications is discussing the licensing issue. However, U.S. firms have raised concerns to the Department of Commerce (Commerce) about the lack of information from the European Commission relating to the process for obtaining a license to sell Galileo equipment. According to Commerce, U.S.
firms have asserted that they are not aware of how, where, or when to apply for such a license, despite repeated inquiries to the U.S.-European Commission trade working group and direct contacts with European Commission officials—and the timeline for the licensing process is unknown. Commerce further noted that U.S. manufacturers wanting to enter the Galileo market are hesitant to invest in technology that is not officially licensed and that could possibly be banned from sale. It takes industry 18 to 24 months to develop a market-ready receiver, and the first operational Galileo satellite is scheduled for launch in 2010. U.S. firms are concerned they will not have their products ready by that time and will lose their market share to European companies with inside access to technology and/or licensing information.

According to Department of State officials, the department lacks dedicated technical expertise to monitor international activities. The Department of State relies on a small pool of experts from DOD and the seven civil agencies represented on the National Executive Committee for Space-Based PNT. These experts are often in high demand because they work on other GPS-related activities and in some cases have other assigned duties that are unrelated to GPS. According to the Department of State, in many cases these experts and those in other agencies must continually justify to their managers that their attendance at international meetings is important. Given the progress made in working with foreign governments to establish arrangements, share information, and ensure compatibility and interoperability with GPS, Department of State officials would like DOD and civil agencies to dedicate funding and staff positions to international activities, accompanied by a sustained level of senior management support and understanding of the importance of these activities.
Without an expanded pool of technical expertise and related resources, Department of State officials stated they are concerned that ongoing international efforts to ensure compatibility of foreign systems with GPS could be jeopardized. GPS has enabled transformations in military, civil, other government, and commercial operations and has become part of the critical infrastructure serving national and international communities. Clearly, the United States cannot afford to let its GPS capabilities fall below requirements, and optimally, those capabilities would remain preeminent. Over the past decade, however, the program has experienced cost increases and schedule delays. While the Air Force is making a concerted effort to address acquisition problems, there is still considerable risk that satellites will not be delivered on time, leading to gaps in capability. Focused attention and oversight are needed to ensure that the program stays on track and is adequately resourced, that unanticipated problems are quickly discovered and resolved, and that all communities involved with GPS are aware of and positioned to address potential gaps in service. But this is difficult to achieve given diffuse responsibility over various aspects of the GPS acquisition program. Moreover, disconnects between the space, ground control, and user equipment components have significantly lessened the military's ability to take advantage of enhancements, particularly as they relate to assuring the continuity of service during military engagements. Without more concentrated leadership attention, such disconnects could worsen, particularly since (1) both the ground control and user equipment programs have been subject to funding shifts to pay for problems affecting the satellite segment, and (2) user equipment programs are executed by separate entities over which no single person has authority. 
Lastly, ensuring that GPS can continue to produce dramatic improvements in civil agencies' applications calls for addressing any weaknesses identified in the civil agency GPS requirements process. Because of the criticality of GPS, the potential for delays, and the system's importance to the civil community, we are making the following recommendations. We recommend that the Secretary of Defense appoint a single authority to oversee the development of the GPS system, including DOD space, ground control, and user equipment assets, to ensure that the program is well executed and resourced and that potential disruptions are minimized. The appointee should have authority to ensure DOD space, ground control, and user equipment are synchronized to the maximum extent practicable and to coordinate with the existing positioning, navigation, and timing infrastructure to assess and minimize potential service disruptions should the satellite constellation decrease in size for an extended period of time. We recommend that the Secretaries of Defense and Transportation, as the co-chairs of the National Executive Committee for Space-Based Positioning, Navigation and Timing, address, if weaknesses are found, civil agency concerns for developing requirements, and determine mechanisms for improving collaboration and decision making and strengthening civil agency participation. DOD concurred with our first recommendation to appoint a single authority to oversee the development of the GPS system, including space, ground control, and user equipment assets, to ensure that the program is well executed and resourced and that potential disruptions are minimized. DOD stated that it has recognized the importance of centralizing authority to oversee the continuing synchronized evolution of the GPS. 
According to DOD, the Deputy Secretary of Defense has reaffirmed that the Assistant Secretary of Defense for Networks and Information Integration (ASD(NII)) is designated with authority and responsibility for all aspects of the GPS. DOD further stated that the U.S. Air Force is the single acquisition agent with responsibility for synchronized modernization of GPS space, ground control, and military user equipment. In concurring with our recommendation on appointing a single authority to oversee the development of the GPS system, DOD asserts that ASD(NII) is designated with authority and responsibility for all aspects of GPS, and that the Air Force is the single acquisition agent responsible for synchronizing GPS segments. In addition, responsibility for GPS military user equipment acquisitions falls under various officials within the military services. We agree that given the diversity of platforms and equipment variations involved, it would not be realistic for the Air Force to unilaterally produce a "one-size-fits-all" solution. However, this does not obviate the need for a single authority to oversee the development of all GPS military user equipment to better ensure coordination with deployed satellite capabilities. Without an approach that enables a single individual to make resource decisions and maintain visibility over progress, DOD is at risk of facing the same issues in synchronizing the delivery of GPS assets and wasting capability that will be available in space but not on the ground. In addition, DOD may still want to consider establishing a means by which progress in developing the satellites and ground equipment receives attention from the highest levels of leadership (that is, the Secretary and perhaps the National Security Council), given the criticality of GPS to the warfighter and the nation and the risks associated with not meeting schedule goals. 
DOD concurred with our second recommendation to address, if weaknesses are found, civil agency concerns for developing requirements and determine mechanisms for improving collaboration and decision making, and strengthening civil agency participation. DOD acknowledged that it employs a rigorous requirements process and is aware of the frustration civil agencies face when using this process. DOD further indicated that it worked to put in place an interagency requirements plan, and is currently in the process of jointly coordinating the Charter for an Interagency Forum for Operational Requirements to provide venues to identify, discuss, and validate civil or dual-use GPS requirements. Finally, DOD noted that it will continue to seek ways to improve civil agency understanding of the DOD requirements process and work to strengthen civil agency participation. We support DOD’s efforts to inform and educate other civil agencies on the requirements process. As it undertakes these efforts, DOD should ensure that it is taking a more active role in directly communicating with civil agencies to more precisely identify concerns or weaknesses in the requirements process. The full text of DOD’s comments may be found in appendix IV. We also received technical comments from the other departments and NASA, which we incorporated where appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 8 days from the report date. At that time, we will send copies of this report to the Secretaries of Defense, Agriculture, Commerce, Homeland Security, Interior, State, and Transportation; the National Aeronautics and Space Administration; and interested congressional committees. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report or need additional information, please contact me at (202) 512-4841 or chaplainc@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. The major contributors are listed in appendix V. To assess the Global Positioning System (GPS) satellite, ground control, and user equipment acquisition programs and determine whether GPS capabilities are being synchronized, we reviewed and analyzed program plans and documentation related to cost, schedule, requirements, program direction, and satellite constellation sustainment, and compared programmatic data to GAO's criteria compiled over the last 12 years for best practices in system development. We also interviewed officials from the Air Force Space and Missile Systems Center GPS program office; Air Force Space Command; Office of the Joint Chiefs of Staff; Office of the Undersecretary of Defense for Acquisition, Technology, and Logistics; Assistant Secretary of Defense Office of Networks and Information Integration; United States Strategic Command; 2nd Space Operations Squadron; and the services. To determine the extent to which the Air Force had effectively developed and maintained the GPS IIIA integrated master schedule, we reviewed the program's schedule estimates and compared them with relevant best practices to determine the extent to which they reflect key estimating practices that are fundamental to having a reliable schedule. In doing so, we interviewed GPS program officials to discuss their use of best practices in creating the program's current schedule. To assess the status of the GPS constellation, we interviewed officials from the Air Force Space and Missile Systems Center GPS program office, Air Force Space Command, and the 2nd Space Operations Squadron. 
To assess the risks that a delay in the acquisition and fielding of GPS III satellites could result in the GPS constellation falling below the 24 satellites required by the standard positioning service and precise positioning service performance standards, we obtained information from the Air Force predicting the reliability for 77 GPS satellites—each of the 31 current (on-orbit) and 46 future GPS satellites—as a function of time. Each satellite’s total reliability curve defines the probability that the satellite will still be operational at a given time in the future. It is generated from the product of two reliability curves—a wear-out reliability curve defined by the cumulative normal distribution, and a random reliability curve defined by the cumulative Weibull distribution. For each of the 77 satellites, we obtained the two parameters defining the cumulative normal distribution, and the two parameters defining the cumulative Weibull distribution. For each of the 46 unlaunched satellites, we also obtained a parameter defining its probability of successful launch, and its current scheduled launch date. The 46 unlaunched satellites include 2 IIR-M satellites, 12 IIF satellites, 8 IIIA satellites, 8 IIIB satellites, and 16 IIIC satellites; launch of the final IIIC satellite is scheduled for March 2023. Using this information, we generated overall reliability curves for each of the 77 GPS satellites. We discussed with Air Force and Aerospace Corporation representatives, in general terms, how each satellite’s normal and Weibull parameters were calculated. However, we did not analyze any of the data used to calculate these parameters. Using the reliability curves for each of the 77 GPS satellites, we developed a Monte Carlo simulation to predict the probability that at least a given number of satellites would be operational as a function of time, based on the GPS launch schedule approved in March 2009. 
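The per-satellite reliability model and Monte Carlo approach described above can be expressed as a short sketch. This is a minimal illustration, not the analysis actually used: it interprets the wear-out and random reliability curves as survival functions of the normal and Weibull distributions, omits launch dates and launch-success probabilities for brevity, and all parameter values are hypothetical, since the report does not reproduce the Air Force's satellite parameters.

```python
import math
import random

def normal_sf(t, mu, sigma):
    # Wear-out reliability: survival function of a normal distribution
    return 0.5 * math.erfc((t - mu) / (sigma * math.sqrt(2)))

def weibull_sf(t, scale, shape):
    # Random-failure reliability: survival function of a Weibull distribution
    return math.exp(-((t / scale) ** shape)) if t > 0 else 1.0

def reliability(t, mu, sigma, scale, shape):
    # Total reliability curve: product of the wear-out and random curves
    return normal_sf(t, mu, sigma) * weibull_sf(t, scale, shape)

def monte_carlo(satellites, t, trials=10_000, threshold=24):
    """Estimate the probability that at least `threshold` satellites
    are still operational at time t (e.g., months from a reference date)."""
    hits = 0
    for _ in range(trials):
        operational = sum(
            1 for params in satellites
            if random.random() < reliability(t, *params)
        )
        if operational >= threshold:
            hits += 1
    return hits / trials
```

Evaluating `monte_carlo` over a grid of future dates would trace an availability curve like those described; the 2-year delay scenario then amounts to shifting each GPS III satellite's reliability curve back by 2 years and rerunning the simulation.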
We conducted several runs of our simulation—each run consisting of 10,000 trials—and generated “sawtoothed” curves depicting the probability that at least 21, 24, 27, and 30 satellites would still be operational as a function of time. We compared the results for a 24-satellite constellation with a similar Monte Carlo simulation that the Aerospace Corporation performed for the Air Force. We confirmed that our simulation produces results that are within about 2 percent of the Aerospace Corporation’s results for all times between October 2008 and April 2024. Using 10,000 trials per run, the results of different runs of the same Monte Carlo simulation can vary by about 1 to 2 percent; consequently we concluded that we had successfully replicated the Aerospace Corporation’s results. We then used our Monte Carlo simulation model to examine the impact of a 2-year delay in the launch of all GPS III satellites. We moved each GPS III launch date back by 2 years. We then reran the model and calculated new probabilities that at least 18, 21, and 24 satellites would still be operational as a function of time. To assess impacts of a potential GPS service disruption on particular types of military and civil GPS users, we interviewed numerous military and civil GPS representatives and reviewed studies provided by civil agencies. To assess the coordination and collaboration among federal agencies and the broader GPS community, and to determine the organization of the PNT community, we analyzed documents from and conducted interviews with officials in Washington, D.C. at the Office of the Assistant Secretary of Defense for Networks and Information Integration; SAF/USA (Air Force Directorate of Space Acquisitions); National Aeronautics and Space Administration; the Departments of Transportation, State, Commerce, and Homeland Security; the Space-Based National PNT Coordination Office; and the U.S. GPS Industry Council. 
We also interviewed a private sector GPS expert at Stanford University, and GPS industry representatives. To analyze how the U.S. government coordinates with foreign countries on GNSS (Global Navigation Satellite Systems), we met with representatives of and reviewed documents from the U.S. Department of State and European Space Agency (ESA) in Washington, D.C. To obtain information on efforts by Australia, China, Japan, and Russia to develop GNSS, we met with Department of State officials, reviewed materials provided by these countries’ representatives at GNSS conferences, and consulted the official government space agency Web sites. We also traveled to Europe to meet with experts in satellite navigation at the European Space Agency, French Space Agency (CNES), European Commission Directorate-General for Energy and Transport Satellite Navigation Unit, and European GNSS industry experts. In addition, we attended a conference in Berlin, Germany to learn about international coordination on PNT systems and applications. We conducted this performance audit from October 2007 to April 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the Global Positioning System (GPS), there are other space- based global navigation satellite systems (GNSS) in operation and in development. Russia has a system, GLONASS (Global Navigation Satellite System). There are currently 20 GLONASS satellites in orbit, and the Russians expect to have a full constellation of 24 satellites in orbit by 2010 and ultimately to expand to a 30-satellite constellation. The European Union (EU) is developing its own GNSS program, Galileo. 
Originally started as a public-private partnership, the program is now completely funded by the public sector. The EU has 2 test satellites in orbit now and plans to have a 27-satellite constellation with 3 spares by 2013. China also is in the process of developing its own GNSS, Compass (also called Beidou). China currently has 3 satellites in orbit, and plans to increase the constellation for coverage of the Asia-Pacific region by 2010 and for worldwide coverage by 2015. Table 5 lists the non-U.S. global navigation satellite systems currently in development. During 2007, the Department of State signed joint statements of cooperation in the use of the Global Positioning System (GPS) with Australia and India. The Australia joint statement expresses the parties' intention to promote interoperability between GPS and Australia's Ground-based Regional Augmentation System and Ground Based Augmentation System. The India joint statement expresses the parties' intention to promote interoperability between GPS and India's GPS and GEO-Augmented Navigation system. An executive agreement with the European Community and its member states has been in effect since 2004 that expresses the intention that GPS and Galileo will be interoperable at the user level for the benefit of civil users around the world. This cooperation has resulted in working groups that are reviewing technical, trade, and security issues. The technical issues described in the executive agreement involve GPS-Galileo radio frequency compatibility and interoperability and the design and development of the next generation of systems. For trade, a working group is determining how to maintain nondiscriminatory trade practices in the global market for goods and services related to space-based PNT, and a group was appointed to review the security issues concerning GPS and Galileo. 
The United States and Russia initiated cooperation in 2004, with the parties expressing their intent to work together to maintain and promote civil interoperability at the user level between GPS and Russia’s GLONASS system. Two working groups have been established to address: (1) radio frequency compatibility and interoperability for enhanced PNT and (2) technical interoperability between the search-and-rescue capabilities planned for GPS and GLONASS. The United States and Japan have had a relationship since signing a joint statement in 1998. In the joint statement, the parties expressed their intent to promote and facilitate civilian uses of GPS. Japan is developing MTSAT- based Satellite Augmentation System (MSAS), a geostationary satellite system similar to the U.S. Wide Area Augmentation System. The United States and Japan most recently met in November 2008 to discuss the civil use of GPS and Japan’s MSAS and Quasi-Zenith Satellite System. GAO DRAFT REPORT DATED MARCH 12, 2009 GAO-09-325 (GAO CODE 120696) “THE GLOBAL POSITIONING SYSTEM: SIGNIFICANT CHALLENGES IN SUSTAINING AND UPGRADING WIDELY USED CAPABILITIES” RECOMMENDATION 1: The GAO recommends that the Secretary of Defense appoint a single authority to oversee the development of the Global Positioning System (GPS) system, including space, ground, and user assets, to ensure that the program is well executed and resourced and that potential disruptions are minimized. (p. 43/GAO Draft Report) DOD RESPONSE: Concur with comment. The Department has recognized the importance of centralizing authority to oversee the continuing synchronized evolution of the GPS. 
To that end, the Deputy Secretary of Defense has reaffirmed that the Assistant Secretary of Defense for Networks and Information Integration (ASD(NII)) is the Department's Principal Staff Assistant to oversee Positioning, Navigation, and Timing, and, specifically, is designated with authority and responsibility for all aspects of the Global Positioning System (GPS). This designation is contained in Department of Defense Directive (DoDD) 4650.05, issued on February 19, 2008. A formal Department of Defense Instruction is now in final coordination to further define the oversight processes to be employed in executing DoDD 4650.05, and completion is expected by May 2009. Further, under oversight of the ASD(NII), the U.S. Air Force is the single acquisition agent with responsibility for synchronized modernization of GPS space, ground control, and military user equipment. The Air Force acquires and operates the GPS space and control segments and provides the fundamental system design and security requirements necessary for acquisition of GPS user equipment and applications in support of diverse missions across the Department. Given the diversity of platforms and equipment form factors involved, it is impossible for the Air Force to unilaterally produce a "one-size-fits-all" solution applicable to all DoD missions. RECOMMENDATION 2: The GAO recommends that the Secretary of Defense, as one of the Position Navigation and Timing executive committee co-chairs, address, if weaknesses are found, civil agency concerns for developing requirements and determine mechanisms for improving collaboration and decision making and strengthening civil agency participation. (p. 43/GAO Draft Report) DOD RESPONSE: Concur with comment. The Department is aware that we employ a rigorous requirements process in support of our extensive operational and acquisition responsibilities and that the process is a source of frustration for civil agencies without similar processes in place. 
In an effort to address the issue, we have worked with the civil agencies to put in place a GPS Interagency Requirements Plan, jointly approved by the Vice Chairman of the Joint Chiefs of Staff, who is in charge of our process, and the Department of Transportation (DOT), acting on behalf of all civil agencies. Further, we are now in the process of jointly coordinating the Charter for an Interagency Forum for Operational Requirements (IFOR) to provide meeting venues to identify, discuss, and validate civil or dual-use GPS requirements for inclusion in the DoD GPS acquisition process. Finally, we sponsor educational outreach opportunities for civil agencies to become more fully acquainted with the DoD requirements process, including a day-long "Requirements Process Summit" jointly conducted by the Joint Staff and DOT on April 29, 2008. We will continue to seek ways to improve civil agency understanding of the DoD requirements process and work to strengthen civil agency participation. In addition to the contact named above, key contributors to this report were Art Gallegos (Assistant Director), Greg Campbell, Jennifer Echard, Maria Durant, Anne Hobson, Laura Hook, Sigrid McGinty, Angela Pleasants, Jay Tallon, Hai Tran, and Alyssa Weir. Best Practices: Increased Focus on Requirements and Oversight Needed to Improve DOD's Acquisition Environment and Weapon System Quality. GAO-08-294. Washington, D.C.: February 1, 2008. Best Practices: An Integrated Portfolio Management Approach to Weapon System Investments Could Improve DOD's Acquisition Outcomes. GAO-07-388. Washington, D.C.: March 30, 2007. Best Practices: Stronger Practices Needed to Improve DOD Technology Transition Processes. GAO-06-883. Washington, D.C.: September 14, 2006. Best Practices: Better Support of Weapon System Program Managers Needed to Improve Outcomes. GAO-06-110. Washington, D.C.: November 1, 2005. Best Practices: Setting Requirements Differently Could Reduce Weapon Systems' Total Ownership Costs. 
GAO-03-57. Washington, D.C.: February 11, 2003. Best Practices: Capturing Design and Manufacturing Knowledge Early Improves Acquisition Outcomes. GAO-02-701. Washington, D.C.: July 15, 2002. Best Practices: Better Matching of Needs and Resources Will Lead to Better Weapon System Outcomes. GAO-01-288. Washington, D.C.: March 8, 2001. Best Practices: A More Constructive Test Approach Is Key to Better Weapon System Outcomes. GAO/NSIAD-00-199. Washington, D.C.: July 31, 2000. Best Practices: DOD Training Can Do More to Help Weapon System Programs Implement Best Practices. GAO/NSIAD-99-206. Washington, D.C.: August 16, 1999. Best Practices: Better Management of Technology Development Can Improve Weapon System Outcomes. GAO/NSIAD-99-162. Washington, D.C.: July 30, 1999. Best Practices: Successful Application to Weapon Acquisition Requires Changes in DOD’s Environment. GAO/NSIAD-98-56. Washington, D.C.: February 24, 1998. Best Practices: Commercial Quality Assurance Practices Offer Improvements for DOD. GAO/NSIAD-96-162. Washington, D.C.: August 26, 1996.
The Global Positioning System (GPS), which provides positioning, navigation, and timing data to users worldwide, has become essential to U.S. national security and a key tool in an expanding array of public service and commercial applications at home and abroad. The United States provides GPS data free of charge. The Air Force, which is responsible for GPS acquisition, is in the process of modernizing GPS. In light of the importance of GPS, the modernization effort, and international efforts to develop new systems, GAO was asked to undertake a broad review of GPS. Specifically, GAO assessed progress in (1) acquiring GPS satellites, (2) acquiring the ground control and user equipment necessary to leverage GPS satellite capabilities, and evaluated (3) coordination among federal agencies and other organizations to ensure GPS missions can be accomplished. To carry out this assessment, GAO's efforts included reviewing and analyzing program documentation, conducting its own analysis of Air Force satellite data, and interviewing key military and civilian officials. It is uncertain whether the Air Force will be able to acquire new satellites in time to maintain current GPS service without interruption. If not, some military operations and some civilian users could be adversely affected. In recent years, the Air Force has struggled to successfully build GPS satellites within cost and schedule goals; it encountered significant technical problems that still threaten its delivery schedule; and it struggled with a different contractor. As a result, the current IIF satellite program has overrun its original cost estimate by about $870 million and the launch of its first satellite has been delayed to November 2009--almost 3 years late. 
Further, while the Air Force is structuring the new GPS IIIA program to prevent repeating mistakes made on the IIF program, the Air Force is aiming to deploy the next generation of GPS satellites 3 years faster than the IIF satellites. GAO's analysis found that this schedule is optimistic, given the program's late start, past trends in space acquisitions, and challenges facing the new contractor. Of particular concern is leadership for GPS acquisition, as GAO and other studies have found the lack of a single point of authority for space programs and frequent turnover in program managers have hampered requirements setting, funding stability, and resource allocation. If the Air Force does not meet its schedule goals for development of GPS IIIA satellites, there will be an increased likelihood that in 2010, as old satellites begin to fail, the overall GPS constellation will fall below the number of satellites required to provide the level of GPS service that the U.S. government commits to. Such a gap in capability could have wide-ranging impacts on all GPS users, though there are measures the Air Force and others can take to plan for and minimize these impacts. In addition to risks facing the acquisition of new GPS satellites, the Air Force has not been fully successful in synchronizing the acquisition and development of the next generation of GPS satellites with the ground control and user equipment, thereby delaying the ability of military users to fully utilize new GPS satellite capabilities. Diffuse leadership has been a contributing factor, given that there is no single authority responsible for synchronizing all procurements and fielding related to GPS, and funding has been diverted from ground programs to pay for problems in the space segment. DOD and others involved in ensuring GPS can serve communities beyond the military have taken prudent steps to manage requirements and coordinate among the many organizations involved with GPS. 
However, GAO identified challenges to ensuring that civilian requirements are met and that GPS remains compatible with other new, potentially competing global space-based positioning, navigation, and timing systems.
MACs process and pay claims, conduct prepayment and postpayment claim reviews, and provide Medicare fee-for-service billing education to providers in their jurisdictions. For each type of Medicare claim, the number of jurisdictions and the number of MACs that handle that type of claim vary. For Medicare Part A and B claims—handled by A/B MACs—there are 12 jurisdictions in which 8 MACs operated at the time of our review. Three of these MACs also processed home health and hospice claims in addition to Medicare A/B claims and therefore served as MACs for the four home health and hospice (HH+H) jurisdictions. For durable medical equipment (DME), including orthotics, prosthetics, and supplies—handled by DME MACs—there are four jurisdictions in which two MACs operated at the time of our review. A MAC can operate in more than one jurisdiction and handle more than one type of Medicare claim. For example, a MAC can operate as an A/B MAC in one jurisdiction and a DME MAC in another. (For maps of the 20 jurisdictions, see app. I.) The provider education department is part of a MAC's provider customer service program, which is intended to provide timely information, education, and training to providers on Medicare fee-for-service billing, as outlined in CMS's provider customer service program manual. The costs for MACs' provider education departments average 2.1 to 3.3 percent of their total annual costs. MACs' provider education department efforts are aimed at educating providers and their staff on Medicare program fundamentals, national and local policies and procedures, new Medicare initiatives, significant changes to the Medicare program, and issues identified through data analyses. Provider education departments provide education through a variety of methods, such as webinars, online tutorials available on-demand, 'ask-the-contractor' teleconferences, seminars at national conferences and association meetings, and website articles. 
These efforts are designed to educate many providers at the same time or individual providers via one-to-one education. Attendance at provider education department events is voluntary on the part of the providers. MACs are required to report their provider education department efforts monthly into the Provider Customer Service Program Contractor Information Database that CMS oversees and maintains. CMS also requires the MACs to submit a semi-annual Provider Customer Service Program Activities Report that summarizes and recounts Provider Customer Service Program activities, process improvements, and best practices during the reporting period. MACs’ medical review departments identify areas vulnerable to improper billing, review medical records to determine whether Medicare claims are medically necessary and properly documented, conduct one-to-one education as a result of claim reviews, and provide referrals to the provider education department for further education. This department frequently works with the provider education department to conduct educational efforts focusing on correcting provider billing (see fig. 1). CMS requires each MAC to identify areas vulnerable to improper billing in its jurisdiction(s) to guide MAC efforts in medical review and provider education. Areas identified by the MACs are listed in their IPRS reports. MACs’ medical review departments identify these areas by analyzing various internal and external data, such as data from CMS’s Comprehensive Error Rate Testing (CERT) program, issues identified by recovery auditors, Office of Inspector General reports, comparative billing reports, and internal MAC data. The objective of the CERT program is to estimate the payment accuracy of the Medicare fee-for-service program, which results in a Medicare fee-for-service improper payment rate. Improper payment rates are computed at multiple levels: nationally, by MAC, by service, and by provider type. 
According to CMS’s provider customer service program manual, MACs with improper payment rates a certain percentage above HHS’s target for determining progress toward one of its Government Performance and Results Act of 1993 (GPRA) goals may be required by CMS to submit quarterly or monthly provider education department status updates. However, CMS officials told us that they have never required any MAC to submit these quarterly or monthly status updates and they are considering removing this requirement from the manual. The probe and educate reviews are a CMS strategy to determine the extent to which providers understand recent policy changes for certain areas vulnerable to improper billing and help providers improve billing in these areas through a review of a sample of claims from every provider. Under the reviews, MAC medical review departments, with varying levels of coordination with the provider education departments, sample and review a certain number of claims from each provider to determine whether the claims were billed and documented properly. These reviews are resource intensive, because they involve manual review of associated medical records by trained medical review staff. Because of the resources involved, manual reviews are done infrequently in the Medicare program, with less than 1 percent of all Medicare claims receiving manual review. Following the first round of review, providers are informed of their results and those who billed and documented a specified percentage of claims improperly are offered voluntary one-to-one education to learn why each claim was approved or denied. Providers that billed and documented a specified percentage of claims properly are excluded from subsequent rounds of review, if any. MACs may repeat this process for subsequent rounds of review using a new sample of claims. (See fig. 2.) 
In addition to the areas vulnerable to improper billing identified by the MACs, CMS identified two areas vulnerable to improper billing—short-stay hospital visits and home health services—and required MACs to conduct probe and educate reviews for each of these areas. The first probe and educate review examined short-stay hospital claims to determine the extent to which certain hospitals were properly applying the “two-midnight rule” that CMS implemented effective October 1, 2013. Under the rule, hospital stays for Medicare beneficiaries spanning two or more midnights should generally be billed as inpatient hospital claims. Conversely, hospital stays not expected to span at least two midnights should generally be billed as outpatient hospital claims. From October 1, 2013, through September 30, 2015, 64,776 short-stay inpatient hospital claims were reviewed by the MACs over three rounds. Beginning on October 15, 2015, quality improvement organizations began conducting these reviews at the direction of CMS. At the direction of CMS, MACs began conducting probe and educate reviews of home health agency claims on October 1, 2015, for episodes of care that occurred on or after August 1, 2015. Round 1 was completed as of September 30, 2016, and the second round began on December 15, 2016. The purpose of these reviews is to ensure that home health agencies understand the new patient certification requirements that became effective January 1, 2015. These requirements stipulate that the referring physician, also referred to as the ordering or referring provider, must certify a patient’s eligibility for home health services as a condition of payment. As part of the certification, the referring provider must document that a face-to-face patient encounter occurred within a certain time frame. In addition, the patient’s medical record must support the certification of eligibility. 
MAC officials state that their provider education department efforts focus on areas vulnerable to improper billing. We found that these efforts are subject to limited oversight by CMS. Additionally, CMS does not require MACs to educate referring providers on documentation requirements for DME and home health services. MAC officials told us that their provider education departments focus education on areas vulnerable to improper billing, including those they have identified and listed in their annual IPRS reports. There were 278 areas listed in the IPRS reports we reviewed, and based on our analysis, some of these areas, such as skilled nursing facilities, ambulance services, and blood glucose monitors, were identified by a majority of MACs. A detailed description of the problem areas may also be identified in these IPRS reports, as illustrated by the examples below. Part A. A majority of Part A MACs reported claims from skilled nursing facilities and inpatient rehabilitation facilities as vulnerable to improper billing. Examples of reported problem areas within skilled nursing facilities included claims for individuals using an “ultrahigh” level of therapy and episodes of care greater than 90 days. Part B. A majority of Part B MACs reported claims for evaluation and management and ambulance services as areas vulnerable to improper billing. Examples of reported problem areas within the evaluation and management category included the incorrect level of coding for office visits, hospital visits, emergency room visits, and home visits for assisted living and nursing homes. DME. A majority of DME MACs reported claims for glucose monitors, urological supplies, continuous positive airway pressure (CPAP) devices, oxygen, wheelchair options and accessories, lower limb prosthetics, and immunosuppressive drugs as areas vulnerable to improper billing. 
An example of a reported problem area with oxygen billing was that the beneficiary medical record documentation did not provide support for symptoms that might be expected to improve with oxygen therapy. HH+H. Half of the HH+H MACs reported claims for home health therapy services and home health or hospice stays that were longer than average as areas vulnerable to improper billing. An example of a reported problem area with home health therapy services included claims from home health providers reporting a high average number of therapy visits for their patients as compared to their peers within the state and the MAC’s jurisdiction. CMS collects limited information on MACs’ provider education department efforts that focus on areas vulnerable to improper billing. CMS officials told us that they oversee the extent to which MACs’ provider education department efforts focus on areas vulnerable to improper billing by reviewing MACs’ IPRS reports. Although the IPRS reports focus mainly on how the medical review departments will address the areas identified as vulnerable to improper billing, CMS’s instructions to the MACs state that they should also include information on related provider education department activities or provider education department referrals. However, the IPRS reports we reviewed lacked specifics indicating how provider education department efforts focused on 74 percent of the 278 MAC-identified areas vulnerable to improper billing. We considered a provider education department effort to be specific if it included one or more of the following: the month, day, and year the event occurred or would occur; the type or number of providers attending; or a description of the event. As an example of a provider education department description that met our definition of ‘specific,’ one MAC reported its provider education department would conduct webinars focused on the top 5 to 10 denial reasons for oxygen equipment in the upcoming year. 
This MAC’s IPRS report in our analysis listed specific provider education department efforts for all areas vulnerable to improper billing. However, 74 percent of the areas vulnerable to improper billing listed in the 14 IPRS reports we reviewed lacked specifics—48 percent of the time the provider education department efforts listed were not specific and 26 percent of the time no provider education department efforts were included. As an example of a provider education department description that was not specific, one MAC reported that the medical review department would make provider referrals to its provider education department “as needed” for inpatient hospital and rehabilitation facilities admissions, but gave no additional detail (see fig. 3). According to CMS officials, they do not require IPRS reports to have a certain level of specificity regarding how provider education department efforts focus on areas vulnerable to improper billing because they do not want to be overly prescriptive regarding MACs’ provider education department efforts. As a result, CMS receives limited and varying degrees of information on the extent to which provider education department efforts are focused on the MAC-identified areas vulnerable to improper billing. CMS’s collection of limited information is inconsistent with federal internal control standards related to information and communications, which state that management should use quality information to achieve the entity’s objectives—CMS’s objective in this instance being the education of providers about proper billing. Unless CMS requires sufficient MAC provider education department reporting, it cannot ensure that MACs’ provider education department efforts are focused on areas vulnerable to improper billing. 
CMS does not require A/B MACs to educate referring providers on documentation requirements for ordering DME and home health services because referring providers do not bill for any DME or home health services on these orders. DME suppliers and home health agencies are responsible for submitting a proper written order from the referring provider to receive payment, and DME and HH+H MACs are required to educate DME suppliers and home health agencies—but not the referring provider—on a proper written order. However, when a DME supplier or home health agency accepts a written order, its payment may be denied if the claim is reviewed and the referring provider’s medical record documentation does not support the supply or service provided. See figure 4 for an example in the case of DME. Some MAC officials told us they have started working with other MACs voluntarily to provide education to referring providers regarding DME and home health services documentation requirements in some jurisdictions, although CMS has not specifically required this collaboration. As an example, officials from one DME MAC told us that they and three A/B MACs that operate within its jurisdiction co-hosted two webinars on documentation requirements when ordering durable medical equipment and prosthetics and orthotics in September 2015; these webinars focused on the medical records and orders that are part of the supplier requirement for documentation. However, this voluntary collaboration does not ensure that referring providers are always being educated. For example, two A/B MACs reported that they have done little collaboration with the HH+H MAC that serves their jurisdiction for referring providers on proper billing documentation for home health services. CMS officials stated that they have not explicitly required the MACs to work together on this activity because it has not risen to a level of significant concern. 
If education were provided, officials from two DME MACs told us there would still be a lack of incentive for referring providers to bill properly for DME and home health services because they do not experience any repercussions for insufficient documentation—one type of improper billing. Instead, when DME or home health claims are denied due to insufficient documentation, from either the supplier or the referring provider, the DME or home health provider loses the payment, while the referring provider does not. This education gap is problematic because insufficient documentation is the most common reason for improper payments for home health services and DME, which have high improper payment rates. As reported for fiscal year 2016, DME had a 46.3 percent improper payment rate with the Medicare program paying an estimated $3.7 billion improperly; home health services had a 42.0 percent improper payment rate with the program paying an estimated $7.7 billion improperly (see fig. 5). Of these improper payment amounts, 81 percent and 96 percent were the result of insufficient documentation for DME and home health services, respectively. Although the DME improper payment rate has decreased somewhat in recent years, both the home health and DME programs’ improper payment rates remain higher than the overall Medicare fee-for-service improper payment rate of 11.0 percent. Because referring physicians do not receive education from MACs for the required documentation to support referrals for DME and home health services, the risk is increased that DME suppliers or home health agencies will improperly submit claims with insufficient documentation from referring providers. Although both the A/B and DME MAC contracts contain a requirement for the MACs to share ideas and coordinate their efforts as necessary, they do not explicitly require collaboration between these MACs to address this education gap for referring providers. 
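The shares cited above imply that most of these improper dollars trace back to insufficient documentation. A back-of-the-envelope check in Python, using only the rounded fiscal year 2016 figures reported here, makes the dollar impact concrete:

```python
# Fiscal year 2016 improper payment amounts as reported (in billions of dollars)
dme_improper = 3.7   # DME improper payments
hh_improper = 7.7    # home health improper payments

# Shares attributed to insufficient documentation, as reported
dme_insufficient_share = 0.81
hh_insufficient_share = 0.96

dme_insufficient = dme_improper * dme_insufficient_share
hh_insufficient = hh_improper * hh_insufficient_share

print(f"DME insufficient documentation: ~${dme_insufficient:.1f} billion")          # ~$3.0 billion
print(f"Home health insufficient documentation: ~${hh_insufficient:.1f} billion")   # ~$7.4 billion
```

Roughly $3.0 billion of the DME and $7.4 billion of the home health improper payments are thus attributable to insufficient documentation, underscoring the stakes of the referring-provider education gap.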
The absence of a requirement for MACs to educate referring providers about proper documentation for DME and home health claims is inconsistent with federal internal control standards, which state that in order to achieve an entity’s objectives, management should assign responsibility and delegate authority. Without explicitly requiring that MACs educate referring providers, the billing errors that result from referring providers’ insufficient documentation may persist. Although CMS officials consider the MACs’ short-stay hospital probe and educate reviews to be a success, they did not measure the effectiveness of this new strategy in reducing improper billing. CMS officials consider the reviews to be a success based on feedback from providers who were happy with the education they received and based on the reduction in the number of providers from the first to third rounds who were billing and documenting claims improperly. We found that the effectiveness of the MACs’ short-stay hospital probe and educate reviews cannot be confirmed because CMS did not establish performance metrics to determine whether the probe and educate reviews were effective in reducing improper billing. Although CMS stated the objective of the reviews was to determine the extent to which providers understood recent policy changes for certain services and were billing properly for those services, CMS officials told us they did not establish performance metrics that defined their objectives in measurable terms and would allow them to evaluate whether they met those objectives—for instance, specifying the percentage decrease they would want to see in the number of providers reviewed from the first to the third round. 
This is inconsistent with federal internal control standards that specify that management should define objectives in specific and measurable terms, establish appropriate performance measures for the defined objectives, and conduct ongoing monitoring to evaluate whether they are meeting those objectives. We reviewed the data provided by the MACs to CMS about the inpatient short-stay probe and educate reviews and found that the reviews may not have been a clear success. For instance, the percentage of providers who continued to require review remained high throughout the three rounds—over 90 percent. Additionally, the percentage of claims denied in each round also remained high throughout the three rounds (see table 1). CMS officials told us that because providers billing properly were removed after each round, they could not determine how much the overall denial rate effectively decreased from the first to the third round, noting that the decrease in the claims denial rate could be greater than results indicate. However, the number of providers removed after each round was small. It is too early to say whether the home health probe and educate reviews are successful because only one round of reviews had been completed at the time of our review. CMS officials told us they have not established specific performance metrics for the home health reviews either. The probe and educate reviews are resource-intensive. Though their costs have not been quantified by CMS, the reviews require manual assessments of thousands of claims, as well as the offer of one-to-one education from the MACs to certain providers. The importance of measuring the effectiveness of these probe and educate reviews is highlighted by their resource-intensive nature, as well as by the fact that the percentage of providers requiring review and claims denied remained high throughout the three rounds of the probe and educate reviews of short inpatient hospital stays. 
Therefore, without performance metrics, CMS cannot determine whether future probe and educate reviews would be effective in reducing improper billing. The MACs’ provider education departments play an important role in reducing the rate of improper payments by educating Medicare providers on coverage and payment policies so that they can bill properly. However, CMS has missed opportunities to improve the effectiveness of those efforts and its oversight of them. CMS needs sufficient reporting from the MACs to determine if their provider education department efforts are focusing on areas vulnerable to improper billing. Lack of detail in the MACs’ IPRS reporting provides CMS with insufficient information for oversight. Without sufficient reporting, CMS cannot ensure that the MACs are focusing their provider education department efforts on reducing areas vulnerable to improper billing. In order to reduce the high improper payment rates for home health and DME, education on proper documentation for providers who refer their patients for DME and home health services is necessary; however, MACs are not required to provide this education to the referring providers. To provide this education, collaboration is needed between the A/B MACs, which are the primary contacts for the referring providers, and the DME and HH+H MACs, which have expertise in the DME and home health billing areas. Without requiring MACs to work together to educate referring providers, CMS has little assurance that referring providers are being educated in order to help reduce improper billing in DME and home health services. Finally, CMS has not determined the effectiveness of the probe and educate reviews. CMS does not have sufficient information to indicate whether the reviews help to reduce improper billing; establishing performance metrics would help CMS determine if the reviews are effective in doing so. 
Without performance metrics, little assurance exists that the probe and educate reviews are effective in reducing improper billing and whether they should be used for additional areas vulnerable to improper billing in the future. To ensure MACs’ provider education efforts are focused on areas vulnerable to improper billing and to strengthen CMS’s oversight of those efforts, we recommend that CMS take the following three actions: 1. CMS should require sufficient detail in MAC reporting to allow CMS to determine the extent to which MACs’ provider education department efforts focus on areas identified as vulnerable to improper billing. 2. CMS should explicitly require that A/B, DME, and HH+H MACs work together to educate referring providers on documentation requirements for DME and home health services. 3. For any future probe and educate reviews, CMS should establish performance metrics that will help the agency determine the reviews’ effectiveness in reducing improper billing. We provided a draft of this product to HHS for comment. In its written comments, which are reprinted in appendix II, HHS concurred with our recommendations. HHS also provided technical comments, which we incorporated as appropriate. HHS acknowledged the role of referring providers in ensuring proper billing for Medicare services, stating it will ensure the MACs work together to educate referring providers on documentation requirements for DME and home health services. Further, HHS noted that it will work with the MACs on providing additional information related to their provider education department efforts. HHS also noted it is currently developing performance metrics to help measure the effectiveness of future probe and educate reviews. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. 
At that time, we will send copies to the Secretary of Health and Human Services, the Administrator of the Centers for Medicare & Medicaid Services, and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or kingk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Kathleen M. King, (202) 512-7114 or kingk@gao.gov. In addition to the contact named above, Lori Achman, Assistant Director; Teresa Tam, Analyst-in-Charge; Cathleen Hamann; Deborah Linares; Vikki Porter; and Jennifer Whitworth made key contributions to this report.
For fiscal year 2016, HHS reported an estimated 11 percent improper payment rate and $41.1 billion in improper payments in the Medicare fee-for-service program. To help ensure payments are made properly, CMS contracts with MACs to conduct provider education efforts. CMS cites the MACs’ provider education department efforts as an important way to reduce improper payments. GAO was asked to examine MACs’ provider education department efforts and the results of MACs’ probe and educate reviews. This report examines (1) the focus of MACs’ provider education department efforts to help reduce improper billing and CMS oversight of these efforts and (2) the extent to which CMS measured the effectiveness of the MAC probe and educate reviews. GAO reviewed and analyzed CMS and MAC documents and MAC probe and educate review data for 2013-2016; interviewed CMS and MAC officials; and assessed CMS’s oversight activities against federal internal control standards. Medicare administrative contractors (MACs) process Medicare claims, identify areas vulnerable to improper billing, and develop general education efforts focused on these areas. MAC officials state that their provider education departments focus their educational efforts on areas vulnerable to improper billing; however, oversight of and requirements for these efforts by the Centers for Medicare & Medicaid Services (CMS), the agency within the Department of Health and Human Services (HHS) that administers Medicare, are limited. CMS collects limited information about how these efforts focus on the areas MACs identify as vulnerable to improper billing. According to CMS officials, the agency has not required the MACs to provide specifics on their provider education department efforts in these reports because it does not want to be overly prescriptive regarding MAC provider education department efforts. Federal internal control standards state that management should use quality information to achieve the entity's objectives. 
Unless CMS requires sufficient MAC provider education department reporting, it cannot ensure that the departments' efforts are focused on areas vulnerable to improper billing. CMS does not require MACs to educate providers who refer patients for durable medical equipment (DME), including prosthetics, orthotics, and supplies, and home health services on proper billing documentation, nor does it explicitly require MACs to work together to provide this education. HHS has reported that a large portion of the high improper payment rates in these services is related to insufficient documentation. The absence of a requirement for MACs to educate referring providers about proper documentation for DME and home health claims is inconsistent with federal internal control standards, which state that in order to achieve an entity's objectives, management should assign responsibility and delegate authority. Without an explicit requirement from CMS to educate these referring providers, billing errors due to insufficient documentation may persist. Short-stay hospital and home health claims have been the focus of the MACs' probe and educate reviews--a CMS strategy to help providers improve billing in certain areas vulnerable to improper billing. Under the probe and educate reviews, MACs review a sample of claims from every provider and then offer individualized education to reduce billing errors. CMS officials consider the completed short-stay hospital reviews to be a success based on anecdotal feedback from providers. However, the effectiveness of these reviews cannot be confirmed because CMS did not establish performance metrics to determine whether the reviews were effective in reducing improper billing. Furthermore, GAO found the percentage of claims denied remained high throughout the three rounds of the review process, despite the offer of education after each round. 
Federal internal control standards state that management should define objectives in specific and measurable terms and evaluate results against those objectives. Without performance metrics, CMS cannot determine whether future probe and educate reviews would be effective in reducing improper billing. GAO recommends that CMS (1) require sufficient detail in MAC reporting to determine the extent to which MACs' provider education department efforts focus on vulnerable areas, (2) explicitly require MACs to work together to educate referring providers on proper documentation for DME and home health services, and (3) establish performance metrics for future probe and educate reviews. HHS concurred with GAO's recommendations.
Financial assistance to help students and families pay for postsecondary education has been provided for many years through student grant and loan programs authorized under Title IV of the Higher Education Act of 1965, as amended. Examples of these programs include Pell Grants for low-income students, PLUS loans to parents and graduate students, and Stafford loans. Much of this aid has been provided on the basis of the difference between a student’s cost of attendance and an estimate of the ability of the student and the student’s family to pay these costs, called the expected family contribution (EFC). The EFC is calculated based on information provided by students and parents on the Free Application for Federal Student Aid (FAFSA). Federal law establishes the criteria that students must meet to be considered independent of their parents for the purpose of financial aid and the share of family and student income and assets that are expected to be available for the student’s education. In fiscal year 2007, the Department of Education made available approximately $15 billion in grants and another $65 billion in Title IV loan assistance. Title IV also authorizes programs funded by the federal government and administered by participating higher education institutions, including the Supplemental Educational Opportunity Grant (SEOG), Perkins loans, and federal work-study aid, collectively known as campus-based aid. Table 1 provides brief descriptions of the Title IV programs that we reviewed in our 2005 report and includes two programs—Academic Competitiveness Grants and National Science and Mathematics Access to Retain Talent Grants—that were created since that report was issued. Postsecondary assistance also has been provided through a range of tax preferences, including postsecondary tax credits, tax deductions, and tax-exempt savings programs. 
For example, the Taxpayer Relief Act of 1997 allows eligible tax filers to reduce their tax liability by receiving, for tax year 2007, up to a $1,650 Hope tax credit or up to a $2,000 Lifetime Learning tax credit for tuition and qualified related expenses paid for a single student. According to the Office of Management and Budget, the fiscal year 2007 federal revenue loss estimate of the postsecondary tax preferences that we reviewed was $8.7 billion. Tax preferences discussed as part of our 2005 report and December 2006 testimony include the following: Lifetime Learning Credit—income-based tax credit claimed by tax filers on behalf of students enrolled in one or more postsecondary education courses. Hope Credit—income-based tax credit claimed by tax filers on behalf of students enrolled at least half-time in an eligible program of study and who are in their first 2 years of postsecondary education. Student Loan Interest Deduction—income-based tax deduction claimed by tax filers on behalf of students who took out qualified student loans while enrolled at least half-time. Tuition and Fees Deduction—income-based tax deduction claimed by tax filers on behalf of students who are enrolled in one or more postsecondary education courses and have either a high school diploma or a General Educational Development (GED) credential. Section 529 Qualified Tuition Programs—College Savings Programs and Prepaid Tuition Programs—non-income-based programs that provide favorable tax treatment to investments and distributions used to pay the expenses of future or current postsecondary students. Coverdell Education Savings Accounts—income-based savings program providing favorable tax treatment to investments and distributions used to pay the expenses of future or current elementary, secondary, or postsecondary students. As figure 1 demonstrates, the use of tax preferences has increased since 1997, both in absolute terms and relative to the use of Title IV aid. 
Postsecondary student financial assistance provided through programs authorized under Title IV of the Higher Education Act and the tax code differ in timing of assistance, the populations that receive assistance, and the responsibility of students and families to obtain and use the assistance. Title IV programs and education-related tax preferences differ significantly in when eligibility is established and in the timing of the assistance they provide. Title IV programs generally provide benefits to students while they are in school. Education-related tax preferences, on the other hand, (1) encourage saving for college through tax-exempt saving, (2) assist enrolled students and their families in meeting the current costs of postsecondary education through credits and tuition deductions, and (3) assist students and families repaying the costs of past postsecondary education through a tax deduction for student loan interest paid. While Title IV programs and tax preferences assist many students and families, program and tax rules affect eligibility for such assistance. These rules also affect the distribution of Title IV aid and the assistance provided through tax preferences. As a result, the beneficiaries of Title IV programs and tax preferences differ. Title IV programs generally have rules for calculating grant and loan assistance that give consideration to family and student income, assets, and college costs in the awarding of financial aid. For example, Pell Grant awards are calculated by subtracting the student’s EFC from the maximum Pell Grant award ($4,310 in academic year 2007–2008) or the student’s cost of attendance, whichever is less. Because the EFC is closely linked to family income and circumstances (such as the size of the family and the number of dependents in school), and modest EFCs are required for Pell Grant eligibility, Pell awards are made primarily to families with modest incomes. 
In contrast, the maximum unsubsidized Stafford loan amount is calculated without direct consideration of financial need: students may borrow up to their cost of attendance, minus the estimated financial assistance they will receive. As table 2 shows, 92 percent of Pell financial support in 2003–2004 was provided to dependent students whose family incomes were $40,000 or below, and the 38 percent of Pell recipients in the lowest income category ($20,000 or below) received a higher share (48 percent) of Pell financial support. Because independent students generally have lower incomes and accumulated savings than dependent students and their families, patterns of program participation and dollar distribution differ. Participation of independent students in Pell, subsidized Stafford, and unsubsidized Stafford loan programs is heavily concentrated among those with incomes of $40,000 or less: from 74 percent (unsubsidized Stafford) to 95 percent (Pell) of program participants have incomes below this level. As shown in table 3, the distribution of award dollars follows a nearly identical pattern. Many education-related tax preferences have both de facto lower limits created by the need to have a positive tax liability to obtain their benefit and income ceilings on who may use them. For example, the Hope and Lifetime Learning tax credits require that tax filers have a positive tax liability to use them, and income-related phase-out provisions in 2007 began at $47,000 and $94,000 for single and joint filers, respectively. Furthermore, tax-exempt savings are more advantageous to families with higher incomes and tax liabilities because, among other reasons, these families hold greater assets to invest in these tax preferences and have a higher marginal tax rate, and thus benefit the most from the use of these tax preferences. 
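The award rules described above can be sketched in a few lines of Python. This is a simplified illustration of the formulas as stated in this section, using the 2007–2008 maximum Pell award of $4,310; the function names and example dollar figures are ours, and actual awards involve additional statutory limits (such as annual Stafford loan caps and minimum-award rules) not shown here.

```python
def pell_award(efc, cost_of_attendance, max_award=4310):
    """Simplified Pell Grant calculation: the student's EFC is subtracted
    from the lesser of the maximum award or the cost of attendance."""
    return max(min(max_award, cost_of_attendance) - efc, 0)

def unsubsidized_stafford_limit(cost_of_attendance, estimated_other_aid):
    """Maximum unsubsidized Stafford borrowing: cost of attendance minus
    estimated other financial assistance, without regard to financial need."""
    return max(cost_of_attendance - estimated_other_aid, 0)

# Hypothetical student: modest EFC at a $20,000-per-year institution
print(pell_award(efc=1000, cost_of_attendance=20000))                  # 3310
print(unsubsidized_stafford_limit(20000, estimated_other_aid=12000))   # 8000
```

As the sketch shows, the Pell formula phases out as the EFC rises (an EFC above $4,310 yields no award), while the Stafford limit depends only on cost and other aid received, which is why Pell dollars concentrate among low-income families while unsubsidized Stafford participation reaches further up the income distribution.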
Table 4 shows the income categories of tax filers claiming the three tax preferences available to current students or their families, along with the reduced tax liabilities from those preferences in 2005. The federal government and postsecondary institutions have significant responsibilities in assisting students and families in obtaining assistance provided under Title IV programs but only minor roles with respect to tax filers’ use of education-related tax preferences. To obtain federal student aid, applicants must first complete the FAFSA, a form that requires students to complete up to 99 fields for the 2007–2008 academic year. Submitting a completed FAFSA to the Department of Education largely concludes students’ and families’ responsibility in obtaining aid. The Department of Education is responsible for calculating students’ and families’ EFC on the basis of the FAFSA, and students’ educational institutions are responsible for determining aid eligibility and the amounts and packaging of awards. In contrast, higher education tax preferences require students and families to take more responsibility. Although postsecondary institutions provide students and the Internal Revenue Service (IRS) with information about higher education attendance, they have no other responsibilities for higher education tax credits, deductions, or tax-preferred savings. The federal government’s primary role with respect to higher education tax preferences is the promulgation of rules; the provision of guidance to tax filers; and the processing of tax returns, including some checks on the accuracy of items reported on those tax returns. The responsibility for selecting among and properly using tax preferences rests with tax filers. 
Unlike with Title IV programs, users of tax preferences must understand the rules, identify applicable tax preferences, understand how these tax preferences interact with one another and with federal student aid, keep records sufficient to support their tax filing, and correctly claim the credit or deduction on their return. According to our analysis of 2005 IRS data on the use of the Hope and Lifetime Learning Credits and the tuition deduction, some tax filers appear to make less-than-optimal choices among them. The apparent suboptimal use of postsecondary tax preferences may arise, in part, from the complexity of these provisions. Making poor choices among tax preferences for postsecondary education may be costly to tax filers. For example, families may strand assets in a tax-exempt savings vehicle and incur tax penalties on their distribution if their child chooses not to go to college. They may also fail to minimize their federal income tax liability by claiming a tax credit or deduction that yields less of a reduction in taxes than a different tax preference or by failing to claim any of their available tax preferences. For example, if a married couple filing jointly with one dependent in his or her first 2 years of college had an adjusted gross income of $50,000, qualified expenses of $10,000 in 2007, and tax liability greater than $2,000, their tax liability would be reduced by $2,000 if they claimed the Lifetime Learning Credit but only $1,650 if they claimed the Hope Credit. In our analysis of 2005 IRS data for returns with information on education expenses incurred, we found that some people who appear to be eligible for tax credits or the tuition deduction did not claim them. We estimate that 2.1 million filers could have claimed a tax credit or the tuition deduction and thereby reduced their taxes. However, about 19 percent of those filers, representing about 412,000 returns, failed to claim any of them. 
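The arithmetic behind the married couple example above can be sketched as follows (a minimal illustration; the function names are ours, and the 2007 parameters shown, a Hope Credit of 100 percent of the first $1,100 of qualified expenses plus 50 percent of the next $1,100 and a Lifetime Learning Credit of 20 percent of up to $10,000 of expenses, are assumptions consistent with the $1,650 and $2,000 figures in the example):

```python
def hope_credit(expenses):
    # Assumed 2007 Hope Credit formula: 100% of the first $1,100 of
    # qualified expenses plus 50% of the next $1,100 (maximum $1,650).
    return min(expenses, 1100) + 0.5 * min(max(expenses - 1100, 0), 1100)

def lifetime_learning_credit(expenses):
    # Assumed 2007 Lifetime Learning Credit formula: 20% of up to
    # $10,000 of qualified expenses (maximum $2,000).
    return 0.2 * min(expenses, 10000)

# The couple in the example: $10,000 of qualified expenses in 2007.
print(hope_credit(10000))              # 1650.0
print(lifetime_learning_credit(10000)) # 2000.0
```

With $10,000 of expenses, the Lifetime Learning Credit yields the larger reduction; with, say, $2,200 of expenses, the Hope Credit ($1,650) would exceed the Lifetime Learning Credit ($440), which is why filers must compare the provisions rather than default to one.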
The amount by which these tax filers failed to reduce their tax averaged $219; 10 percent of this group could have reduced their tax liability by over $500. In total, including both those who failed to claim a tax credit or tuition deduction and those who chose a credit or a deduction that did not maximize their benefit, we found that in 2005, 28 percent, or nearly 601,000, of tax filers did not maximize their potential tax benefit. Among those making a poor choice among the provisions, for example, 27 percent of tax filers that claimed the tuition deduction could have further reduced their tax liability by an average of $220 by instead claiming the Lifetime Learning Credit; 10 percent of this group could have reduced their tax liabilities by over $630. Tax filers that claimed the Hope Credit when the Lifetime Learning Credit was the better choice failed to reduce their tax liabilities by an average of $356. Suboptimal choices were not limited to tax filers who prepared their own tax returns. A possible indicator of the difficulty people face in understanding education-related tax preferences is how often the suboptimal choices we identified were found on tax returns prepared by paid tax preparers. We estimate that 50 percent of the returns that appear to have failed to optimally reduce the tax filer's liability were prepared by paid tax preparers. Generalized to the population of tax returns we were able to review, returns prepared by paid tax preparers represent about 301,000 of the approximately 601,000 suboptimal choices we found. Our April 2006 study of paid tax preparers corroborates the problem of confusion over which of the tax preferences to claim. Of the nine undercover investigation visits we made to paid preparers with a taxpayer with a dependent college student, three preparers did not claim the credit most advantageous to the taxpayer and thereby cost these taxpayers hundreds of dollars in refunds. 
In our investigative scenario, the expenses and the year in school made the Hope education credit far more advantageous to the taxpayer than either the tuition and fees deduction or the Lifetime Learning credit. The apparently suboptimal use of postsecondary tax preferences may arise, in part, because of the complexity of using these provisions. Tax policy analysts have frequently identified postsecondary tax preferences as a set of tax provisions that demand a particularly large investment of knowledge and skill on the part of students and families or expert assistance purchased by those with the means to do so. They suggest that this complexity arises from multiple postsecondary tax preferences with similar purposes, from key definitions that vary across these provisions, and from rules that coordinate the use of multiple tax provisions. Twelve tax preferences are outlined in IRS Publication 970, Tax Benefits for Education: For Use in Preparing 2007 Returns. The publication includes four different tax preferences for educational saving. Three of these preferences—Coverdell Education Savings Accounts, Qualified Tuition Programs, and U.S. education savings bonds—differ across more than a dozen dimensions, including the tax penalty that occurs when account balances are not used for qualified higher education expenses, who may be an eligible beneficiary, annual contribution limits, and other features. In addition to learning about, comparing, and selecting tax preferences, filers who wish to make optimal use of multiple tax preferences must understand how the use of one tax preference affects the use of others. 
The use of multiple education-related tax preferences is coordinated through rules that prohibit the application of the same qualified higher education expenses for the same student to more than one education-related tax preference, sometimes referred to as "anti-double-dipping rules." These rules are important because they prevent tax filers from underreporting their tax liability. Nonetheless, anti-double-dipping rules are potentially difficult for tax filers to understand and apply, and misunderstanding them may have consequences for a filer's tax liability. Many researchers and policy analysts support simplifying the existing federal grants, loans, and tax preferences in the belief that doing so would have a net benefit in encouraging access. Indeed, suggestions put forth in recent years to combine the federal grants and tax credits, for example, may help address some of the challenges we have identified regarding tax filers' suboptimal use of postsecondary tax preferences or the confusion created by the interactions between direct student aid programs, such as the Pell Grant, and existing tax preferences. In this case, reducing the number of choices students and their families have to make would likely reduce tax filers' confusion and mistakes. To date, we have not undertaken any studies of how current Title IV student aid programs or tax preferences could be simplified and, as a result, have not developed any such models or proposals. However, while different aspects of simplification may provide students and their families with various benefits, Congress would likely want to weigh those benefits against a number of potentially related costs. 
Simplifying the federal application for student aid—A better understanding is needed about whether or to what extent simplifying the application for federal aid would: (1) alter the administration of other federal, state and institutional student aid programs, (2) be capable of accommodating future federal policies designed to target aid, and (3) affect current programs that are specifically tied to Pell Grant eligibility. The current FAFSA is used to determine students’ eligibility for various federal aid programs, including Pell Grants, Academic Competitiveness Grants, SMART Grants, Stafford and PLUS loans, Supplemental Educational Opportunity Grants (SEOG), Perkins Loans, and Federal Work-Study. In addition, many states and schools rely on the FAFSA when awarding state and institutional student aid. To the extent that other programs require FAFSA-like information from applicants to award financial aid, additional research is needed to determine whether simplifying the FAFSA may actually increase the number of applications students and families would be required to submit. Simplifying eligibility verification requirements—Both grants and tax credits are awarded based, in part, on students’ and their families’ incomes, which means students and families are required to document their income to receive the benefit. Under the current system, some students and families are eligible to apply for Title IV student aid even though they are not required to file a tax return; in such cases, eligibility is computed based upon information reported on the FAFSA. Any plan to consolidate some or all of the current federal grants and tax preferences would need to consider how to minimize burden on students and families while also controlling federal administrative costs, for example, by minimizing the use of multiple verification procedures that use multiple forms of documentation and that are administered by multiple agencies. 
Simplifying program administration while maintaining federal cost controls—Federal grant and loan programs are administered by the Department of Education, while federal tax preferences are administered by IRS. Under a system in which existing grant aid and tax credits are consolidated, it is unclear, without additional research, whether cost efficiency is better achieved through having the Department of Education or IRS assume federal budgeting and accounting responsibilities. In addition, the grant programs generally are subject to an annual appropriation, which enables Congress to control overall federal expenditures by taking into account other federal priorities. In contrast, most tax preferences are like entitlement programs, and their revenue losses can only be controlled by changing the statutory qualifications for the tax preference. Simplifying aid distribution—Policymakers will need to consider costs associated with the federal government recovering funds if students fail to maintain eligibility requirements over the course of an academic year. Families currently claim tax preferences after qualifying higher education expenses have been incurred but receive federal grant benefits to pay current expenses. Program simplifications that consolidate grants and tax preferences into a benefit paid before expenses are incurred will likely require the implementation of new cost recovery mechanisms or other means to allocate payments based on costs actually incurred. Simplifying eligible expenses—Room and board expenses are considered in the administration of the federal student aid programs authorized under Title IV of the Higher Education Act but not in all tax preferences, particularly the Hope and Lifetime Learning Credits. Careful analysis will be needed of how such expenses could be accounted for in a simplified scheme if assistance is structured as a tax preference rather than a grant. 
Room and board expenses vary based on where a school is located and whether a student lives on or off campus, and they can be a significant component of a student's cost of attendance, particularly at community colleges. While certain strategies, such as a standard room and board allowance, might be employed to lessen tax filers' recordkeeping requirements and result in fewer tax filer compliance issues, further research is needed on how such an allowance would be optimally set. Establishing too high an allowance, for example, could result in some students receiving a benefit in excess of the costs they incur for room and board, especially for those students who choose to live with their parents. Alternatively, if tax assistance is provided in advance of incurring costs, but the assistance is to be limited to costs actually incurred, a cost recovery or other administrative mechanism would be needed, as discussed above. Little is known about the effectiveness of federal grant and loan programs and education-related tax preferences in promoting attendance, choice, and the likelihood that students either earn a degree or continue their education (referred to as persistence). Many federal aid programs and tax preferences have not been studied, and for those that have been studied, important aspects of their effectiveness remain unexamined. In our 2005 report, we found no research on any aspect of effectiveness for several major Title IV federal postsecondary programs and tax preferences. For example, no research had examined the effects of federal postsecondary education tax credits on students' persistence in their studies or on the type of postsecondary institution they choose to attend, and there is limited research on the effectiveness of the Pell Grant program on students' persistence. One recently published study suggests that complexity in the federal grant and loan application processes may undermine their efficacy in promoting postsecondary attendance. 
The relative newness of most of the tax preferences also presents challenges because relevant data are just now becoming available. These factors may contribute to a lack of information concerning the effectiveness of the aid programs and tax preferences. A federal research grant subtopic addresses improving access to, persistence in, or completion of postsecondary education; multiyear projects funded under this subtopic began in July 2007. However, none of the grants awarded to date appear to directly evaluate the role and effectiveness of Title IV programs and tax preferences in improving access to, persistence in, or completion of postsecondary education. As we noted in our 2002 report, more research into the effectiveness of different forms of postsecondary education assistance is important (GAO, Student Aid and Tax Benefits: Better Research and Guidance Will Facilitate Comparison of Effectiveness and Student Use, GAO-02-751 (Washington, D.C.: Sept. 13, 2002)). Without such information, federal policymakers cannot make fact-based decisions about how to build on successful programs and make necessary changes to improve less-effective programs. The budget deficit and other major fiscal challenges facing the nation necessitate rethinking the base of existing federal spending and tax programs, policies, and activities by reviewing their results and testing their continued relevance and relative priority for a changing society. In light of the long-term fiscal challenge this nation faces and the need to make hard decisions about how the federal government allocates resources, this hearing provides an opportunity to continue a discussion about how the federal government can best help students and their families pay for postsecondary education. Some questions that Congress should consider during this dialogue include the following: Should the federal government consolidate postsecondary education tax provisions to make them easier for the public to use and understand? 
Given its limited resources, should the government further target Title IV programs and tax provisions based on need or other factors? How can Congress best evaluate the effectiveness and efficiency of postsecondary education aid provided through the tax code? Can tax preferences and Title IV programs be better coordinated to maximize their effectiveness? Mr. Chairman and Members of the Subcommittee, this concludes our statement. We welcome any questions you have at this time. For further information regarding this testimony, please contact Michael Brostek at (202) 512-9110 or brostekm@gao.gov or George Scott at (202) 512-7215 or scottg@gao.gov. Individuals making contributions to this testimony include David Lewis, Assistant Director; Sarah Farkas, Sheila R. McCoy, John Mingus, Danielle Novak, Daniel Novillo, Carlo Salerno, Andrew J. Stephens, and Jessica Thomsen. The federal government helps students and families save, pay for, and repay the costs of postsecondary education through grant and loan programs authorized under Title IV of the Higher Education Act of 1965, as amended, and through tax preferences—reductions in federal tax liabilities that result from preferential provisions in the tax code, such as exemptions and exclusions from taxation, deductions, credits, deferrals, and preferential tax rates. Assistance provided under Title IV programs includes Pell Grants for low-income students; the Academic Competitiveness and National Science and Mathematics Access to Retain Talent Grants; PLUS loans, which parents as well as graduate and professional students may apply for; and Stafford loans. While each of the three grants reduces the price paid by the student, student loans help to finance the remaining costs and are to be repaid according to varying terms. Stafford loans may be either subsidized or unsubsidized. 
The federal government pays the interest cost on subsidized loans while the student is in school and during a 6-month period after the student leaves school, known as the grace period. For unsubsidized loans, students are responsible for all interest costs. Stafford and PLUS loans are provided to students through both the Federal Family Education Loan (FFEL) program and the William D. Ford Direct Loan Program (FDLP). The federal government's role in financing and administering these two loan programs differs significantly. Under the FFEL program, private lenders, such as banks, provide loan capital and make loans, and the federal government guarantees FFEL lenders a minimum yield on the loans they make and repayment if borrowers default. Under FDLP, the federal government makes loans to students using federal funds. The Department of Education and its private-sector contractors jointly administer the program. Title IV also authorizes programs funded by the federal government and administered by participating higher education institutions, including the Supplemental Educational Opportunity Grant (SEOG), Perkins loans, and federal work-study aid, collectively known as campus-based aid. To receive Title IV aid, students (along with parents, in the case of dependent students) must complete a Free Application for Federal Student Aid (FAFSA) form. Information from the FAFSA, particularly income and asset information, is used to determine the amount of money—called the expected family contribution (EFC)—that the student and/or family is expected to contribute to the student's education. Federal law establishes the criteria that students must meet to be considered independent of their parents for the purpose of financial aid and the share of family and student income and assets that are expected to be available for the student's education. Once the EFC is established, it is compared with the cost of attendance at the institution chosen by the student. 
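The comparison just described can be sketched as simple arithmetic (a minimal illustration; the function name and dollar figures are hypothetical):

```python
def financial_need(cost_of_attendance, efc):
    # Under the federal aid methodology, financial need is the cost of
    # attendance minus the expected family contribution (EFC); if the
    # EFC meets or exceeds the cost of attendance, there is no need.
    return max(cost_of_attendance - efc, 0)

# Hypothetical figures: an $18,000 cost of attendance and a $5,000 EFC
# leave $13,000 of financial need, while an EFC of $20,000 leaves none.
print(financial_need(18000, 5000))   # 13000
print(financial_need(18000, 20000))  # 0
```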
The cost of attendance comprises tuition and fees; room and board; books and supplies; transportation; certain miscellaneous personal expenses; and, for some students, additional expenses. If the EFC is greater than the cost of attendance, the student is not considered to have financial need, according to the federal aid methodology. If the cost of attendance is greater than the EFC, then the student is considered to have financial need. Title IV assistance that is made on the basis of the calculated need of aid applicants is called need-based aid. Key characteristics of Title IV programs are summarized in table 5 below. Prior to the 1990s, virtually all major federal initiatives to assist students with the costs of postsecondary education were provided through grant and loan programs authorized under Title IV of the Higher Education Act. Since the 1990s, however, new federal initiatives to assist families and students in paying for postsecondary education have largely been implemented through the federal tax code. The federal tax code now contains a range of tax preferences that may be used to assist students and families in saving for, paying, or repaying the costs of postsecondary education. These tax preferences include credits and deductions, both of which allow tax filers to use qualified higher education expenses to reduce their federal income tax liability. The tax credits reduce tax filers' income tax liability on a dollar-for-dollar basis but are not refundable. Tax deductions permit qualified higher education expenses to be subtracted from income that would otherwise be taxable. To benefit from a higher education tax credit or the tuition deduction, a tax filer must use tax form 1040 or 1040A, have an adjusted gross income below the provision's statutorily specified income limits, and have a positive tax liability after other deductions and credits are calculated, among other requirements. Tax preferences also include tax-exempt savings vehicles. 
Section 529 of the tax code makes tax free the investment income from qualified tuition programs. There are two types of qualified tuition programs: savings programs established by states and prepaid tuition programs established either by states or by one or more eligible educational institutions. Another tax-exempt savings vehicle is the Coverdell Education Savings Account. Tax penalties apply to both 529 programs and Coverdell savings accounts if the funds are not used for allowable education expenses. Key features of these and other education-related tax preferences are described below, in table 6. Our review of tax preferences did not include exclusions from income, which permit certain types of education-related income to be excluded from the calculation of adjusted gross income on which taxes are based. For example, qualified scholarships covering tuition and fees and qualified tuition reductions from eligible educational institutions are not included in gross income for income tax purposes. Similarly, student loans forgiven when a graduate goes into certain professions for a certain period of time are also not subject to federal income taxes. We did not include special provisions in the tax code that also extend existing tax preferences when tax filers support a postsecondary education student. For example, tax filers may claim postsecondary education students as dependents after age 18, even if the student has his or her own income over the limit that would otherwise apply. Also, gift taxes do not apply to funds used for certain postsecondary educational expenses, even for amounts in excess of the usual $12,000 limit on non-taxable gifts. In addition, funds withdrawn early from an Individual Retirement Account are not subject to the usual 10 percent penalty when used for either a tax filer’s or his or her dependent’s postsecondary educational expenses. 
For an example of how the use of college savings programs and the tuition deduction is affected by "anti-double-dipping" rules, consider the following: To calculate whether a distribution from a college savings program is taxable, tax filers must determine whether the total distributions for the tax year are more or less than the total qualified educational expenses reduced by any tax-free educational assistance, i.e., their adjusted qualified education expenses (AQEE). After subtracting tax-free assistance from qualified educational expenses to arrive at the AQEE, tax filers multiply total distributed earnings by the fraction (AQEE / total amount distributed during the year). If parents of a dependent student paid $6,500 in qualified education expenses using a $3,000 tax-free scholarship and a $3,600 distribution from a tuition savings program, they would have $3,500 in AQEE. If $1,200 of the distribution consisted of earnings, then $1,200 x ($3,500 AQEE / $3,600 distribution) would result in $1,167 of the earnings being tax free, while $33 would be taxable. However, if the same tax filer had also claimed a tuition deduction, anti-double-dipping rules would require the tax filer to subtract the expenses taken into account in figuring the tuition deduction from AQEE. If $2,000 in expenses had been used toward the tuition deduction, then the taxable distribution from the section 529 savings program would rise to $700. For families such as these, anti-double-dipping rules increase the computational complexity they face and may result in unanticipated tax liabilities associated with the use of section 529 savings programs. We used two data sets for this testimony: Education's 2003-2004 National Postsecondary Student Aid Study and the Internal Revenue Service's 2005 Statistics of Income. Estimates from both data sets are subject to sampling errors, and the estimates we report are surrounded by a 95 percent confidence interval. 
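Returning to the anti-double-dipping example above, the computation can be sketched as follows (a minimal illustration of the arithmetic described in the example; `taxable_distribution` is our own hypothetical helper, not an IRS worksheet):

```python
def taxable_distribution(qualified_expenses, tax_free_assistance,
                         distribution, earnings,
                         tuition_deduction_expenses=0):
    # Adjusted qualified education expenses (AQEE): qualified expenses
    # minus tax-free assistance, minus any expenses already used for the
    # tuition deduction (the anti-double-dipping adjustment).
    aqee = qualified_expenses - tax_free_assistance - tuition_deduction_expenses
    # The tax-free share of earnings is earnings x (AQEE / distribution);
    # the remainder of the earnings is taxable.
    return earnings - earnings * aqee / distribution

# Without a tuition deduction: about $33 of the $1,200 in earnings is taxable.
print(round(taxable_distribution(6500, 3000, 3600, 1200)))        # 33
# With $2,000 of expenses used for the tuition deduction: $700 is taxable.
print(round(taxable_distribution(6500, 3000, 3600, 1200, 2000)))  # 700
```

The second call shows how claiming the tuition deduction shrinks the AQEE and therefore raises the taxable portion of the same distribution.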
The following tables provide the lower and upper bounds of the 95 percent confidence interval for all estimate figures in the tables in this testimony. For figures and text drawn from these data, we provide both point estimates and confidence intervals.
Federal assistance helps students and families pay for postsecondary education through several policy tools--grant and loan programs authorized by Title IV of the Higher Education Act of 1965 and more recently enacted tax preferences. This testimony summarizes our 2005 report and provides updates on (1) how Title IV assistance compares to that provided through the tax code, (2) the extent to which tax filers effectively use education tax preferences, (3) potential benefits and costs of simplifying federal student aid, and (4) what is known about the effectiveness of federal assistance. This hearing is an opportunity to consider whether changes should be made in the government's overall strategy for providing such assistance or to the individual programs and tax provisions that provide the assistance. This statement is based on updates to previously published GAO work and reviews of relevant literature. Title IV student aid and tax preferences provide assistance to a wide range of students and families in different ways. While both help students meet current expenses, tax preferences also assist students and families with saving for and repaying postsecondary costs. Both serve students and families with a range of incomes, but some forms of Title IV aid--grant aid, in particular--provide assistance to those whose incomes are lower, on average, than is the case with tax preferences. Tax preferences require more responsibility on the part of students and families than Title IV aid because taxpayers must identify applicable tax preferences, understand complex rules concerning their use, and correctly calculate and claim credits or deductions. While the tax preferences are a newer policy tool, the number of tax filers using them has grown quickly, surpassing the number of students aided under Title IV in 2002. Some tax filers do not appear to make optimal education-related tax decisions. 
For example, our analysis of a limited number of 2005 tax returns indicated that 19 percent of eligible tax filers did not claim either the tuition deduction or a tax credit. In so doing, these tax filers failed to reduce their tax liability by $219, on average, and 10 percent of these filers could have reduced their tax liability by over $500. One explanation for these taxpayers' choices may be the complexity of postsecondary tax provisions, which experts have commonly identified as difficult for tax filers to use. Simplifying the grants, loans, and tax preferences may reduce complexities in higher education financing, including reducing the number of eligible tax filers that do not claim tax preferences, but more research would be necessary to understand the full benefits and costs of any such changes. Little is known about the effectiveness of Title IV aid or tax preferences in promoting, for example, postsecondary attendance or school choice, in part because of research data and methodological challenges. As a result, policymakers do not have information that would allow them to make the most efficient use of limited federal resources to help students and families.
To accomplish its mission, HUD administers community and housing programs that benefit millions of households each year. Among other things, the department provides affordable rental housing opportunities and helps homeless families and chronically homeless individuals and veterans. The department also administers mortgage insurance programs for single-family housing, multifamily housing, and health care. HUD relies on five main organizational components to carry out its mission. Of these, two components have lead responsibility for improving access to housing and are the business owners for related IT modernization efforts:

Housing/Federal Housing Administration (FHA): Programs within this office are responsible for contributing to building healthy communities, maintaining and expanding housing opportunities, and stabilizing credit markets in times of economic disruption. This office also regulates certain aspects of the housing industry. For example, the department currently reports that it provides insurance on loans made by its approved lenders for 4.8 million single-family mortgages and 13,000 multifamily projects, including manufactured homes and hospitals. The FHA Transformation modernization effort is managed within this office.

Public and Indian Housing: Programs within this office are responsible for creating opportunities for residents' self-sufficiency and economic independence. Toward this end, this office currently oversees a housing choice voucher program to subsidize housing for approximately 2.1 million low-income, elderly, and disabled families; a public housing program that subsidizes about 1.3 million housing units for vulnerable low-income families; and block grants and guarantee programs for Native American groups. The NGMS modernization effort is managed within this office.

In addition, to support these organizational components, the department relies on various administrative offices to provide guidance and tools. 
These include the department’s Office of the Chief Information Officer (OCIO) and the Office of the Chief Procurement Officer (OCPO). Through coordination with the organizational components, OCIO manages IT resources and provides support for the department’s infrastructure, security, and ongoing projects. This office also provides project management guidance and technical expertise to modernization efforts. For its part, OCPO is responsible for obtaining contracted goods and services required by the department to meet its strategic objectives. This office is involved with initiating acquisition actions upon request by the organizational components. Further, HUD’s Deputy Secretary is responsible for managing the department’s daily operations, annual operating budget, and approximately 8,900 employees. As part of this role, the Deputy Secretary conducts biweekly meetings with stakeholders to discuss the Secretary’s priorities. During these meetings, the scope, milestones, risks, and status of action items related to priority issues are discussed. The FHA Transformation and NGMS modernization efforts are designated as priority and each has its own biweekly meeting. A simplified view of the department’s housing organization structure and the offices responsible for FHA Transformation and NGMS is provided in figure 1. According to the fiscal year 2014 President’s Budget request for HUD, $285.1 million is expected to be spent on IT investments. HUD’s IT environment consists of multiple systems that, among other things, are intended to help the department coordinate with lending institutions to insure mortgages, collect and manage state and local housing data, process applications for community development, and issue vouchers that provide access to subsidized housing. In particular, the department’s housing programs rely on systems for processing and managing these business operations. 
For example, systems within the Office of Housing are expected to process mortgage insurance applications, bill and collect premiums, pay claims, manage receivables and other assets, track delinquencies and defaults, and support staff in providing counseling to first-time home buyers and existing homeowners. Additionally, systems supporting Public and Indian Housing programs are intended to process vouchers for different rental assistance programs, as well as to support the processing of applications for, and the management of, more than 50 grant programs administered by the department. However, HUD's current IT environment has not effectively supported its business operations because its systems are overlapping and duplicative, are not integrated, necessitate manual workloads, and employ antiquated technologies that are costly to maintain. For example, the department reported from 2008 to 2012 that its IT environment consisted of the following:

Over 200 information systems, many of which perform the same function and, thus, are overlapping and duplicative. Specifically, different systems perform the same task to separately support grants management, loan processing, and subsidies management.

Stovepiped, nonintegrated systems that result in identical data existing in multiple systems. For example, two organizational components store about 80 percent of similar data in separate databases that provide information on rental assistance participants.

Manual processing for business functions due to a lack of systems to support these processes. For example, specific NGMS projects are intended to replace existing ad hoc analyses performed in spreadsheets and databases with systems that automate and standardize those functions.

Antiquated technology (15 to 30 years old) and complex systems that are costly to maintain. For example, the department relies on different programming languages and operating systems, which requires specialized skills to operate and maintain. 
Additionally, HUD engaged contractors to conduct an assessment of the department’s environment. This assessment, issued in January 2011, concluded that unclear reporting relationships hindered the enforcement of IT policies; contractor performance information was not used to inform management decisions; technical standards were lacking or not enforced; and data management practices did not support business needs. Through the Transformation Initiative’s IT component, HUD has begun addressing challenges to its environment and modernizing its systems. In this regard, the department initiated seven IT modernization efforts, of which FHA Transformation and NGMS are the two largest. For fiscal years 2010 and 2011, the department reported that the Transformation Initiative funding made available for FHA Transformation and NGMS was $58.5 and $41.1 million, respectively. (See later discussion in this report regarding costs associated with the 14 projects in our study.) FHA Transformation was initiated to improve the department’s management of insurance programs through the development and implementation of a modern financial services IT environment that is expected to improve loan endorsement processes, collateral risk capabilities, and fraud prevention. In August 2009, HUD published the FHA Office of Housing Information Technology Strategy and Improvement Plan, which identified and prioritized 25 IT areas with performance gaps for its single-family housing, multifamily housing development and rental assistance, health care facilities programs, and enterprise applications. In May 2010, FHA Transformation began planning and executing modernization efforts aimed at addressing the gaps identified in the plan. 
Specifically, the modernization initiative is intended to implement technology within the following four functional areas aimed at addressing changes in FHA’s business model, operating environment, and components of the loan life cycle:

Infrastructure and legacy migration: Provide a scalable infrastructure to support rules engines, analytics, and reporting systems, as well as a mechanism for transferring legacy applications to the new platform. Specifically, the Federal Financial Services Platform project is intended to provide hardware and standard software to support case management and migration of legacy applications (e.g., the Computerized Homes Underwriting Reporting System) for all lines of business.

Borrower/collateral risk management and fraud monitoring: Provide tools for analyzing, monitoring, and managing emerging issues and trends in the housing market, including borrower and collateral risk, appraisals, and fraud, as it relates to the FHA portfolio. For example, the Legacy Application Transformation project is expected to implement a software service tool that aggregates data to identify emerging issues and trends in borrower risk and fraud by analyzing the accuracy and validity of verified assets, income, and employment on individual loans. Other projects within this functional area include business process reengineering and a pilot designed to automate and streamline the multifamily housing underwriting process. Using the new infrastructure, an automated underwriting tool is expected to be deployed to expand the capabilities for processing loan applications for insurance programs and replace current systems (e.g., the Development Application Processing System).

Counterparty management: Provide applications for improved performance and compliance of lenders and appraisers through more proactively identifying risk trends and improving loan file review techniques.
Specific projects include the Lender Electronic Assessment Portal, a web-based automated delivery of electronic applications and storage of lender application data that assists with reviewing new lender applications and requests for annual recertification to participate in FHA programs. Future plans call for replacing seven legacy applications.

Portfolio analysis: Provide tools intended to augment risk monitoring and management; enhance predictive analytics; provide timely and flexible reporting; and deliver more accurate, detailed information to decision makers. For example, the Portfolio Risk Reporting & Analytics project is intended to provide a web-based software service tool for modeling FHA program risks. While initial use of the software is to include receiving hard-copy reports from the third-party vendor, HUD also expects to deploy the tool within the department’s infrastructure in order for FHA employees to have access to reports and the analytics dashboard data electronically.

Overall leadership for FHA is provided by the Assistant Secretary for Housing/Federal Housing Commissioner, who chairs the modernization effort steering committee; the General Deputy Assistant Secretary for Housing; and the Director for the Office of Program Systems Management, who is the executive sponsor. The modernization effort also has a project management office that is responsible for executing and managing the associated projects. As of April 2013, FHA Transformation consisted of 10 projects, 9 of which were included in our study. Table 1 summarizes the 9 FHA Transformation projects that we assessed as part of our study.

The NGMS modernization effort is intended to provide an integrated system with a seamless view of financial and program data currently warehoused in disparate data sources and a new set of monitoring, oversight, and software tools directed at ensuring that funds are used to assist affordable housing participants and reduce improper payment errors.
In November 2011, the department used contractors to develop four prototype software tools aimed at demonstrating anticipated NGMS functionality for voucher programs. However, in July 2012, the department determined that the prototypes that had been developed would not address its business needs. As a result, the department initiated planning efforts to restructure the modernization effort and expand the scope to include all Public and Indian Housing lines of business. HUD has reported that the aim of the restructured effort is to enhance the department’s affordable housing program, improve end-user satisfaction, streamline complex business processes, and integrate disparate IT systems into a common, modernized platform.

The department intends for NGMS to support efforts to improve HUD’s financial accountability by more accurately quantifying budgetary data resources, measuring program effectiveness, and justifying the agency’s budget formulations and requests. NGMS is expected to help department personnel reduce improper payments by identifying anomalies in operating costs, reserves, and subsidy payments. Once implemented, NGMS is intended to provide staff with a new set of monitoring, oversight, and analysis tools to ensure that allocated federal funds are used efficiently to assist affordable housing participants. The department is taking an incremental approach to developing NGMS and expects to deliver initial functionality by August 2013. NGMS system and software development projects are designed to support four functional areas:

Financial management: Provide automated processes for budget forecasting and formulation and cash management based on real-time data that are expected to allow the department to anticipate cash flow needs through precise scenarios and disburse funds on the basis of project and tenant records, eliminating reconciliations.
For example, the Budget Forecasting and Formulation project is intended to develop a solution that will include forecasting functionality, data aggregation, and analytics to support the budget development process for Public and Indian Housing programs such as vouchers, administrative fees, family self-sufficiency, mainstream vouchers, and housing assistance programs. In addition, this functional area is expected to migrate data from HUD’s Central Accounting and Program system and utilize information gathered from Public Housing Authorities regarding subsidized housing programs through an interface with the department’s New Core system.

HUD operations: Provide a single point of access to data and information to improve efficiency and reduce administrative burden through a New Data Collection system that is expected to replace legacy systems (e.g., Public and Indian Housing Information Center) and provide new functionality for subsidized housing programs, geospatial data on physical housing, real-time occupancy information, and energy conservation measures for properties. In the interim, the Portfolio and Risk Management Tool project is expected to provide aggregated data about Public Housing Authorities through a standard business intelligence solution and is expected to expand its use to partner operations in the future.

Partner operations: Expand the department’s operations system to provide a web-based single point of access for gathering consistent and accurate information from families and landlords to be used in the operation of public housing and voucher programs administered by Public Housing Authorities.

Business support: Provide expanded access and use of NGMS IT solutions; grant HUD and program participants better access to information and technical assistance through a central point of access with live help and self-paced guides; and develop the necessary infrastructure and processes to enable timely and accurate answers to end users’ inquiries.
Overall leadership for NGMS is provided by the General Deputy Assistant Secretary for Public and Indian Housing, who is the executive sponsor and chair of the modernization effort steering committee. The modernization effort also has a project management office that is responsible for executing and managing the associated projects. As of April 2013, NGMS consisted of six projects, of which five were included in our study. Table 2 summarizes the NGMS projects that we assessed.

Effective use of project planning and management practices is essential for the success of modernization efforts such as those being undertaken by HUD. Our prior reviews of federal agencies have shown that, when effectively implemented, these practices can significantly increase the likelihood of delivering promised capabilities on time and within budget. Moreover, project management maturity is dependent on an agency’s standardization and institutionalization of such practices. PMI reported in its March 2013 annual survey of project management professionals that high-performing organizations are almost three times more likely than low-performing organizations to use standardized practices throughout the organization and to generate better project outcomes.

To guide the application of best practices, we and others, including PMI and SEI at Carnegie Mellon University, have issued reports and frameworks for effective project management. These reports and frameworks emphasize practices that include the development of essential documentation needed for the execution and management of projects in the areas of project planning (charters, work breakdown structures, and project management plans), requirements management (requirements management plans and traceability matrixes), and acquisition planning (acquisition strategies).

Project planning: This practice helps establish project objectives and outline the course of action required to attain those objectives.
It also provides a means to track, review, and report progress and performance of the project by defining project activities and developing cost and schedule estimates, among other things. Project planning involves, for example, creating a charter to authorize project work, developing a work breakdown structure, and establishing project management plans that provide processes for measuring progress.

Requirements management: Having a documented strategy for developing and managing requirements can help ensure that the final product will function as intended. Effective management of requirements involves assigning responsibility for them, tracking them, and controlling changes to them over the course of the project. It also ensures that each requirement traces back to the business need and forward to its design and testing. Requirements management practices call for the use of requirements management plans to provide a mechanism for documenting the process for managing requirements and associated traceability matrixes, which are intended to facilitate efforts to link requirements to identified business needs to help ensure that they will be satisfied by the end product.

Acquisition planning: Effective IT project management also involves creating strategies to serve as the road map for acquisition planning. Such road maps are used for early planning of procurements and are developed by a project manager. Among other things, acquisition strategies should address plans for how projects will manage risks, deliverables, and reporting on contractor performance.

In addition to calling for agencies to apply best practices, federal guidance, along with our framework for managing IT investments and our prior reviews of federal investments, outlines the importance of having reviews conducted by management at various points throughout a project’s life cycle.
Such reviews are critical to helping ensure that cost, schedule, and performance goals for a project are satisfied, and they can provide early detection of risks and problems that could impede progress toward those goals. Further, management reviews can help ensure that appropriate quality standards are achieved and provide input for areas that need improvement. In order to better manage its modernization efforts, during 2011 HUD established new policies and procedures for executing and governing IT investments. Specifically, in April 2011, the department developed a Project Planning and Management (PPM) framework to provide guidance for managing a project’s life cycle in accordance with best practices. Using the framework, projects—such as those related to FHA Transformation and NGMS modernization efforts—are expected to proceed through life-cycle phases that require specific documents to demonstrate project activities and outcomes. The framework provides guidance through sample templates, with associated instructions and checklists that projects can use in developing their documentation. The framework also calls for management reviews that are intended to help ensure that projects are aligned with the department’s architecture and technical standards, and that they have developed required information before committing resources to the next life-cycle phase. For example, at the initiation of a project, among other things, a charter and schedule are expected to be developed and approved by a review committee. In addition, during a project’s definition phase, critical documents such as a project management plan, a requirements management plan, a requirements traceability matrix, and an acquisition strategy are also expected to be developed and approved by a review committee.
In July 2011, HUD also established a governance policy that set forth processes, standards, roles, and responsibilities to facilitate decision making around investments, stakeholder relationships, project life-cycle management, and other important IT operational areas. In particular, the policy established an IT governance structure consisting of the Executive Investment Board, Customer Care Committee, Investment Review Subcommittee, and Technical Review Subcommittee. Figure 2 provides a simplified depiction of this governance structure. According to HUD’s Policy for Information Technology Governance handbook, these governance bodies have the following composition and responsibilities.

Executive Investment Board: Comprised of senior leaders, including the HUD Secretary, Deputy Secretary, and Chief Information Officer, with responsibilities for providing strategic direction, managing the IT investment portfolio, and overseeing and approving projects that cost more than $5 million.

Customer Care Committee: Comprised of executives, including the Chief Information Officer, the Chief Procurement Officer, and deputy assistant secretaries, who manage IT investments and perform project management oversight by reviewing and submitting recommendations to the Executive Investment Board, and coordinating with the subcommittees responsible for approving projects that cost between $500,000 and $5 million.

Investment Review Subcommittee: Responsible for, among other things, reviewing project information with respect to business cases and budget information for the Office of Management and Budget (OMB).

Technical Review Subcommittee: Comprised of personnel from within OCIO, including the Chief Technology Officer, the Chief Architect, and the Chief Information Security Officer, with a focus on ensuring that the technical architecture is aligned with the department’s strategic goals and monitoring IT projects through conducting control gate reviews that assess whether all necessary documentation has been produced.
The subcommittee is also responsible for approving projects that cost less than $500,000. For its FHA Transformation and NGMS modernization efforts, HUD has taken initial steps toward applying key project management practices in the areas of project planning, requirements management, and acquisition planning. However, the department has not yet fully implemented any of these practices in managing the 14 projects in our review. In large part, these deficiencies can be attributed to inadequate development and use of the department’s project management framework and governance structure. Without fully implementing these practices and effectively developing and using its framework and governance structure, HUD risks investing its resources on projects that may not meet critical mission needs. According to the Project Management Institute and the Software Engineering Institute, disciplined project management practices call for the development of project details such as objectives, scope of work, schedules, costs, and requirements against which projects can be managed and executed. This step can be facilitated by developing project artifacts that include, among other things, charters to authorize projects and assign responsibility for their execution, work breakdown structures to define the work that needs to be done to accomplish project objectives, project management plans to define how projects are to be executed and controlled, and requirements management plans to document the processes and methods to be used for developing and managing project requirements. Further, developing requirements traceability matrixes that provide linkages between business objectives and detailed system requirements, and establishing strategies to ensure adequate acquisition planning are practices that contribute to effective project management. 
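The cost thresholds that drive the governance structure described above reduce to a simple routing rule. The following sketch is illustrative only: the function name is invented, and the dollar boundaries are taken from the thresholds stated above (more than $5 million, between $500,000 and $5 million, and less than $500,000); exactly where a boundary value falls is an assumption.

```python
def approving_body(project_cost: float) -> str:
    """Return the governance body responsible for approving a project,
    based on its estimated cost (illustrative sketch of the thresholds
    described in HUD's governance policy)."""
    if project_cost > 5_000_000:
        return "Executive Investment Board"
    elif project_cost >= 500_000:  # boundary treatment is an assumption
        return "Customer Care Committee"
    else:
        return "Technical Review Subcommittee"

# Example: a $1.9 million project would route to the Customer Care Committee.
print(approving_body(1_900_000))  # Customer Care Committee
```

Encoding the thresholds this way makes the hand-off points between governance bodies explicit and testable, rather than leaving them implicit in policy prose.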
Our prior reviews of federal agencies have shown that applying these practices can significantly increase the likelihood of delivering promised capabilities on time and within budget. HUD had taken initial steps in applying key project management practices by developing artifacts, to varying degrees, for the 9 FHA Transformation and 5 NGMS projects in our review. Nevertheless, the department lacked information needed for managing and executing the projects because the documentation developed did not contain a number of essential details that best practices stress as being critical to effectively defining a project and measuring its success. In this regard, none of the documentation included all of the critical information that could facilitate effective project management, such as full descriptions of the work necessary to complete the projects, cost and schedule baselines, or prioritized requirements. Table 3 summarizes the key project management practices in the areas of project planning (charters, work breakdown structures, and project management plans), requirements management (requirements management plans and traceability matrixes), and acquisition planning (acquisition strategies) for the 14 projects that we assessed. In addition, appendix III provides our more detailed assessment against best practices. A project charter formally authorizes a project and identifies high-level information that constitutes and assigns responsibility for project success.
According to project management practices, to be effective, a charter should include, among other things, a project’s purpose or justification; high-level information on such factors as requirements and risks, measurable objectives, and related success criteria; a summary schedule and budget; project approval requirements (e.g., information on what factors will define project success and who will be responsible for final sign off at the completion of the project); and names, responsibilities, and authority levels of assigned leadership such as the project manager and sponsor. Of the 14 projects in our review, all 9 FHA Transformation and 3 of the NGMS projects had developed charters that included most of the relevant high-level information. For example, all of the charters included such information as the project purpose, description, and high-level risks and requirements, as well as the names of the assigned project managers. Regarding measurable objectives and success criteria, 10 of the projects included objectives, while 6 had related success criteria. Lastly, 10 project charters included a summary schedule, and 1 included a summary budget. While most of the charters contained high-level information, other essential details were not included, such as the authority levels of project leaders and the requirements for approving the completion of the projects. Specifically, while each of the charters generally referenced HUD’s PPM framework and the associated governance committees, the charters did not explicitly state what results would constitute project success (e.g., a specified number of project objectives met) or what individuals or entities would be responsible for final sign-off at the completion of the project. For example, FHA Transformation’s LEAP Institution Manager project charter included the project’s purpose, high-level risks and requirements, and measurable objectives and related criteria.
Specifically, the charter noted that, by the end of fiscal year 2014, the project would result in the retirement of four systems, eliminating the associated costs for operations and maintenance. The charter also incorporated a summary schedule and the names of its project manager and sponsor. However, while the charter included a total budget figure, it did not include details regarding the breakdown of the budget provided, the responsibilities and levels of authority given to the manager and sponsor identified, and the requirements for approving the completion of the project. Additionally, the NGMS Integrated Budget Forecasting Model project charter provided the project’s purpose, high-level risks and requirements, the names of the project sponsors and managers, a summary schedule, and measurable objectives with related success criteria. In particular, regarding the measurable objectives, the charter stated that the project would reduce the average time to respond to ad hoc requests for budgetary reports and data from 3 days to 1 day. However, while the charter referenced the OMB Capital Asset Plan and Business Case Summary (Exhibit 300) for a list of associated costs, it did not include a summary of the project’s expected budget. This charter also did not include the responsibilities and the authority levels of the sponsors and managers or the project completion requirements. FHA Transformation and NGMS officials acknowledged the absence of these details in the charters and attributed the deficiencies to the general immaturity of the department’s project management practices. Regarding the remaining two NGMS projects for which charters had not yet been developed, project officials stated in April 2013 that one project was in the process of developing a charter, while relevant information for the other project was expected to be incorporated into the Budget Forecasting and Formulation charter.
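The charter assessment described above can be sketched as a completeness check against the elements that best practices call for. The field names below are hypothetical, not HUD’s actual templates, and the sample charter merely mirrors the pattern of gaps found (purpose and schedule present; success criteria, budget, approval requirements, and authority levels absent).

```python
# Elements a project charter should contain, per the best practices
# discussed above (field names are hypothetical).
REQUIRED_ELEMENTS = {
    "purpose", "measurable_objectives", "success_criteria",
    "summary_schedule", "summary_budget", "approval_requirements",
    "leadership_authority_levels",
}

def missing_elements(charter: dict) -> set:
    """Return the required charter elements that are absent or empty."""
    return {e for e in REQUIRED_ELEMENTS if not charter.get(e)}

# A charter resembling those reviewed: some high-level detail,
# but no success criteria, budget, approvals, or authority levels.
charter = {
    "purpose": "Retire four legacy systems by the end of FY 2014",
    "measurable_objectives": ["retire 4 systems"],
    "summary_schedule": "FY 2013 - FY 2014",
}
print(sorted(missing_elements(charter)))
# ['approval_requirements', 'leadership_authority_levels',
#  'success_criteria', 'summary_budget']
```

A check of this kind could be run by a review committee at a control gate before a charter is approved, making the missing details visible early.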
In the absence of charters that reflect all of the essential elements, HUD lacks clear definitions of what will constitute success for its modernization projects and has less ability to hold the responsible officials accountable for this success. Moreover, the lack of important details that a charter is intended to provide at the initial authorization of a project makes it more difficult to undertake other project planning activities, such as developing work breakdown structures, project management plans, and requirements. A work breakdown structure is the cornerstone of every project because it defines in detail the work necessary to accomplish a project’s objectives and provides a basic framework for a variety of related activities like estimating costs, developing schedules, identifying resources, and determining where risks may occur. According to best practices, this artifact should be a deliverable-oriented hierarchical decomposition of the work to be executed by the project team to accomplish the project’s objectives. Moreover, best practices state that it should represent the entire scope of the project and product work, including project management, and it should be standardized to enable an organization to collect and share data among projects. In addition, it should be accompanied by a dictionary that describes in brief narrative form what work is to be performed in each of the various work breakdown structure elements. Of the 9 FHA Transformation and 5 NGMS projects, none had developed complete work breakdown structures and associated dictionaries; only one NGMS project—Budget Forecasting and Formulation—had a draft work breakdown structure and associated dictionary. However, while this draft work breakdown structure included details regarding the first increment of the project, neither it nor the associated dictionary provided details for any of the future planned increments. 
Thus, it did not reflect the entire scope of the project and lacked descriptions of the work that would be performed following the first increment, which is expected to deploy initial functionality by late summer of 2013. Further, rather than being organized by deliverables—that is, unique and verifiable products, results, or capabilities—the draft was organized by life-cycle phases such as definition and design. As a result, it did not allow for progress to be measured by deliverable, which would enable more precise identification and effective mitigation of the root causes for any cost or schedule overruns. Moreover, developing a deliverable-oriented work breakdown structure would show how deliverables relate to one another as well as to the overall end product. According to NGMS officials, plans are under way to fully develop work breakdown structures that represent the first and second increments of all projects in late spring 2013. Notwithstanding these plans, as of April 2013, a specific time frame for developing the work breakdown structures and associated dictionaries for the third increment of NGMS projects had not yet been determined. Further, regarding the 9 FHA Transformation projects, in April 2013 officials stated that a work breakdown structure and dictionary to represent the entire modernization effort are being developed. However, the department was not able to provide a specific date for when this documentation would be completed. NGMS and FHA Transformation officials stated that work breakdown structures were not initially developed for the projects because the PPM framework did not require the completion of this artifact. The officials added that, in addition to HUD project management practices lacking maturity, their staff had not yet developed the expertise required to create this artifact.
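A deliverable-oriented work breakdown structure of the kind described above can be modeled as a hierarchy in which every element is a product or capability rather than a life-cycle phase, with each element carrying its dictionary entry. The deliverables below are invented for illustration; this is a minimal sketch of the structure, not HUD’s actual WBS.

```python
from dataclasses import dataclass, field

@dataclass
class WBSElement:
    """One deliverable in a work breakdown structure, paired with its
    dictionary entry (a brief narrative of the work to be performed)."""
    code: str              # hierarchical numbering, e.g. "1.2"
    deliverable: str       # a product, result, or capability
    dictionary_entry: str  # narrative description of the work
    children: list = field(default_factory=list)

    def leaves(self):
        """Yield the lowest-level work packages under this element."""
        if not self.children:
            yield self
        for child in self.children:
            yield from child.leaves()

# Hypothetical deliverable-oriented decomposition (not phase-oriented).
wbs = WBSElement("1", "Budget forecasting solution", "Entire project scope", [
    WBSElement("1.1", "Forecasting engine", "Build and test the forecasting model"),
    WBSElement("1.2", "Data aggregation service", "Migrate and consolidate legacy data"),
    WBSElement("1.3", "Project management", "Plan, monitor, and report on the project"),
])
print([e.code for e in wbs.leaves()])  # ['1.1', '1.2', '1.3']
```

Because each leaf is a verifiable deliverable, costs and schedule dates can be estimated and tracked per work package, which is what allows overruns to be traced to a specific deliverable rather than to a whole phase.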
Until FHA Transformation and NGMS develop deliverable-oriented work breakdown structures and associated dictionaries for all of their projects, these efforts will lack critical information for understanding the detailed work that needs to be performed to accomplish project objectives. Further, by not defining the work to be performed, HUD cannot provide reasonable assurance that cost and schedule estimates will capture all the relevant information needed for the management of these efforts. According to project management practices, a project management plan is the primary source that defines how a project is to be executed and controlled. Best practices emphasize the importance of having such plans in place to, among other things, establish a complete description that ties together all project activities and evolves over time to continuously reflect the current status and desired end point of the project. Moreover, these practices state that to be effective, a project management plan should identify life-cycle processes to be applied, outline plans for project tailoring (i.e., determining what processes and documentation would be necessary to accomplish project objectives), provide communication techniques to be used, and list management reviews. Further, building on the initial summary schedule and budget in the charter, the project management plan should include baseline cost and schedule estimates developed during planning activities. Moreover, this baseline information should be updated as needed and periodically compared with actual performance data in order to track and report progress. Finally, the plan should include, or make reference to, subsidiary management plans that describe how subordinate activities are to be carried out for the project. 
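Comparing baselines with actual performance data, as the practices above call for, reduces to a simple variance computation. The figures below are hypothetical (loosely echoing the dollar amounts and the July-to-November slip discussed later in this section), and the function is an illustrative sketch rather than an earned-value implementation.

```python
def variances(baseline_cost, actual_cost, baseline_days, actual_days):
    """Return (cost variance, schedule variance in days) against the
    baselines recorded in the project management plan. Negative values
    indicate an overrun or a schedule slip (illustrative sketch)."""
    return baseline_cost - actual_cost, baseline_days - actual_days

# Hypothetical project: baselined at $1.9 million and 180 days,
# currently tracking at $2.1 million and 300 days.
cv, sv = variances(1_900_000, 2_100_000, 180, 300)
print(cv, sv)  # -200000 -120
```

Without a clearly identified baseline, there is nothing to subtract the actuals from, which is why the absence of baselines in a project management plan makes progress reporting unreliable.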
Six FHA Transformation and 5 NGMS projects had drafted or completed project management plans that outlined life-cycle processes, identified communication techniques and management reviews, and incorporated certain subsidiary management plans. For example, the plan for FHA Transformation’s Legacy Application Transformation project included an approach for tailoring the life-cycle processes to be used; contained a communication table with details about what techniques would be used; and described different types of management reviews, including an official team review and a structured walkthrough. Similarly, the plan for NGMS’s Integrated Budget Forecasting Model project indicated that the project was following HUD’s PPM framework, which includes tailoring life-cycle processes; contained communication techniques; and identified different types of management reviews, including audit reviews and post-project reviews. However, the plans provided for the 11 projects lacked other essential information. Specifically, they did not clearly identify cost and schedule baselines or consistently incorporate subsidiary plans. For example, the plan for FHA Transformation’s MFH Development & Underwriting Business Process Reengineering/Automated Underwriting Solution project listed milestones, such as implementing a solution by July 2013, and referenced a total cost of ownership artifact that indicated a cost of $1.9 million. Yet, the plan did not indicate if these were considered to be the project’s schedule and cost baselines against which progress would be measured. The impact of the lack of clear baselines was evidenced by inconsistencies between the project management plan and other project documentation. For example, weekly status reports indicated that the solution would be implemented by November 2013, rather than by the July 2013 date identified in the project management plan.
This lack of clarity regarding the project’s cost and schedule baseline makes it difficult to accurately measure and report progress against commitments made to deliver functionality. In a similar example, the project management plan for NGMS’s Integrated Budget Forecasting Model provided project milestones and identified cost estimates by life-cycle phase, but it did not specify if these figures represented cost or schedule baselines developed as part of planning activities. Additionally, the plan that reflected the other 4 NGMS projects in our study included subsidiary plans for requirements, scope, schedule, cost, quality, human resources, and risk management, but it did not incorporate necessary details in the acquisition strategy and lacked one for process improvement. According to FHA Transformation and NGMS officials, the project management plans did not include cost and schedule baselines, in part, because the baseline information had been included in the updates provided to OMB. However, including a project’s cost and schedule baseline in a project management plan is important because the plan serves as a primary source of information used to execute and manage the project. In addition, such baseline information provides managers and sponsors the foundational basis for measuring project progress. Relying on information reported to an external entity such as OMB rather than on cost and schedule baselines to manage projects may not allow the project manager to have accurate real-time information available when responding to stakeholder interests regarding the status of project progress. Regarding the incorporation of subsidiary plans, the officials stated that these plans were not required by the PPM framework to complete control gate reviews and, as a result, were not fully addressed in all of the project plans. 
Further, for the remaining 3 FHA projects that had not yet developed project management plans, FHA Transformation officials said the projects were still completing initial planning activities. In accordance with the PPM framework, the projects would be expected to develop plans when those activities are completed. Until FHA Transformation and NGMS have comprehensive project management plans that reflect cost and schedule baselines and fully incorporate subsidiary plans for process improvement and acquisition management, these modernization efforts will continue to lack a foundational tool needed for successfully managing their projects and for providing stakeholders with insight into the status of the projects. According to project management practices, effective planning of requirements includes documenting the processes and methods to be used for developing and managing requirements from initial identification through implementation. Such practices state that requirements management plans should incorporate the approach for how requirements development activities (e.g., collecting requirements) will be conducted and how changes will be managed; identify methods for prioritizing requirements; and specify the metrics to be used to measure products against identified requirements, among other things. As we previously reported, effective planning for requirements development and management activities can reduce the risk of cost overruns, schedule delays, and limitations in system functionality. Seven of the FHA Transformation and all 5 of the NGMS projects in our review had developed requirements management plans that documented how requirements development activities would be conducted, including managing changes. 
For example, the FHA Transformation’s Healthcare Automated Lender Application Pilot project plan outlined processes for how changes to requirements would be, among other things, evaluated upon submission and analyzed for determining their impact on original requirements in order for decisions to be made regarding proposed changes. In addition, the NGMS Integrated Budget Forecasting Model project plan indicated that requirements would be gathered through interviews between the contractor and stakeholders, and provided detailed policies and procedures for developing and maintaining requirements. However, only 1 of the 12 projects—LEAP Automation of Lender Approval Workflow—identified methods for prioritizing requirements, and none of the projects established metrics for determining the extent to which the products developed addressed requirements. The more than 2,400 functional requirements identified for the NGMS Budget Forecasting and Formulation project illustrate the significance of this point: without prioritization, developers may have difficulty determining which among the many requirements to focus on first. FHA Transformation and NGMS officials stated that they had followed the PPM framework template to develop the requirements management plans for their projects. However, they added that the framework did not call for prioritization methods to be identified in the requirements management plan, and the department’s governance committee responsible for project oversight did not provide feedback to indicate that the plans needed to include this information. As of late April 2013, according to FHA Transformation officials, the remaining 2 projects had not yet developed requirements management plans because the projects were still in initial planning.
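The value of a documented prioritization method can be made concrete with a short sketch. The MoSCoW-style scheme, requirement identifiers, and field names below are hypothetical and are not drawn from HUD documentation; any agreed-upon ordering scheme would serve the same purpose.

```python
# Hypothetical MoSCoW-style priority scheme: lower rank means address first.
PRIORITY_ORDER = {"must": 0, "should": 1, "could": 2, "wont": 3}

# Hypothetical requirements as they might appear in a requirements plan.
requirements = [
    {"id": "REQ-104", "priority": "could"},
    {"id": "REQ-001", "priority": "must"},
    {"id": "REQ-045", "priority": "should"},
]

def by_priority(reqs):
    """Order requirements so higher-priority items are addressed first."""
    return sorted(reqs, key=lambda r: PRIORITY_ORDER[r["priority"]])

for r in by_priority(requirements):
    print(r["id"], r["priority"])
```

With thousands of functional requirements, an explicit ordering like this is what lets developers and managers allocate limited resources to the highest-priority items first.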
Without establishing methods for prioritizing requirements, the department will lack vital information needed to allocate resources in a manner that ensures that higher-priority requirements are addressed before lower-priority ones. In addition, until metrics for determining how products address requirements are established, the department lacks the ability to ensure that products will address business needs. As a result of these missing details, HUD increases the risk that implemented solutions may not effectively support the department’s mission. According to best practices, the development of a requirements traceability matrix is intended to link business needs outlined in high-level requirements to more detailed requirements. Traceability refers to the ability to follow a requirement from origin to implementation and is critical to understanding the interconnections and dependencies among the individual requirements and the impact when a requirement is changed. Requirements matrixes provide tracing to, among other things, business needs and the criteria used to evaluate and accept requirements. Further, the use of attributes (e.g., a unique identifier, priority level, status, and completion date) in the matrix helps define the requirement to facilitate traceability. As we have reported, establishing and maintaining traceability is important for understanding the relationships among requirements—from the point at which business requirements are initially established through the execution of test cases to validate the resulting product. Six FHA Transformation projects and 2 NGMS projects had developed requirements traceability matrixes to track their requirements. However, the eight matrixes that had been developed varied in the extent to which project requirements linked detailed functional requirements backwards to high-level business needs and forward to implementation.
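The backward and forward tracing described above amounts to a simple consistency check over matrix records, as the following sketch shows. The identifiers, statuses, and field names below are hypothetical and do not come from the projects’ actual matrixes.

```python
# Hypothetical traceability records: each detailed requirement carries the
# attributes best practices call for (unique identifier, parent business
# need, status).
matrix = [
    {"id": "FR-01", "business_need": "BN-1", "status": "completed"},
    {"id": "FR-02", "business_need": "BN-1", "status": "in progress"},
    {"id": "FR-03", "business_need": None,   "status": "completed"},
]
business_needs = {"BN-1", "BN-2"}

def trace_gaps(matrix, business_needs):
    """Find requirements with no parent need and needs with no requirements."""
    orphans = [r["id"] for r in matrix if r["business_need"] not in business_needs]
    covered = {r["business_need"] for r in matrix}
    uncovered = sorted(business_needs - covered)
    return orphans, uncovered

orphans, uncovered = trace_gaps(matrix, business_needs)
print("Requirements with no parent business need:", orphans)
print("Business needs with no requirements:", uncovered)
```

A matrix that cannot pass checks like these in both directions leaves it unclear whether the delivered functionality actually maps back to mission needs.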
Further, attributes intended to allow the original business needs to be traced to detailed functional requirements were incomplete or missing. For example, the FHA Transformation Portfolio Risk Reporting & Analytics project matrix supported traceability of requirements back to higher-level business goals and also provided specific attributes such as unique identifiers. However, the matrix did not link the high-level requirements outlined in the project’s requirements definition documentation to more detailed requirements or trace to documentation that described criteria to be used for evaluation and acceptance of requirements. In addition, the NGMS Integrated Budget Forecasting Model matrix included requirements that were traced from high-level to more detailed requirements and recorded specific attributes such as a unique identifier and the current status. However, the matrix did not provide traceability to criteria for evaluating and accepting requirements or consistently record accurate information regarding the current status of the requirements. For instance, the matrix included two identical requirements, but each stated requirement had a different disposition: the information for one requirement indicated that it had been “completed” while information for the other requirement included a notation of “discontinued,” but without any associated dates to clarify which disposition accurately described whether the requirement had been implemented. Further, the NGMS Budget Forecasting and Formulation project developed a matrix that used a unique identifier that allowed traceability from 15 high-level requirements to more detailed functional requirements. In addition, these requirements and the traceability matrix were approved by the appropriate stakeholders. However, the matrix did not document several other attributes, including status, or provide traceability to criteria for the evaluation and acceptance of these requirements. 
In particular, the matrix did not establish priorities for requirements to aid in ensuring that those of highest priority are addressed first. According to FHA Transformation and NGMS officials, the PPM framework guidance was used in creating the matrixes and, in many cases, the projects relied on contractors to complete the artifacts. In addition, according to these officials, project resources were focused on providing the documentation required by the framework and associated governance committee. As a result, information that was not explicitly identified as being required in an artifact, such as matrixes that demonstrate traceability, was not developed. With regard to the remaining 3 FHA Transformation and 3 NGMS projects, department officials said the projects were still completing initial planning activities and had not reached a point where requirements had been defined to populate a matrix. The incomplete state of the requirements traceability matrixes makes it unclear which mission needs have been addressed by project functional requirements and which are planned to be implemented in a solution. Without fully traceable requirements for each project, the FHA Transformation and NGMS modernization efforts are limited in their ability to know whether necessary requirements are being implemented or whether those being implemented support defined business needs. Best practices also state that effective IT project management involves creating a strategy for acquisition planning. The strategy should be based on the needs of each individual project and can be formal or informal, and highly detailed or broadly framed. The strategy should also be incorporated as a subsidiary component of the project management plan.
An acquisition strategy serves as the road map for effective acquisition planning and should document the types of contracts to be used, address contract risks, determine dates for deliverables, and coordinate contracts with other processes, such as scheduling and performance reporting. Additionally, the strategy should reflect early identification of metrics to be used in managing and evaluating contractors to help ensure that business needs are addressed through contracted support. FHA Transformation and NGMS each developed one acquisition strategy that was intended to represent all the projects being undertaken by their respective modernization efforts. In addition, while the acquisition strategy for NGMS was intended to represent all the projects, one project—Integrated Budget Forecasting Model—also developed its own individual strategy. These three strategies identified the types of contracts (e.g., time and materials, firm-fixed price, or interagency agreement) that were planned to be awarded for their associated projects. For instance, the FHA Transformation strategy stated that indefinite-delivery, indefinite-quantity contracts would be awarded and associated task orders would be firm-fixed price, time and materials, or labor hour. The NGMS strategy stated that it would utilize existing interagency agreements and work with small disadvantaged businesses for its contract needs. Further, the NGMS Integrated Budget Forecasting Model project’s separate acquisition strategy identified the type of contract to be used (i.e., blanket purchase agreement with firm-fixed price task orders), addressed contract risks (e.g., the unavailability of server space), and determined dates for deliverables (e.g., create and update detailed functional requirements between January 20 and February 10, 2011).
Nonetheless, the three strategies did not fully document details needed for effective acquisition planning, such as how risks would be addressed, dates for deliverables, coordination with other processes, and metrics needed for evaluating contractors. For example, the FHA Transformation acquisition strategy identified dates for projects, but did not state how contract dates would be coordinated with schedule processes. Moreover, neither the NGMS strategy nor the individual strategy for the Integrated Budget Forecasting Model project stated how other project processes, such as requirements development, would be coordinated with acquisitions or identified metrics for assessing contractors’ performance. FHA Transformation and NGMS officials stated that the strategies developed were based on the PPM framework template and that the strategies had been approved by the Technical Review Subcommittee, which did not identify the deficiencies. Further, while a strategy should guide acquisition planning, OCIO officials said the requirement in the PPM framework did not call for developing strategies prior to awarding contracts. Without strategies that guide planning activities in order to ensure that acquisitions are managed in accordance with other processes and provide performance metrics, the department increases the risk that acquisitions associated with its modernization efforts will not be effectively managed and that acquired services or products will not meet its needs. As previously discussed, HUD’s project management framework and associated governance structure were established to provide policies and procedures for managing the department’s IT investments. Specifically, the framework provides instructions, templates, and checklists intended to help ensure important details are incorporated for use during the execution and management of project activities.
The department’s governance structure is responsible for ensuring that all necessary documentation is produced for all IT projects through control gate reviews conducted by the Technical Review Subcommittee. Officials responsible for the 9 FHA Transformation and 5 NGMS projects in our review stated that they relied on the department’s PPM framework to implement project management practices and the artifacts discussed in this report. However, guidance discussed in the framework did not always include essential information called for by best practices. For example, the guidance for developing requirements management plans did not specifically direct the projects to identify methods for prioritizing requirements. In addition, the projects did not develop strategies early enough to guide acquisitions because the framework did not call for the strategy to be developed until after projects completed initial planning activities. In other cases, where guidance existed, FHA Transformation and NGMS projects did not always follow it or adequately implement the tools provided by the framework in developing the documentation we examined. This was particularly evident in the development of work breakdown structures. For example, the PPM guidance included specific details regarding the importance of developing work breakdown structures as the basis for defining project work and establishing reliable cost and schedule baselines. However, as noted earlier, only 1 of the 14 projects in our study had drafted a work breakdown structure. Further, the project management plan template and guidance call for incorporation of cost and schedule baselines and approaches for how those will be managed for any given project. However, the project management plans we examined did not clearly incorporate such baselines or how they would be managed.
Compounding the issue of inadequate development and use of the framework was the lack of evidence that the department’s governance bodies had provided adequate oversight to ensure compliance with project management practices. In particular, the department’s Technical Review Subcommittee did not express concerns regarding the alignment of FHA Transformation or NGMS documentation with the framework and, when issues were raised, the subcommittee nonetheless allowed the projects to proceed. In doing so, the projects were able to move to the next control gate review without critical information—a practice that could result in projects proceeding for months without correcting flaws or inadequacies in information that was vital to effective project management. Specifically, in examining documentation for control gate reviews, we found that the Technical Review Subcommittee did not consistently operate as intended or use the guidance provided in the department’s framework. While the department’s framework outlines processes for conducting control gate reviews of projects and provides templates to be used, the reviews were conducted without using the framework guidance. For example, the control gate review procedures state that documentation should be assessed based on (1) its accuracy in capturing necessary information for the project’s development, (2) its completeness with a level of detail sufficient to provide correct and relevant information, and (3) the adequacy of information in the artifact to make it actionable and informative. The framework also provided a decision document intended to capture any issues or concerns identified by the subcommittee. However, it was not evident that any of the control gate reviews conducted from 2011 through 2012 had assessed the documentation against the outlined criteria or that the decision document was used.
For example, during this time, none of the control gate review documents provided for FHA Transformation and NGMS included an assessment of the documentation against the criteria in the control gate review procedures, and meeting minutes or e-mails were used to record high-level issues or concerns identified instead of the more detailed information called for in the decision document. According to responsible OCIO officials, the subcommittee did not assess compliance with the framework, but was focused on reviewing the technical aspects of IT projects. The officials also noted that the subcommittee did not have the staff needed to fully implement the control gate review guidance included in the framework, but that it did look to see if the identified artifacts were developed for each project. Further, these officials stated that it was the responsibility of the project managers and their teams to address issues identified before the next control gate review, but that the subcommittee did not enforce any specific deadlines. Based on our assessment of the control gate review documentation, as well as interviews with OCIO and modernization effort officials, it was not clear that the subcommittee consistently considered its role to include a full assessment of the artifacts for compliance with the framework outlined in control gate review guidance and templates. According to OCIO officials, the initial implementation of the framework focused on attempting to get projects to understand basic project management, and as a result, the department limited the focus of the first version of the framework. In April 2013, the department reported that it was working on a revised version of the framework that would be released in September 2013. However, the preliminary information provided regarding the revisions planned for the framework did not incorporate information to address all the deficiencies identified by project officials or highlighted in this report.
For example, draft documents regarding the planned revisions did not explicitly state whether work breakdown structures and associated dictionaries would be required documentation to serve as the basis of cost and schedule baselines. Further, the preliminary information did not specify whether methods for prioritizing requirements were to be incorporated into the requirements guidance. Additionally, as of February 2013, the department had assigned new leadership for managing the control gate reviews. According to this official, the control gates are expected to be revised to ensure that artifacts are evaluated and that the subcommittee takes a more active role in assessing the application of project management practices. However, the department did not state whether it would clarify the Technical Review Subcommittee’s role or associated guidance outlined in the PPM framework or identify time frames for implementing the anticipated changes. Until HUD has a PPM framework for managing its projects that incorporates the abovementioned details, including clarifying the role of the Technical Review Subcommittee, and is appropriately used in managing its modernization efforts, the department increases the risk of continuing to inadequately apply project management practices and will not be positioned to effectively manage or report progress of its modernization efforts. HUD has taken steps toward applying best practices by establishing a framework for standardizing project management, and to varying degrees, the FHA Transformation and NGMS modernization efforts have developed basic documentation in the areas of project planning, requirements management, and acquisition planning. Notwithstanding these initial actions, the limited extent to which its modernization efforts implemented key practices in these areas puts its projects at an increased risk of failure.
Specifically, the absence of complete information in foundational documentation intended to guide these efforts—such as project charters that define project success, deliverable-oriented work breakdown structures that detail the work needed to be accomplished, project management plans that include cost and schedule baselines, requirements management plans that provide methods for prioritizing requirements, requirements traceable to desired capabilities, and sound acquisition strategies that guide planning activities—means that HUD has not taken the steps to fully define its modernization efforts in terms of what they will accomplish, what steps are necessary to complete them, what they will cost, when they will be completed, what specific functionality is needed to meet their goals, and how contractors will be held accountable for performance. This indicates that, despite the steps that have been taken, the maturity of HUD’s project management practices does not sufficiently position the department to successfully carry out these efforts. Contributing to these deficiencies is that the department has not developed and used its project management framework in a manner that ensured the quality or completeness of project management documentation. Additionally, the lack of adequate oversight from the Technical Review Subcommittee resulted in projects not fully understanding how to develop complete artifacts. Until it addresses these weaknesses in applying project management practices, HUD may continue to invest resources in modernization projects that will not satisfy business needs and support its mission. Moreover, fully implementing effective project management practices is critical not only for the success of these modernization efforts, but also for that of the other five IT Transformation Initiatives or any other projects under way or undertaken in the future.
To ensure that HUD effectively and efficiently manages its modernization efforts aimed at improving its IT environment to support mission needs, we recommend that the Secretary of Housing and Urban Development direct the Deputy Secretary to establish a plan of action that identifies specific time frames for correcting the deficiencies highlighted in this report for both its ongoing projects, as applicable, and its planned projects, to include

- developing charters that define what constitutes project success and establish accountability;
- finalizing deliverable-oriented work breakdown structures and associated dictionaries that define the detailed work needed to accomplish project objectives;
- completing comprehensive project management plans that reflect cost and schedule baselines and fully incorporate subsidiary management plans;
- establishing requirements management plans that include prioritization methods to be applied and metrics for determining how products address requirements;
- completing matrixes to include requirements traceability from mission needs through implementation; and
- establishing strategies that guide how acquisitions are managed in coordination with other processes and that ensure performance metrics are established.
Further, to improve development and use of the department’s project management framework, we recommend that the Secretary direct

- the FHA Transformation and NGMS steering committees to ensure that project management expertise needed to apply the guidance outlined in the framework is provided to execute and manage their respective projects;
- the Chief Information Officer to ensure that revisions to the framework incorporate specific information to address the areas of deficiency in project planning, requirements management, and acquisition planning identified in this report; and
- the Customer Care Committee to review the role and responsibilities of the Technical Review Subcommittee and ensure that the department’s governance structure operates as intended and adequately oversees the management of all of its modernization efforts.

We provided a draft of this report to HUD for review. In response, HUD provided a letter, signed by the Acting Chief Information Officer, which included a chart containing the department’s written comments on the draft report. In the chart, the department outlined its views related to our four recommendations, and provided other comments and technical corrections on information in specific sections of the draft report, including the background and appendix I, our discussion of the findings on the development and use of HUD’s project management framework, and the report title page. The department’s comments are reprinted in their entirety in appendix IV. In commenting on our recommendations, the department discussed actions it was taking on various aspects of the first recommendation, but did not state whether or not it concurred with the entirety of the recommendation; stated that our conclusion leading to the second recommendation did not follow from the premises established in the draft report; and concurred with our third and fourth recommendations. Summaries of HUD’s comments for each recommendation, along with our responses, follow.
With regard to the first recommendation—which called for the Deputy Secretary to establish a plan of action that identifies specific time frames for correcting the deficiencies highlighted in this report for both of its ongoing projects, as applicable, and its planned projects—the department noted activities that FHA Transformation expects to undertake in addressing the deficiencies for the six specific items listed as part of this recommendation. In this regard, the department stated that FHA Transformation acknowledged the need to update project charters and project management plans, develop deliverable-oriented work breakdown structures, examine and correct the requirements management plans and traceability matrixes, and work with support offices to ensure acquisition planning occurs at the earliest possible opportunity in the project’s life cycle. The department added that FHA Transformation had recognized the need to update its charters and project management plans well ahead of our draft report. Nonetheless, updated artifacts for FHA Transformation were not provided during our review. Moreover, the department did not address whether or how it intends to address deficiencies for its ongoing or planned projects, including those associated with the NGMS modernization effort. Accordingly, we maintain that it is important for HUD to establish a plan of action that identifies specific time frames for addressing the deficiencies in its IT projects. As acknowledged in the department’s comments, efforts to improve these project management practices could be applied to the other five IT Transformation Initiatives or any other projects under way or undertaken in the future.
For our second recommendation, which called for the FHA Transformation and NGMS steering committees to ensure that project management expertise needed to apply the guidance outlined in the framework is provided to execute and manage their respective projects, the department contended that our conclusion leading to this recommendation did not follow from the premises established in the report. The department stated that it has ample talent and that providing additional talent would likely yield similar results regarding its deficiencies until the underlying steps are taken to apply effective project management practices. We agree that applying effective project management practices is important; however, in our view, it is essential for the FHA Transformation and NGMS steering committees to ensure that their respective modernization efforts have the expertise needed to do so, as it pertains to the development of tools such as work breakdown structures and requirements traceability matrixes. During our study, department officials stated on multiple occasions that certain artifacts and practices were not implemented because staff lacked expertise in these areas. For example, both FHA Transformation and NGMS officials stated that their staff had not developed the expertise required to create work breakdown structures. Similarly, these officials stated that projects had relied on contractors to complete requirements traceability matrixes. Additionally, as we noted, the officials acknowledged that a lack of project management maturity was the cause of many of the deficiencies identified. Moreover, in its comments on this report, the department stated that staff training for the transition to applying the framework was limited. Thus, for these reasons, we believe our recommendation is valid and should be implemented.
The department concurred with our third recommendation that the Chief Information Officer ensure that revisions to the framework incorporate specific information to address the areas of deficiency in project planning, requirements management, and acquisition planning identified in this report. In commenting on the fourth recommendation, the department concurred with the need for the Customer Care Committee to review the role and responsibilities of the Technical Review Subcommittee and ensure that the department’s governance structure operates as intended and adequately oversees the management of its modernization efforts. In other comments, the department stated that the discussion of the department’s project management framework did not recognize the difficulty of implementing this framework over the past 2 years. It stressed that tremendous effort had been made by the FHA Transformation and NGMS modernization efforts toward applying the framework while continuing to make progress on their related projects. It also stated that time is needed to fully incorporate the framework throughout the department on projects other than these modernization efforts. While we acknowledge that the department has continued to take actions to improve its environment, the focus of our work for this report was on the implementation of project management practices for FHA Transformation and NGMS, specifically. As such, we did not assess the difficulties associated with improving the department’s overall capacity to manage its IT projects. We do agree that there are difficulties associated with applying project management practices while concurrently undertaking multiple modernization efforts and have previously reported on the progress HUD has made in addressing its limited capacity to manage and modernize its IT environment.
Regarding the title page, HUD commented that modernization efforts historically account for a relatively small percentage of IT projects at the department, and that a more comprehensive perspective that accounts for all IT investments should be considered in the title of our report. Our objective for this report was specifically to identify the extent to which key project management practices were implemented for the FHA Transformation and NGMS modernization efforts. As such, this report did not evaluate all of the department’s IT investments. However, in this report, we do acknowledge the value of HUD applying these practices to all of its IT projects and, moreover, we plan to undertake future work to evaluate the department’s institutionalization of its IT governance, which we anticipate will be more comprehensive in assessing the department’s management of IT investments. Lastly, the department stated that the report should contain historical information illustrating the distribution of modernization funding in contrast to funding available for the operation and maintenance of IT. Toward this end, we assessed all relevant data that the department provided to us regarding its IT funding against the data that it reported to OMB. However, we found these data to lack consistency and concluded they were not sufficiently reliable for inclusion in our report. With respect to HUD’s technical corrections on the draft report, we have incorporated revisions, as appropriate. Specifically, in the background section and appendix I, we included a footnote to clarify that the Office of Program Systems Management is within the Office of Housing/FHA Office of Multifamily Housing Programs. We also updated the report section that discussed the development and use of HUD’s project management framework by removing the specific reference to the Deputy Chief Information Officer for IT Operations.
In this same section, the department stated that OCIO did not concur with statements attributed to officials from the Technical Review Subcommittee. We modified the statements and the attribution in that section to represent more specifically what the officials stated. In doing so, we also further clarified the activities conducted by members of the Technical Review Subcommittee and comments provided by officials from the two modernization efforts. We are sending copies of this report to interested congressional committees. We are also sending copies to the Secretary of the Department of Housing and Urban Development and the Director of the Office of Management and Budget. Copies of this report will also be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6304 or melvinv@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are acknowledged in appendix IV. Our objective was to identify the extent to which the Department of Housing and Urban Development (HUD) has implemented key project management practices for the Federal Housing Administration Transformation Initiative (FHA Transformation) and the Next Generation Management System (NGMS) modernization efforts. To address this objective, we examined all 14 projects for FHA Transformation and NGMS that had been identified in the department’s fiscal year 2011 expenditure plan. This included 9 FHA Transformation and 5 NGMS projects, which are identified in tables 1 and 2 of this report. 
Because HUD recently began implementing project management practices for its information technology (IT) modernization projects, we reviewed the implementation of practices during the initial phases of the projects' life cycles; these practices establish the foundational plans and processes for managing projects throughout their life cycles. Specifically, we reviewed project planning and management practices essential for the success of modernization efforts in three areas: project planning, requirements management, and acquisition planning. We identified best practices in these areas from the Project Management Institute's (PMI) A Guide to the Project Management Body of Knowledge (PMBOK® Guide), the Software Engineering Institute's (SEI) Capability Maturity Model® Integration for Development (CMMI-DEV) and for Acquisition (CMMI-ACQ), and GAO's Cost Estimating and Assessment Guide (Cost Guide). For the 14 projects in our study, we assessed the three project management areas by reviewing six relevant documents to determine whether they contained essential information called for by best practices. Our assessment evaluated the extent to which these documents (1) were developed and contained essential information, (2) were developed but lacked essential information, or (3) had not yet been developed. Specifically: To assess project planning activities, we determined whether projects had developed project charters, work breakdown structures, and project management plans, and when they had, we compared the contents of these documents with project management practices in order to determine the extent to which critical elements were incorporated or executed on the projects. Specifically, we assessed whether project charters addressed important elements such as the project purpose or justification, the project manager's responsibility and authority level, and the name and responsibility of the project sponsor.
We assessed whether the work breakdown structures were deliverable-oriented hierarchical decompositions of the work to be executed and had associated dictionaries. Finally, we assessed whether project management plans addressed important elements such as the project life cycle, results of project tailoring, cost and schedule baselines, and subsidiary management plans. To assess requirements management, we determined whether projects had developed requirements management plans and requirements traceability matrixes, and when they had, we compared the contents of these documents with best practices in order to determine the extent to which each program was applying specific elements. Specifically, we assessed whether requirements management plans addressed important elements such as configuration management activities, methods used to prioritize requirements, metrics, and a traceability structure. In addition, we assessed whether requirements identified in matrixes were, among other things, traceable to business needs, opportunities, goals, and objectives and whether the matrixes included essential information such as requirements change requests and status. To assess acquisition planning, we determined whether the modernization initiatives had developed acquisition strategies and, when they had, compared the contents of these documents with key practices to determine the actions HUD is taking to ensure that the acquisitions for FHA Transformation and NGMS are planned in accordance with best practices and guidance. Specifically, we assessed whether acquisition strategies addressed important elements such as established dates for contract deliverables and procurement metrics.
We interviewed relevant HUD officials and staff in the FHA Transformation and NGMS project offices, including the General Deputy Assistant Secretary for Public and Indian Housing, the Director for the Office of Program Systems and Management, the Deputy Director of FHA Transformation, and the NGMS Program Manager. In addition, we interviewed officials from the department’s Chief Procurement Office, including the Deputy Chief Procurement Officer, and the Office of the Chief Information Officer, including the Acting Deputy Chief Information Officer for Business and IT Modernization, to obtain information on how these offices support the work of the two modernization efforts. Further, we attended and observed project status meetings, and related review sessions conducted by senior leadership, including HUD’s Deputy Secretary. We determined that information provided by the department, such as work breakdown structures and requirements traceability matrixes, was sufficiently reliable for the purposes of our review. To arrive at this assessment, we conducted reliability testing by comparing information with statements from relevant department officials to identify discrepancies. However, we did not test the quality of certain information, such as cost and schedule data provided by the program offices. We conducted this performance audit from June 2012 to June 2013, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objective. We believe that the evidence obtained provided a reasonable basis for findings and conclusions based on our audit objective. 
Appendix II: Summary of HUD's Transformation Initiative Information Technology Modernization Efforts

Descriptions of the modernization efforts follow:
- Develop and implement a modern financial services IT environment to better manage and mitigate risk across FHA's insurance programs for single-family housing, multifamily housing, and the insured healthcare portfolio.
- Reengineer processes and implement an automated system for managing grants that will reduce application time, eliminate manual paper processes, and increase the transparency of grant management information.
- Provide business intelligence and geospatial tools for documenting and assessing progress toward achieving strategic goals that will enhance transparency, reduce workload, increase employee productivity, and improve data quality.
- Integrate human resources systems and tools to allow for automated recruitment and hiring documentation, reduction of manual data entry, and accelerated candidate decision making.
- Integrate an acquisition management system that is compliant with federal regulations to reduce inefficiencies, time, and duplication in the procurement process across office locations to expedite services rendered to the public.
- Modernize and replace financial management systems through an outsourced shared services provider.
- Reengineer management processes to establish a technical infrastructure that will integrate disparate systems and provide consistent information in order to support rental housing assistance services.

Our assessment of FHA Transformation and NGMS implementation of key project management documentation in the areas of project planning (charters, work breakdown structures, and project management plans), requirements management (requirements management plans and traceability matrixes), and acquisition planning (acquisition strategies) is described below. Best practices recognize the development of a project charter as an integral step in project planning for establishing and maintaining project teams.
A charter formally authorizes a project and identifies high-level information that constitutes and assigns responsibility for project success. This is a critical artifact for creating project management plans, documenting business needs, and outlining the result a project is intended to achieve. Specifically, to be effective, project management practices state that a charter should include high-level requirements; a high-level project description; high-level risks; measurable objectives and related success criteria; the project's purpose or justification; a summary milestone schedule; a summary budget; project approval requirements (e.g., what results in project success and who is responsible for final sign-off); the assigned project manager, responsibility, and authority level; and the name and authority of the sponsor or other person(s) authorizing the project charter. According to best practices, a work breakdown structure is the cornerstone of every project because it defines in detail the work necessary to accomplish a project's objectives and provides a basic framework for a variety of related activities like estimating costs, developing schedules, identifying resources, and determining where risks may occur. In the context of the work breakdown structure, work refers to work products or deliverables that are the result of effort and not to the effort itself. Creating a work breakdown structure involves subdividing (or decomposing) project deliverables and work into smaller, more manageable components (called work packages) that can be scheduled, cost estimated, and managed. According to best practices, the work breakdown structure is a deliverable-oriented hierarchical decomposition of the work to be executed by the project team to accomplish the project's objectives and create the required deliverables.
Further, these practices state that a work breakdown structure should also represent the entire scope of the project and product work, including project management, and it should be standardized to enable an organization to collect and share data among projects. In addition, it should be accompanied by a dictionary of the various work breakdown structure elements that describes in brief narrative form what work is to be performed in each element. As we have previously reported, agencies need to develop comprehensive project management plans, and best practices emphasize the importance of having a plan in place that, among other things, establishes a complete description that ties together all activities and evolves over time to continuously reflect the current status and desired end point of the project. According to project management practices, a project management plan is the primary source that defines, among other things, how the project is to be executed and controlled. Project management plans can be either summary level or detailed and can be composed of one or more subsidiary plans to address elements of project management. To be effective, best practices state that a project management plan integrates cost and schedule baselines from planning activities, and this baseline information should be updated as needed and periodically compared with actual performance data in order to track and report progress. While the content of a project management plan will vary depending upon the type and complexity of a project, it is developed through a series of integrated processes and is progressively elaborated by updates during the execution and management of a project. 
Such plans include identification of the projected life cycle and processes to be applied to each phase; results of project tailoring; how the team will execute the work to accomplish project objectives; project cost and schedule baselines; how the team will maintain the integrity of performance measurement baselines; a change management plan that documents how changes will be monitored and controlled; the needs and techniques for communicating among stakeholders; key management reviews; a configuration management plan to define those items that are configurable, those items that require formal change control, and the process for controlling change to such items; and subsidiary management plans (scope, requirements, schedule, cost, quality, process improvement, human resources, communication, risk, and procurement). According to project management practices, effective planning of requirements includes documenting the processes and methods to be used for effectively developing and managing requirements from initial identification through implementation. A project's success is directly influenced by the care taken in capturing and managing requirements. Other essential planning activities such as developing a work breakdown structure or estimating a project's cost and schedule are built upon the requirements developed. Best practices state that in establishing requirements, project teams should plan requirements collection activities such as conducting interviews, focus groups, facilitated workshops, or other techniques, including surveys and prototypes. Depending on the type of project, the approach for managing requirements can vary, but requirements management plans document the approach for how requirements development activities will be conducted.
In particular, a plan includes how requirements activities (e.g., collecting requirements) will be planned, tracked, and reported; configuration management activities such as how changes will be initiated, analyzed, and managed; requirements prioritization methods; product metrics that will be used and the rationale for using them; and a traceability structure outlining attributes for a traceability matrix and identifying what other project documents requirements will be traced to. Project management practices state that requirements traceability matrixes are designed to support backward traceability by linking each requirement to the broader business objective it supports and forward traceability by linking these requirements to more detailed functional requirements. Traceability refers to the ability to follow a requirement from origin to implementation and is critical to understanding the interconnections and dependencies among the individual requirements and the impact when a requirement is changed. This bidirectional traceability can help management determine whether the project addresses all requirements and that those requirements and the related deliverables are traceable back to valid business needs. According to best practices, requirements matrixes provide tracing to business needs, opportunities, goals, and objectives; high-level requirements to more detailed requirements; criteria used for evaluation and acceptance of the requirements; a set of approved requirements; and the status of requirement changes and requests. Further, specific attributes associated with each requirement—such as a unique identifier, textual description, priority, version, current status, and date completed—should be recorded. According to best practices, effective IT project management also involves early planning for and management of acquisitions.
The planning process begins with the identification of project needs that can best be, or must be, met by acquiring products, services, or results outside of the organization. During planning, coordination of the acquisition with other project management activities, such as budgeting, scheduling, resource estimating, risk identification, and requirements definition, should be discussed and documented. Most organizations have documented policies and procedures specifically defining mandatory acquisition activities for obtaining contracted goods or services. The acquisition planning process should result in a plan or strategy that describes how management decisions will be applied for a particular project. Such strategies serve as the road map for effectively planning and managing acquisitions from initiation through contract closure. In particular, project management practices indicate that acquisition strategies should provide guidance for defining the types of contracts to be used; addressing risk management issues; coordinating procurement with other project aspects, such as scheduling and performance reporting; setting scheduled dates for contract deliverables and coordinating them with other project management processes; and establishing procurement metrics to be used in managing and evaluating contractors. In addition to the contact above, Teresa M. Neven (Assistant Director), Kami J. Corbett, Amanda C. Gill, Lee A. McCracken, John Ockay, and Shannin G. O'Neill made significant contributions to this report.
HUD relies extensively on IT to carry out its mission of strengthening communities and ensuring affordable housing and has reported that efforts are under way to modernize its aging, duplicative, and poorly integrated systems. Committee report language mandated GAO to evaluate the implementation of project management practices for HUD’s IT modernization efforts. The objective was to identify the extent to which the department implemented key project management practices for the FHA Transformation and NGMS modernization efforts. GAO assessed project management artifacts for 9 FHA Transformation and 5 NGMS projects in the areas of project planning (charters, work breakdown structures, and project management plans), requirements management (requirements management plans and traceability matrixes), and acquisition planning (acquisition strategies) against best practices. GAO also interviewed officials. The Department of Housing and Urban Development (HUD) has taken initial steps toward applying key project management practices in the areas of project planning, requirements management, and acquisition planning for its Federal Housing Administration Transformation (FHA Transformation) Initiative to address performance gaps in housing insurance programs and its Next Generation Management System (NGMS) to improve management of its affordable housing programs. However, HUD has not yet fully implemented any of these practices in executing and managing the information technology (IT) projects associated with these efforts. Specifically, while the department had developed project management artifacts such as charters and requirements management plans, none of these documents included all of the key details that could facilitate effective management of its projects such as full descriptions of the work necessary to complete the projects, cost and schedule baselines, or prioritized requirements, among other things. 
Department officials attributed these deficiencies to a lack of project management expertise. Because HUD has not taken these foundational steps to fully define its modernization efforts, the department is not well positioned to successfully manage or execute the associated projects. These incomplete documents limit the department's ability to fully understand the work to be completed or accurately report project progress. A major reason for these information deficiencies is HUD's inadequate development and use of its project management framework, which did not ensure the quality or completeness of artifacts developed. Specifically, the framework did not always include essential guidance and, in other cases, the projects did not always implement the guidance provided. Further, the governance structure did not consistently operate as intended to provide adequate oversight to ensure compliance with key project management practices. As a result, the department increases the risk of continuing to inadequately apply project management practices and may not be positioned to effectively manage or report progress of its key modernization efforts. Fully implementing effective project management practices is critical for the success of these two modernization efforts and others under way or planned. GAO recommends that HUD establish a plan of action to fully implement best practices, provide needed project management expertise, and improve the development and use of its project management framework and governance structure. In written comments, HUD concurred with the recommendations to improve its framework and governance, but did not concur with the entirety of the recommendation to develop a plan of action, and contended that the need for project management expertise did not follow from the premises established in the draft report. GAO maintains that these actions are necessary as discussed in this report.
Supports for low-income families are funded, designed, and administered by a combination of federal and state governments. Recent changes to federal laws have modified supports for low-income families in many ways and, in some cases, have altered the roles of the federal and state governments in the provision of these supports. Changing economic conditions have also affected the provision of supports for low-income families. Both the federal and state governments are involved in the provision of supports for low-income families, but the relative roles that the federal and state governments play with regard to funding and design vary by the type of support. Specifically, supports for low-income families vary in terms of whether they are funded with federal funds, state funds, or a combination; whether funding is fixed; and the extent to which the federal government, state governments, or a combination is responsible for determining eligibility rules, availability, and benefit structures. In addition, some supports, such as food stamps and Medicaid, are entitlements, for which eligible applicants are guaranteed receipt. For other supports, such as subsidized child care and transportation assistance, provision of the supports is not mandatory and receipt is not guaranteed. Table 1 illustrates the relative roles of the federal and state governments in the funding and design of supports, and indicates whether the supports are entitlements. Several federal programs for low-income families have been enacted or significantly revised in the last decade, as detailed below and in figure 1: 1990—Federal EITC expansion—In 1990, as part of the Omnibus Budget Reconciliation Act (1990 OBRA), the Congress changed the qualification standards and substantially increased the size of the EITC, at least in part to increase the progressivity of the overall federal tax system by reducing the federal tax burden of qualified low-income workers. 
In 1991, the first year that these changes were in effect, the number of families receiving the EITC increased by 1.4 million families to a total of 13.9 million, and they claimed a total of $11.2 billion in credits, which was an increase of $3.8 billion over 1990. 1993—Federal EITC expansion—As part of the August 1993 Omnibus Budget Reconciliation Act (1993 OBRA), the Congress increased the size of the maximum EITC for families with children, beginning in 1994, and extended coverage to very-low-income workers without children. The number of taxpayers claiming the EITC and total program costs increased steadily between tax years 1990 and 1994, partly because of both the 1990 and the 1993 OBRA expansions. 1996—PRWORA—With the enactment of the Personal Responsibility and Work Opportunity Reconciliation Act of 1996 (PRWORA), the Congress made sweeping changes to federal welfare policy for needy families. PRWORA ended the Aid to Families with Dependent Children program and authorized the TANF block grant to states at $16.5 billion annually. TANF provides temporary cash assistance and emphasizes work and responsibility over dependence on government benefits. PRWORA also combined several existing child care programs into one program designed to provide states with more flexible funding for subsidizing the child care needs of low-income families who are working or receiving education or training in preparation for employment. In fiscal year 2003, the Child Care and Development Fund (CCDF) provided states with up to $4.8 billion in federal funds for these purposes. In addition, PRWORA severed the link between cash assistance and Medicaid benefits and restricted legal immigrants’ access to public welfare benefits. 
1997—SCHIP—The State Children's Health Insurance Program (SCHIP) was created under Title XXI of the Social Security Act for states to offer coverage to children in families with incomes up to 200 percent of the federal poverty level (FPL) who do not qualify for Medicaid. Congress appropriated $40 billion in federal funds over 10 years (from fiscal year 1998 to 2007) to provide each state an annual allotment, which can be spent over 3 years, for SCHIP expenditures. State SCHIP expenditures are matched by federal payments up to the state's annual appropriated allotment. The federal share of each state's SCHIP expenditures ranges from 65 to 83 percent; the federal share of total SCHIP expenditures is about 72 percent. In designing their SCHIP programs, most states chose to establish separate, stand-alone components, often concurrent with a Medicaid expansion. 1998—WIA—The Workforce Investment Act (WIA) was passed in 1998 to consolidate services of many employment and training programs, mandating that states and localities use a centralized service delivery structure – the one-stop center system – to provide access to most federally funded employment and training assistance. Under WIA, the federal government appropriates funds to states each year, and states have 3 years to spend those funds. In each fiscal year from 2000 to 2002, approximately $3.9 billion in federal WIA funds was appropriated to the states. 1998—Quality Housing and Work Responsibility Act—Under the 1998 Quality Housing and Work Responsibility Act, provisions were put in place to provide public housing agencies with increased flexibility while also increasing accountability. In addition, the Act facilitated the implementation of mixed-income communities, aimed to reduce the concentrations of poverty in public housing, and created incentives and opportunities for residents to work and become self-sufficient.
Further, the Act introduced a new Section 8 housing voucher program designed to be more market-driven and accommodated the replacement or revitalization of severely distressed public housing projects. Most provisions in the Act became effective October 1, 1999. 2001—Economic Growth and Tax Relief Reconciliation Act—As part of the 2001 Economic Growth and Tax Relief Reconciliation Act, the Congress introduced several marriage tax penalty relief provisions, including one that affects the structure of the EITC. This provision increased the EITC phase-out start and end points for married couple joint tax returns by $3,000, with the increase phased in over a 7-year period starting in calendar year 2002. 2002—Farm Bill Changes to Food Stamps—The Farm Security and Rural Investment Act of 2002 reauthorized the Food Stamp Program through fiscal year 2007. The law also introduced a variety of changes to the Food Stamp Program, including the expansion of eligibility for certain groups of noncitizens, the addition of a provision that allows states to provide "transitional" food stamp benefits for up to 5 months for families leaving TANF, and the addition of a number of other state options to ease access to the program and administrative burdens on applicants/recipients and program operators. Though the last decade brought significant economic expansion across the country, recently states have dealt with changing fiscal conditions, and consequently, states are now facing one of their most challenging budgetary situations in years. Most states are required to balance their operating budgets, and since their revenues have been much lower than forecast, state officials have struggled to bring expenditures into line with available resources. A state's need to cut spending or increase revenues can be mitigated if it has accumulated surplus balances in reserve. States accumulated significant reserves during the late 1990s.
However, these reserves have dropped appreciably as states address their fiscal crises. Because of the recent fiscal changes at both federal and state levels, support programs have also undergone cyclical spending changes in recent years. For example, because the amount of the TANF block grant is fixed, as caseloads decline—as they did in all states through the late 1990s—states have additional resources to expand their programs and create reserves. However, as caseloads increase—as they have in some states since 2000—or other factors cause program costs to rise, states bear most of their TANF program’s fiscal risks. States draw on a mixture of federal and state funds to provide low-income families with a wide range of supports, although the specific types of supports offered and the extent to which eligible families are able to receive the supports they seek vary by state and sometimes within states. The supports available to low-income families range from those that address basic needs to those intended to promote economic independence, and include subsidized child care, cash assistance, transportation support services, utility assistance, health services, job retention and advancement services, and tax credits, as well as various other supports. As shown in table 2, state officials responding to our 50- state survey reported using state funds and federal TANF funds for most or all of the supports listed, but they also used other federal funding sources specific to each type of support. In particular, states used Child Care and Development Fund (CCDF) and Social Services Block Grant (SSBG) funds for subsidized child care, Job Access and Reverse Commute (JARC) funds for transportation support services, Low-Income Home Energy Assistance Program (LIHEAP) funds for utility assistance, and WIA funds for both job retention and advancement services and transportation support services. 
State officials also reported that county or local funds were used for transportation support services. Clearly, of the supports listed in the table, transportation support services draw on the largest number of different funding sources, and of the federal funding sources identified, TANF funds appear to be the most flexible, as states are using them to provide several different types of supports in addition to cash assistance. Supports for low-income families are also administered at different levels of government within each state. In most states, officials reported that supports were administered at the state level, although in some states, county or local governments administered supports, as shown in table 3. States offered a wide variety of supports, although not every specific type was offered in every state, according to officials responding to our 50-state survey. (See table 4.) For example, most states subsidized several types of child care, subsidized individuals’ public transportation costs, and offered employment services in at least one location in the state, but somewhat fewer states subsidized child care for sick children, assisted with the purchase of used cars, or offered employment retention bonuses to parents who found and kept jobs. Many of the state officials responding to our survey also indicated that when their states do provide supports, the supports are often not available in all areas of the state, although most officials reported that there were not differences in access to supports in urban and rural areas. In several instances, state officials were not able to provide complete information on the extent to which supports were offered. According to data collected through our survey, although states may offer supports, not all eligible families who apply for supports receive them, as illustrated in figure 2. 
For the most part, state officials who could provide the data reported that a majority of eligible families who applied for supports did receive them, especially subsidized child care and utility assistance. However, it is worth noting that officials in some states reported that less than half of eligible applicants received certain types of transportation support services and job retention and advancement services. For nearly every type of support, an official in at least one state reported that less than half of eligible applicants received that type of support. The most common reasons cited for eligible applicants not receiving supports were an insufficient supply of services, insufficient state or federal funding, and the applicants’ physical or logistical difficulties gaining access to the supports that were offered. Figure 2 also illustrates that several officials responding to our survey did not know the extent to which eligible applicants received some types of supports. The officials reported, most frequently, that the reasons they did not have this information were that services varied broadly by locality and that data were not available or not complete at the state level. Further, figure 2 refers only to the eligible families who apply for supports and does not include families who would be eligible but who do not apply for them. In some cases, whether a family receives support services may depend on whether the family is receiving cash assistance. In the past, receipt of support services sometimes was linked to receipt of cash assistance, and as a result, cash assistance recipients may have been more likely to receive supports than other low-income families. However, as the emphasis of support programs has shifted toward promoting employment and economic self-sufficiency for a broader population, states have targeted some supports to low-income families who are not receiving cash assistance. 
In our 50-state survey, only a limited number of state officials were able to provide information on the extent to which low-income families receiving each type of support were also currently receiving TANF cash assistance. Among those who did provide this information, most reported that transportation support services and job retention and advancement services were received primarily by families also receiving TANF cash assistance, while subsidized child care and utility assistance were received primarily by families not receiving TANF cash assistance. In addition to the supports discussed above, states offer several other supports to low-income families. In particular: TANF cash assistance is provided in all states for eligible low-income families. Short-term cash benefits are provided to low-income families in 39 states, according to our survey. These benefits are provided through TANF diversion programs, state emergency assistance programs, or other programs. TANF diversion programs provide low-income families who are eligible for TANF cash assistance with short-term cash or in-kind benefits, on a case-by-case basis, in lieu of TANF cash assistance. State emergency assistance programs provide similar short-term support outside of TANF. State tax credits for low-income families were offered by almost half of the states in 2002, according to our survey, with the most frequently provided type of state tax credits—child care tax credits—provided by 23 states. In addition, 19 states reported offering a state earned income tax credit, and 7 states reported offering a housing credit. While Medicaid and SCHIP services are offered in nearly all states, 12 states reported in our survey that they offered additional health insurance programs so that low-income families not eligible for Medicaid or SCHIP could obtain health insurance for a reduced fee. 
Some other key supports for low-income families are available nationally, such as food stamps, the federal EITC, and housing assistance. In our site visits, several states mentioned other supports they consider to be important for low-income families, namely, before- and after-school programs and child support enforcement programs. Oklahoma contracts with several different organizations to provide after-school programs that focus on mentoring, teen pregnancy prevention, drug abuse prevention, and the overall goals of promoting child well-being and strengthening families. Several states consider child support to be a significant income support for welfare families. Wisconsin has established a unique program through waivers that allows welfare recipients to receive the entire amount of child support collected on their behalf each month. Though several states mentioned before- and after-school programs and child support efforts as important supports for low-income families, some of the states we visited noted supports that were more distinctive. For example, Oklahoma has gained national prominence because of its efforts to create programs that focus on supporting marriage and family formation through welfare reform. On the whole, the five states we visited structured their supports to serve a broad range of low-income families in a coordinated manner, although the specific structures varied by state and type of support. Officials reported that they structured the eligibility criteria and benefits of many supports in ways that allow them to serve families with different levels of income and employment. 
For example, while the income eligibility criteria for supports like TANF cash assistance typically limit receipt to families with the lowest incomes, the states we visited reported that for other supports, such as subsidized child care and transportation support services, the maximum income eligibility thresholds are often set at higher income levels in order to provide support for a broader range of low-income families, including some with earned income. Families with higher incomes, though, might receive smaller benefit amounts or might be required to pay for part of the cost of a service. State officials in the five states we visited also reported that they have made efforts to deliver supports to low-income families in a coordinated manner, such as by allowing families to access multiple supports through a single caseworker or a single application form. The five states we visited established income eligibility criteria that allow a broad range of families with different levels of income to gain access to supports. Because each state establishes its own maximum income eligibility levels for many supports, such as subsidized child care and utility assistance, the population eligible for each support differs across the states. As shown in table 5, in Oklahoma, families with incomes below 110 percent of the FPL are eligible for utility assistance, while in Wisconsin, families with incomes up to 150 percent of the FPL are eligible for this support. Overall, the five states we visited set the maximum income eligibility levels for many supports at 200 percent of the FPL or higher, as shown in table 5. In fiscal year 2003, 200 percent of the FPL was equivalent to approximately $31,000 for a family of three, which means that families whose annual incomes were less than or equal to $31,000 would be eligible for these supports as long as they met other eligibility criteria, such as having dependent children or not having other means of support. 
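The eligibility arithmetic above can be sketched in a few lines. This is purely illustrative, not any state's actual rule; the function name and percentages are hypothetical, and the FPL dollar figure is the fiscal year 2003 poverty guideline for a family of three, consistent with the report's approximately $31,000 figure for 200 percent of the FPL.

```python
# Illustrative only: a generic income-eligibility check against a threshold
# expressed as a percentage of the federal poverty level (FPL).
FPL_FAMILY_OF_THREE_2003 = 15_260  # annual FPL guideline, dollars

def income_eligible(annual_income, max_pct_of_fpl, fpl=FPL_FAMILY_OF_THREE_2003):
    """Return True if family income is at or below the state's cap."""
    return annual_income <= fpl * max_pct_of_fpl / 100

# 200 percent of the 2003 FPL for a family of three is about $30,500,
# matching the report's "approximately $31,000" figure.
print(income_eligible(28_000, 200))  # True: under a 200-percent-of-FPL cap
print(income_eligible(28_000, 150))  # False: over a 150-percent-of-FPL cap
```

A state like Washington, which graduates its thresholds so families do not lose several supports at once, could be represented simply by assigning each support its own `max_pct_of_fpl` value.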
Setting higher income eligibility thresholds for some supports allows states to serve both families with very low incomes and families who may be working and earning somewhat higher incomes, which can assist families transitioning from welfare to work as well as other working families who have not received welfare. While New York, North Carolina, and Wisconsin set 200 percent of the FPL as the maximum income eligibility level for several supports, in Oklahoma and Washington, income eligibility criteria varied widely by support. These two states set the maximum income eligibility level at 200 percent of the FPL or higher for subsidized child care but set it lower for other supports. Washington officials reported that income eligibility criteria for supports in their state were deliberately graduated to ensure that as families’ incomes rose, they would not lose eligibility for several supports simultaneously. According to officials, this approach attempts to minimize the potential work disincentive associated with losing eligibility for several supports at once, as families with increasing earnings instead lose eligibility for supports gradually. Across the five states we visited, the form of supports for low-income families and the frequency of provision varied by state and support. Supports for low-income families can take several different forms, including cash benefits, vouchers, in-kind benefits, and services. For example, families might receive cash benefits through TANF cash assistance, vouchers to pay for public transportation, wood to heat their homes in the winter, or job-search assistance services. In addition, the frequency of support provision, or how often a family receives a support, varies depending on how the support is structured. Some supports, such as TANF cash assistance, are provided on a monthly basis, while other supports, such as utility assistance and tax credits, are provided on a one-time basis or once annually.
When structuring supports, states also make decisions about the benefit amounts provided to eligible families. In the five states we visited, the average benefit amount provided to support recipients varied by state and support, as shown in figure 3. For example, though in all five of the states we visited the average monthly benefit for subsidized child care was larger than the average monthly benefits for other supports, the benefit value differed across states, with the most significant difference between two states equaling approximately $300. Although average benefits provide some idea of the value of each support to a recipient family, because many supports are structured to provide benefits to a broad range of families with different income levels and family sizes, individual family benefits often differ from the average family benefit. To determine each individual family’s benefits for supports, such as subsidized child care and TANF cash assistance, states often use a sliding scale, which adjusts the benefit amount received based on a range of factors, including family size and income. By using a sliding scale to determine benefit amounts, states are able to serve a broader range of low-income families with varied benefits. For other supports, such as utility assistance, while some states use a sliding scale method to determine each family’s benefit, other states provide each family with a flat grant. For example, North Carolina determines the flat grant for utility assistance recipients by dividing the total funding available each year by the number of eligible applicant families. When structuring benefit amounts, states also make decisions about the structures of payments to service providers and cost-sharing with recipient families.
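The two benefit-setting approaches described above, North Carolina's flat grant and the sliding scale, can be sketched as follows. All dollar amounts, the income disregard, and the 30-percent reduction rate are hypothetical illustrations, not actual program parameters.

```python
# A minimal sketch of the two benefit-setting approaches; figures are hypothetical.

def flat_grant(total_funding, eligible_families):
    """Flat grant: divide the year's available funding evenly among all
    eligible applicant families (the North Carolina utility-assistance method)."""
    return total_funding / eligible_families

def sliding_scale_benefit(max_benefit, annual_income, family_size):
    """Hypothetical sliding scale: the monthly benefit shrinks as countable
    income rises, with a per-member income disregard."""
    per_person_disregard = 2_000          # hypothetical disregard per member
    countable = max(annual_income - per_person_disregard * family_size, 0)
    annual_reduction = 0.3 * countable    # hypothetical 30-percent reduction rate
    return max(max_benefit - annual_reduction / 12, 0)

print(flat_grant(1_000_000, 4_000))                  # 250.0 per family
print(sliding_scale_benefit(400, 12_000, 3))         # 250.0 per month
print(sliding_scale_benefit(400, 100_000, 3))        # 0: over the income range
```

The sliding scale lets one program serve a broad income range with graduated benefits, while the flat grant trades that targeting for administrative simplicity.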
Though families receive benefits directly from the state for some supports, such as TANF cash assistance, states pay benefits through vouchers or directly to service providers for several other supports, such as subsidized child care and utility assistance. These provider payments consist of the family’s calculated benefit amount, and payments are also typically based on the rate charged by the provider for the service. For example, federal regulations direct states to pay market rates to child care providers receiving child care subsidies, but each state is responsible for completing its own market rate survey and determining what rates will be paid to each provider. In North Carolina and Oklahoma, child care centers are assigned “star” ratings based on quality and other factors, and the state sets provider payment rates based on type of provider, market rates, and star levels, such that higher-quality providers receive larger payments relative to other providers. Concerning cost sharing, state policymakers sometimes require families to pay part of the support cost, or a copayment, for services, as shown in figure 4. In the five states we visited, states typically pay a portion of each family’s cost for subsidized child care and SCHIP services, but some or all recipient families must also pay copayments for these services. By having either some or all recipient families pay copayments, the state is likely able to serve a broader range of families with available funds. For example, Wisconsin’s BadgerCare program, which provides health insurance for families whose incomes make them ineligible for Medicaid, requires recipients with incomes over 150 percent of the FPL to pay monthly premiums as well as copayments for certain BadgerCare services. Each of the five states we visited made efforts to deliver supports in a coordinated manner. 
In each of these states, several supports for low-income families were colocated at local offices, thereby providing families with a single access point for a variety of supports. Across the five states, supports that were typically colocated in local offices included TANF cash assistance and TANF diversion, subsidized child care, transportation support services, food stamps, Medicaid, and SCHIP. For example, in North Carolina, each local social service office includes staff members who assist with applications and determine eligibility for food stamps, TANF cash assistance, TANF diversion, subsidized child care, Medicaid, SCHIP, utility assistance, transportation support services, and emergency assistance. This colocation of supports at local offices is similar to our previously reported findings on the colocation of support services, such as food stamps, TANF cash assistance, and Medicaid, at WIA one-stop centers, which provide employment and training assistance. Though this trend toward increased colocation of supports seems to be taking place in many states in a variety of local offices, officials in several of the states we visited reported that housing assistance often is not colocated with other supports for low-income families, in some cases because the supports are administered by separate state or local agencies. When supports are colocated at a single office, caseworkers are also likely to help coordinate the provision of supports for low-income families. In each of the five states we visited, state officials reported that the delivery of supports was sometimes coordinated among multiple caseworkers or directly coordinated by a single caseworker who provides families with case management services, assistance in identifying support needs, and eligibility determination.
States cited several examples of coordinated case management, including the following: In Washington’s local offices, a single caseworker determines an applicant family’s eligibility for TANF cash assistance, food stamps, General Assistance, emergency assistance, and health insurance programs, such as Medicaid and SCHIP. North Carolina and Washington colocated substance abuse caseworkers in the local offices that provide TANF cash assistance in order to improve caseworkers’ abilities to coordinate the delivery of these services for families who need services from both programs. In contrast to these efforts to improve coordination between substance abuse caseworkers and staff delivering other supports for low-income families, Washington officials noted that less coordination existed between mental health staff and staff delivering other low-income supports. Wisconsin provides a case management program that assists low-income families not receiving TANF cash assistance with the coordination of supports. Wisconsin implemented this case management program in order to improve access and delivery of supports to low-income families who have left TANF cash assistance or are not receiving TANF cash assistance, as many studies have reported that these families are less likely to receive the supports for which they are eligible than are families receiving TANF cash assistance. To provide coordinated case management and streamlined supports, states typically combine funding streams from several different programs, which can prove challenging. For example, in 2002, Oklahoma combined funding streams from several different programs when the state adopted a “one family, one caseworker” philosophy for low-income families receiving TANF cash assistance, food stamps, and subsidized child care. 
Oklahoma officials reported that although they initially faced the challenge of determining how to allocate caseworker costs to each separate support program, they addressed it by surveying caseworkers engaged in the provision of these supports at several points in time to determine the amount of time they spent delivering each support. In three of the five states we visited, officials reported that integrated applications, which allow a family to apply for several supports at once, and integrated computer systems, which store information on recipients of several different supports, have been implemented to help coordinate the delivery of supports. In particular, families in Oklahoma apply for TANF cash assistance, subsidized child care, Medicaid, and food stamps through a single, comprehensive application. Further, though some state officials noted that the development of computer systems that simultaneously comply with the rules of several federal programs continued to be a challenge, Washington officials reported that they designed both an integrated application and a single computer system to coordinate the delivery of several supports for low-income families and to gather data on support recipients. In addition, Wisconsin has implemented a computer system that allows simultaneous application and eligibility determination for many supports for low-income families, excluding housing assistance and utility assistance. Concerning utility assistance, Wisconsin officials noted that the use of a separate computer system and application somewhat hinders its coordination with other supports, but the simplicity of an application that gathers only the information related to a family’s eligibility for utility assistance may also improve families’ access to this support.
In the states we visited, the delivery of some supports is also coordinated through categorical eligibility rules, which make recipients of certain supports automatically eligible to receive other supports. For example, in North Carolina, families who receive food stamps are automatically qualified to receive utility assistance and federal telephone assistance. Further, in Washington, families who receive any of the support programs administered by the Washington Department of Social and Health Services are automatically eligible to receive state-funded telephone assistance. This direct link between receipt of two or more separate support programs can facilitate low-income families’ access to these supports. Although efforts to deliver supports for low-income families in a coordinated manner were under way statewide in the five states we visited, because of local variation in offices and staff, the level of support coordination might differ within the state. For example, North Carolina officials reported that variation exists in how counties organize and coordinate the provision of food stamps with other support services. In all counties, food stamps are colocated in the same local offices with other supports. However, in some counties, separate staff provide each type of support, while in other counties individual staff provide both food stamps and other supports. Also, though efforts to coordinate the delivery of some supports were apparent in all five of the states we visited, state officials also reported instances where support coordination was not occurring or had been reduced and cited challenges to support coordination, such as the complexities of combining multiple funding streams and satisfying the various requirements of separate federal programs. 
Over the last several years, states have made substantial changes in their supports for low-income families, with most of these changes expanding the provision and receipt of supports, but state officials expressed uncertainty about their continued ability to provide the current level of support. Though many federal policy changes affecting support programs have occurred in the last decade, welfare reform played a central role in changes to a broad range of supports for low-income families. States made significant changes to the structure of their welfare programs in order to focus their new TANF cash assistance programs on the goals of employment and economic independence. To further this effort, states began spending increased amounts of funds on work supports for a broad range of low-income families. Since 2000, states have implemented many programmatic changes that affect the availability of supports for low-income families. While, in general, the availability of supports has increased during this time period, according to officials, as states have responded to recent fiscal constraints, they have made additional changes that limit the provision of some supports to low-income families. Further, as states plan for the future of supports in the current fiscal environment, officials reported that they are considering changes that would likely limit the availability and provision of supports for low-income families. Since the enactment of PRWORA, welfare caseloads have fallen dramatically, and TANF spending on support services for low-income families has increased. Under TANF, states have the flexibility to provide both income maintenance and work support services that help low-income families find and maintain employment. In addition, as allowed under the TANF block grant structure, states are also able to set aside or reserve TANF funds for use in later years.
Figure 5 shows that as states implemented their TANF programs during the strong economy of the late 1990s, the number of TANF cash assistance recipients decreased significantly, while the annual amount of federal funds provided to the states for TANF remained constant, as provided for under the fixed amount of the block grant. This resulted in a significant amount of funds available to states for supports and other services or saving for future use. As TANF cash assistance caseloads fell, states shifted their spending priorities from cash assistance to support services. As illustrated in figure 6, states decreased the share of TANF expenditures for cash assistance between fiscal years 1998 and 2002 and increased the share spent on services. Specifically, spending on cash assistance decreased from 58 percent of TANF expenditures in fiscal year 1998 to 33 percent in fiscal year 2002. Over the same time period, the proportion of TANF expenditures on child care increased from 9 percent to 19 percent. The proportion of TANF expenditures for workforce development also increased, from 7 percent in 1998 to 10 percent in 2002. In addition to this increased emphasis on spending on supports, states reported leaving some TANF funds unspent, although the amount varied by state. Consistent with figure 6, several state officials reported that their support program expansions in the last several years were often funded with TANF dollars that the states had accumulated as a result of falling TANF cash assistance caseloads. However, some state officials responding to our survey indicated a reversal in this spending trend, which may be due in part to increasing cash assistance caseloads. Approximately half of the state officials responding to our survey reported that since 2000 the number of TANF cash assistance recipients had increased (23 states), while about half of the officials reported that the number of recipients had decreased (24 states). 
Officials from two states reported no change in the number of recipients. Officials from 9 states with increased cash assistance caseloads reported that between 2000 and the time of survey completion in spring 2003, funding of other supports was reduced in order to redirect funds to TANF cash assistance. Among these 9 states, TANF funding was most commonly reduced for job training, basic education for adults, and transportation, while funds were less often redirected from child care, job search, and case management, as table 6 displays. During our site visits, several officials explained that they no longer have sufficient TANF funds set aside to continue to fund support programs at current levels, which is consistent with TANF spending trends at the national level. As shown in figure 7, since 2001, states have spent more TANF funds than they received in their annual awards. To support this level of spending, states are drawing more heavily upon their TANF balances. Many states reported in our 50-state survey that the availability of supports and the number of families receiving supports have increased since 2000. Figure 8 shows that in most states the number of families receiving assistance with child care, transportation, utilities, and job retention and advancement increased. While the number of recipients can increase as a result of changes in the needs of the population, it can also increase because of changes in state policies that affect the availability of supports. States can expand or limit the availability of supports by increasing or decreasing the number of benefits and services available or the types of services provided. 
Most states reported that the number or types of child care subsidies, transportation support services, and job retention and advancement services stayed the same or increased between state fiscal year 2000 and spring 2003, an outcome that we have characterized as causing the availability of these supports to stay the same or increase, as shown in figure 9. Few states decreased the number or type of services provided, with the notable exception of Medicaid services, which were decreased in 16 states. Few changes were reported in the provision of state tax credits. According to officials responding to our 50-state survey, none of their state earned income tax credits, child care tax credits, or housing credits were eliminated, reduced, or suspended between state tax years 2000 and 2002. States can affect the availability of supports indirectly by changing low- income families’ awareness of supports through outreach efforts, such as billboards, fliers, and radio announcements. By increasing or decreasing outreach efforts, states may affect low-income families’ awareness of supports and the number of low-income families applying. States’ outreach efforts for most supports increased or stayed the same between state fiscal year 2000 and spring 2003, an outcome that we have characterized as causing availability to increase or stay the same, as shown in figure 10. Outreach efforts for Medicaid and SCHIP, however, decreased in 11 and 15 states, respectively. Officials in one of the states we visited explained that they had cut back on outreach efforts for their Medicaid and SCHIP programs because of budget constraints and a decrease in the number of doctors who would accept patients covered by Medicaid or SCHIP. Since 2000, states generally have modified income eligibility criteria in ways that expanded the availability of support services. However, some states reported changes to income eligibility criteria in recent years that limited the availability of some supports. 
(See fig. 11.) Changes to eligibility criteria often affect the number of families receiving supports, as such changes affect the size of the eligible population. In our site visit states, officials often noted that recent changes in federal support policies, such as those for Medicaid, SCHIP, and food stamps, have allowed states to expand their income eligibility criteria to cover a broader range of low-income families with these supports. Further, as shown in figure 11, most states responding to our 50-state survey reported that as a result of changes in income eligibility criteria between state fiscal year 2000 and spring 2003, the eligible populations for utility assistance, Medicaid, and SCHIP increased. For other supports, such as subsidized child care, transportation support services, and job retention and advancement services, survey responses were mixed, and though several states reported that the eligible population increased because of changes in eligibility criteria between state fiscal year 2000 and spring 2003, a substantial number of states reported that changes in eligibility criteria caused the eligible population to stay the same or decrease, as shown in figure 11. These mixed responses concerning changes in subsidized child care income eligibility criteria are similar to those we previously reported in May 2003. In that study, we surveyed subsidized child care officials directly about changes to income eligibility criteria between state fiscal year 2001 and the spring of 2003, and a majority of respondents reporting changes noted that these resulted in narrowed coverage. See U.S. General Accounting Office, Child Care: Recent State Policy Changes Affecting the Availability of Assistance for Low-Income Families, GAO-03-588 (Washington, D.C.: May 5, 2003), p. 26. States have also made changes to provider payments and family copayments, as shown in figure 12.
A majority of the states responding to our 50-state survey reported that provider payments for SCHIP, Medicaid, job retention and advancement services, utility assistance, and subsidized child care increased between state fiscal year 2000 and spring 2003, though some states reported that provider payments for many of these supports decreased during the same time period. Regarding changes to copayments, most states responding to our survey reported that families’ copayments for SCHIP, Medicaid, and subsidized child care stayed the same between state fiscal year 2000 and spring 2003, while some states reported that families’ copayments increased during that time. We have classified increases in copayments as decreasing the availability of supports because as families’ copayments increase, fewer families may be able to afford to participate in the support program. (See fig. 13.) Both North Carolina and Washington officials reported in our site visits that since state fiscal year 2001, they have increased families’ copayments for subsidized child care. These findings are similar to those we previously reported that showed several states increased families’ copayments for subsidized child care between state fiscal year 2001 and the spring of 2003, resulting in decreased availability of subsidized child care. State officials also reported a few changes to the delivery of supports since 2000 in both written responses to our 50-state survey and our five site visits, and of those who reported changes to delivery, most of the changes expanded the provision of supports to low-income families. For example, Washington officials reported that the number of family violence counselors colocated in local offices with other supports for low-income families increased between state fiscal year 2000 and the spring of 2003.
Similarly, South Carolina officials responding to our survey noted that they have expanded utility assistance delivery in recent years by adding more offices and staff and by colocating staff in WIA one-stop centers. Concerning transportation support services for low-income families, officials from both North Carolina and Georgia reported that they have made efforts to expand and coordinate services in recent years. In contrast, North Carolina officials also reported during our site visit that the number of substance abuse caseworkers colocated in local offices with other supports for low-income families was reduced in 2002 because of budget cuts. During our site visits, officials expressed concern that the progress they have made in recent years to promote employment and economic independence for low-income families may erode, given the fiscal crises that states currently face. Officials in several of the states we visited explained that their support program expansions in the last several years, which were funded with TANF dollars that the states had accumulated because of falling TANF cash assistance caseloads, may be at risk. These states reported that without sufficient TANF funds to continue these efforts, some support programs face elimination. Oklahoma officials explained that their budget cuts are due not only to declining TANF reserves, but also to decreased state revenues. Although Oklahoma still has TANF reserves, officials there stated that these would probably be depleted soon and they, too, might need to cut back on services that had been expanded. Many states added written comments to our 50-state survey that expressed concern about the future of supports. Half of the states surveyed reported that the current economic, budget, or funding situations in their states might limit the provision of supports in the near future.
In addition, a small number of states reported that decisions had already been made to implement changes in supports between the summer of 2003 and the end of their state’s fiscal year 2004. These changes include reducing the number or type of services offered, changing the eligibility criteria to limit the number of families eligible for supports, decreasing payment amounts made to service providers, increasing the copayment amounts that families pay, and decreasing outreach efforts. Planned changes were particularly prevalent for Medicaid and subsidized child care programs. Overall, supports for low-income families have undergone many changes over the past several years, and they will likely continue to evolve as federal and state governments further develop policies and respond to cyclical fiscal conditions and changes in the demand for services. With a focus on promoting employment and economic independence, states have adjusted support programs to provide not only services to families receiving TANF cash assistance but also services to other low-income families not receiving TANF cash assistance. States have used TANF funds to experiment with new support programs and have recognized that supports like subsidized child care are an increasingly important support for low-income working families. Most recently, states have faced fiscal crises and tough choices about reducing their supports for low-income families. The emphasis on moving people into work, though, remains a priority. As states continue to adjust supports for low-income families in efforts to move forward with the reforms of the last decade and improve efficiency, access, and coordination, they will also continue to face the pressures of competing priorities and fiscal constraints. We provided a draft of this report to the Department of Health and Human Services (HHS) for the department’s review and comment. HHS agreed with the findings and conclusions of the report. 
HHS also noted that to address the fiscal uncertainty that some states face, reauthorization of the TANF and child care programs by the Congress will enable states to know with certainty the level of federal TANF and child care resources that will be available to support low-income families over the next 5 years. HHS’s written comments appear in appendix IV. HHS and an expert on supports for low-income families also provided technical comments, which we have incorporated where appropriate. We are sending copies of this report to the Secretary of HHS, relevant congressional committees, and others who are interested. Copies will be made available to others upon request, and this report will also be available on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me on (202) 512-7215. Additional GAO contacts and acknowledgments are listed in appendix V. We designed our study to provide information on (1) the extent to which states provide supports for low-income families, (2) how states have structured programs to support low-income families, and (3) the changes states have made to supports for low-income families in recent years. To obtain information about these objectives, we conducted a mail survey of the social services agency directors in each state and the District of Columbia, conducted in-person interviews with state officials in five states, and reviewed information available from prior GAO work and relevant federal agencies. We conducted our work between December 2002 and November 2003, in accordance with generally accepted government auditing standards. To obtain information on the extent to which states provide supports for low-income families and how this has changed in the last few years, we conducted a survey of support programs in each state and the District of Columbia. 
We pretested our survey instrument with state social service directors in four states: Colorado, Delaware, Maryland, and Virginia. Surveys were mailed to state social service directors in April 2003, and follow-up phone calls were made to states whose surveys were not received by May 5, 2003. The survey was addressed to the state social service agency directors and instructed them to have the staff members most knowledgeable about their states’ support programs complete the survey. We received responses from the District of Columbia and all states except Michigan, providing a 98 percent response rate. We did not independently verify the information obtained through the survey. Data from the surveys were double-keyed to ensure data entry accuracy, and the information was analyzed using statistical software. The survey included questions about the provision and receipt of the states’ child care subsidies, transportation support services, utility assistance, job retention and advancement services, health assistance (including public health insurance, domestic violence programs, substance abuse treatment programs, and mental health treatment programs), and income assistance (including state tax credits, TANF cash assistance, and TANF diversion programs). The survey also included questions on recent changes in the availability and structure of these support programs. Respondents who frequently answered “don’t know” were prompted to answer questions regarding their reasons for this response. The officials reported most frequently that the reasons they did not have this information were that services varied broadly by locality and that data were not available or not complete at the state level. To obtain information about each assignment objective and, in particular, to gain a deeper understanding of how selected states have structured programs to support low-income families, we interviewed state officials in New York, North Carolina, Oklahoma, Washington, and Wisconsin. 
In selecting these states for our in-depth fieldwork, we included those that appeared, based on their federal and state TANF expenditures, to provide support services, and we also included states that, when viewed as a group, provide variation across characteristics such as state median income, poverty rate, population, and geographic location. The interviews were administered using a semistructured interview guide that included questions about the structure and receipt of states’ child care subsidies, transportation support services, utility assistance, job retention and advancement services, health assistance (including public health insurance, domestic violence programs, substance abuse treatment programs, and mental health treatment programs), and income assistance (including state tax credits, TANF cash assistance, and TANF diversion programs). The interview guide also included questions about efforts to coordinate supports and recent changes in the availability and structure of support programs. We also encouraged state officials to share information about any additional programs that they believed were important for low-income families in their states. During our site visits we spoke with program administrators or program analysts for each type of support program as well as budget and data analysts. For example, we spoke not only with social services officials, but in most states we also spoke with transportation officials, tax officials, Medicaid officials, and so on, if these supports were provided by separate state agencies. To ensure that our understanding of the availability and characteristics of supports for low-income families was accurate and objective, following our site visits we conducted phone interviews with advocacy organizations that either included low-income families in their membership or that work directly with low-income families in promoting issues related to supports. 
Some limitations exist in any methodology that gathers information about programs undergoing change, such as those included in this review. Results presented in our report represent only the conditions present in the states we visited at the time of our site visits, between December 2002 and April 2003. Although, as we have presented, state officials reported on their expectations of program changes in the near future, we cannot comment on any actual changes that may have occurred after our fieldwork was completed. Furthermore, we cannot generalize our findings beyond the five states we visited, but we have used these data for illustrative purposes. To obtain information about policies, participation rates, and other characteristics of the support programs that are administered largely at the federal level, such as food stamps, rental housing assistance, and the federal EITC, we reviewed reports and information readily available from prior GAO work and relevant federal agencies. To determine the completeness and accuracy of data obtained from HHS and Treasury, we reviewed related documentation and conducted tests of the data for obvious omissions and errors. In addition, we interviewed knowledgeable agency officials regarding the HHS data. We determined that the data were sufficiently reliable for use in this report. Tables 7 through 11 display individual state responses to survey questions regarding the extent to which eligible low-income families who apply for supports actually receive supports. These data are summarized graphically in figure 2 in the report. Tables 12 through 17 provide information on changes between state fiscal year 2000 and spring 2003 that states reported in the number of support recipients and in the number or type of services provided, state outreach efforts, eligibility criteria, provider payments, and families’ copayments. The data in these tables are summarized in figures 8 through 13 in the report. 
Kathy Larin, Angela Miles, Cathy Pardee, and Rachel Weber made significant contributions to this report. In addition, Alison Martin and Elsie Picyk provided technical assistance in the development and implementation of the 50-state survey, Patrick Dibattista provided writing assistance, and Marc Molino and Avy Ashery assisted with the graphics. Welfare Reform: Information on TANF Balances. GAO-03-1094. Washington, D.C.: September 8, 2003. Welfare Reform: Information on Changing Labor Market and State Fiscal Conditions. GAO-03-977. Washington, D.C.: July 15, 2003. Transportation-Disadvantaged Populations: Some Coordination Efforts among Programs Providing Transportation Services, but Obstacles Persist. GAO-03-697. Washington, D.C.: June 30, 2003. Child Care: Recent State Policy Changes Affecting the Availability of Assistance for Low-Income Families. GAO-03-588. Washington, D.C.: May 5, 2003. Child Care: States Exercise Flexibility in Setting Reimbursement Rates and Providing Access for Low-Income Children. GAO-02-894. Washington, D.C.: September 18, 2002. Workforce Investment Act: States and Localities Increasingly Coordinate Services for TANF Clients, but Better Information Needed on Effective Approaches. GAO-02-696. Washington, D.C.: July 3, 2002. Welfare Reform: States Provide TANF-Funded Work Support Services to Many Low-Income Families Who Do Not Receive Cash Assistance. GAO-02-615T. Washington, D.C.: April 10, 2002. Welfare Reform: States Provide TANF-Funded Services to Many Low- Income Families Who Do Not Receive Cash Assistance. GAO-02-564. Washington, D.C.: April 5, 2002. Human Services Integration: Results of a GAO Cosponsored Conference on Modernizing Information Systems. GAO-02-121. Washington, D.C.: January 31, 2002. Means-Tested Programs: Determining Financial Eligibility Is Cumbersome and Can Be Simplified. GAO-02-58. Washington, D.C.: November 2, 2001. Welfare Reform: Challenges in Maintaining a Federal-State Fiscal Partnership. GAO-01-828. 
Washington, D.C.: August 10, 2001. Welfare Reform: Improving State Automated Systems Requires Coordinated Federal Effort. HEHS-00-48. Washington, D.C.: April 27, 2000.
Over the last decade, the Congress has made significant changes in numerous federal programs that support low-income families, including changes that have shifted program emphases from providing cash assistance to providing services that promote employment and economic independence. As a result of some of the federal policy changes, the support system is more decentralized than before. This heightens the importance of understanding policy choices and practices at the state and local levels as well as those at the federal level. To provide the Congress with information on this system, GAO agreed to address the following questions: (1) To what extent do states provide supports for low-income families? (2) How have states structured programs to support low-income families? (3) What changes have states made to supports for low-income families in recent years? Our review focused primarily on supports for which states make many of the key decisions about eligibility, benefit amounts, and service provision. To obtain this information, GAO conducted a mail survey of the social service directors in the 50 states and the District of Columbia; conducted site visits in New York, North Carolina, Oklahoma, Washington, and Wisconsin; and reviewed federal reports and other relevant literature. States use an array of federal and state funds to provide a wide range of benefits and services that can support the work efforts of low-income families, although the types of supports and coverage of the eligible population vary among the states and sometimes within states. For instance, most states subsidize several types of child care, subsidize use of public transportation, and offer employment services in at least one location in the state, but somewhat fewer states subsidize child care for sick children, assist with the purchase of used cars, or offer employment retention bonuses to parents who find and maintain jobs. 
The five states we visited structured the eligibility criteria and benefits of many supports in ways that allow them to serve a broad range of low-income families, including families on and off welfare and families who are working and those who are not currently working. The specific support structures vary, however, by state and type of support. These differences create a complex national picture of supports that provide an assortment of benefits and services to a range of populations. Over the last several years, many states have expanded the availability of supports that promote employment and economic independence for low-income families. State officials reported that both the number of support services available and the number of recipients have increased. However, state officials express uncertainty about their continued ability to provide this level of support. As states plan for the future of supports in the current state fiscal environment, officials reported that they are considering changes that could limit the availability and provision of supports for low-income families. Overall, it is probable that the support system will continue to change as the federal and state governments further amend policies and respond to changes in the demand for services and cyclical fiscal conditions.
The total long-term funding for helping the Gulf Coast recover from the 2005 hurricanes hinges on numerous factors including policy choices made at all levels of government, knowledge of spending across the federal government, and the multiple decisions required to transform the region. To understand the long-term federal financial implications of Gulf Coast rebuilding it is helpful to view potential federal assistance within the context of overall estimates of the damages incurred by the region. Although there are no definitive or authoritative estimates of the amount of federal funds that could be invested to rebuild the Gulf Coast, various estimates of aspects of rebuilding offer a sense of the long-term financial implications. For example, early damage estimates from the Congressional Budget Office (CBO) put capital losses from Hurricanes Katrina and Rita at a range of $70 billion to $130 billion while another estimate put losses solely from Hurricane Katrina—including capital losses—at more than $150 billion. Further, the state of Louisiana has estimated that the economic effect on its state alone could reach $200 billion. The exact costs of damages from the Gulf Coast hurricanes may never be known, but will likely far surpass those from the three other costliest disasters in recent history—Hurricane Andrew in 1992, the 1994 Northridge earthquake, and the September 2001 terrorist attacks. These estimates raise important questions regarding how much additional assistance may be needed to continue to help the Gulf Coast rebuild, and who should be responsible for providing the related resources. To respond to the Gulf Coast devastation, the federal government has already committed a historically high level of resources—more than $116 billion—through an array of grants, loan subsidies, and tax relief and incentives. The bulk of this assistance was provided between September 2005 and May 2007 through five emergency supplemental appropriations. 
A substantial portion of this assistance was directed to emergency assistance and meeting short-term needs arising from the hurricanes, such as relocation assistance, emergency housing, immediate levee repair, and debris removal efforts. The Brookings Institution has estimated that approximately $35 billion of the federal resources provided supports longer-term rebuilding efforts. The federal funding I have mentioned presents an informative, but likely incomplete picture of the federal government’s total financial investments to date. Tracking total funds provided for federal Gulf Coast rebuilding efforts requires knowledge of a host of programs administered by multiple federal agencies. We previously reported that the federal government does not have a governmentwide framework or mechanism in place to collect and consolidate information from the individual federal agencies that received appropriations in emergency supplementals for hurricane relief and recovery efforts or to report on this information. It is important to provide transparency by collecting and publishing this information so that hurricane victims, affected states, and American taxpayers know how these funds are being spent. Until such a system is in place across the federal government, a complete picture of federal funding streams and their integration across agencies will remain lacking. Demands for additional federal resources to rebuild the Gulf Coast are likely to continue, despite the substantial federal funding provided to date. The bulk of federal rebuilding assistance provided to the Gulf Coast states funds two key programs—FEMA’s Public Assistance (PA) program and HUD’s Community Development Block Grant (CDBG) program. These two programs follow different funding models. PA provides funding for restoration of the region’s infrastructure on a project-by-project basis involving an assessment of specific proposals to determine eligibility. 
In contrast, CDBG affords broad discretion and flexibility to states and localities for restoration of the region’s livable housing. In addition to funding PA and CDBG, the federal government’s recovery and rebuilding assistance also includes payouts from the National Flood Insurance Program (NFIP) as well as funds for levee restoration and repair, coastal wetlands and barrier islands restoration, and benefits provided through Gulf Opportunity Zone (GO Zone) tax expenditures. The PA Grant program provides assistance to state and local governments and eligible nonprofit organizations on a project-by-project basis for emergency work (e.g., removal of debris and emergency protective measures) and permanent work (e.g., repairing roads, reconstructing buildings, and reestablishing utilities). After the President declares a disaster, a state becomes eligible for federal PA funds through FEMA’s Disaster Relief Fund. Officials at the local, state, and federal levels are involved in the PA process in a variety of ways. The grant applicant, such as a local government or nonprofit organization, works with state and FEMA officials to develop a scope of work and cost estimate for each project that is documented in individual project worksheets. In addition to documenting scope of work and cost considerations, each project worksheet is reviewed by FEMA and the state to determine whether the applicant and type of facility are eligible for funding. Once approved, funds are obligated, that is, made available, to the state. PA generally operates on a reimbursement basis. Reimbursements for small projects (up to $59,700) are made based on the project’s estimated costs, while large projects (more than $59,700) are reimbursed based upon actual eligible costs when they are incurred. 
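The reimbursement rule described above can be sketched in a few lines of Python. The $59,700 threshold and the two reimbursement bases are from the report; the function name and string labels are illustrative.

```python
# Sketch of FEMA's PA reimbursement rule as described in the report:
# small projects are reimbursed on estimated costs, large projects on
# actual eligible costs as they are incurred.
SMALL_PROJECT_THRESHOLD = 59_700  # dollars, per the report


def pa_reimbursement_basis(project_cost: float) -> str:
    """Return the basis on which a PA project is reimbursed (illustrative)."""
    if project_cost <= SMALL_PROJECT_THRESHOLD:
        return "estimated costs"       # paid on the project worksheet estimate
    return "actual eligible costs"     # paid as eligible costs are incurred
```

For example, a $50,000 debris-removal project would be reimbursed on its worksheet estimate, while a $2 million school reconstruction would be reimbursed only as eligible costs are actually incurred.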
As of the middle of July 2007, FEMA had approved a total of 67,253 project worksheets for emergency and permanent work, making available about $8.2 billion in PA grants to the states of Louisiana, Mississippi, Texas, and Alabama. A smaller portion of PA program funds is going toward longer-term rebuilding activities than toward emergency work. Of the approximately $8.2 billion made available to the Gulf Coast states overall, about $3.4 billion (41 percent) is for permanent work such as repairing and rebuilding schools and hospitals and reestablishing sewer and water systems, while about $4.6 billion (56 percent) is for emergency response work such as clearing roads for access and sandbagging low-lying areas. The remaining amount of PA funds, about $0.2 billion (3 percent), is for administrative costs. (See fig. 1.) Localities have received only a portion of the funds made available by FEMA to the states for permanent rebuilding, since many projects have not yet been completed. Specifically, in Louisiana and Mississippi, 26 and 22 percent of obligated funds, respectively, have been paid by the state to applicants for these projects. The total cost of PA funding for the Gulf Coast hurricanes will likely exceed the approximately $8.2 billion already made available to the states for two reasons: (1) the funds do not reflect all current and future projects, and (2) the cost of some of these projects will likely be higher than FEMA’s original estimates. According to FEMA, as of the middle of July 2007, an additional 1,916 project worksheets were in process (these projects are in addition to the 67,253 approved project worksheets mentioned above). FEMA expects that another 2,730 project worksheets will be written. FEMA expects these worksheets to increase the total cost by about $2.1 billion, resulting in a total expected PA cost of about $10.3 billion. 
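The PA dollar figures above can be tied together with simple arithmetic. The sketch below uses the report's rounded amounts (in billions of dollars), so the computed percentage shares differ slightly from the report's rounded percentages.

```python
# PA obligations reported as of mid-July 2007 (billions of dollars, rounded).
pa_obligations = {
    "permanent work": 3.4,   # repairing schools and hospitals, sewer and water systems
    "emergency work": 4.6,   # clearing roads, sandbagging low-lying areas
    "administrative": 0.2,
}

total = sum(pa_obligations.values())                  # about $8.2 billion
shares = {k: 100 * v / total for k, v in pa_obligations.items()}

# Worksheets still in process or yet to be written are expected to add
# about $2.1 billion, for a total expected PA cost of about $10.3 billion.
expected_total = total + 2.1
```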
Some state and local officials have also expressed concerns about unrealistically low cost estimates contained in project worksheets, which could lead to even higher than anticipated costs to the federal government. A senior official within the Louisiana Governor’s Office of Homeland Security and Emergency Preparedness recently testified that some of the projects were underestimated by a factor of 4 or 5 compared with the actual cost. For example, the lowest bids on 11 project worksheets for repairing or rebuilding state-owned facilities, such as universities and hospitals, totaled $5.5 million, while FEMA approved $1.9 million for these projects. The extent to which new project worksheets and actual costs exceeding estimated costs will result in demands for additional federal funds remains unknown. In addition, PA costs may increase until a disaster is closed, which can take many years in the case of a catastrophic disaster. For instance, PA costs from the Northridge earthquake that hit California in January 1994 have not been closed out more than 13 years after the event. Our ongoing work on the PA program will provide insights into efforts to complete infrastructure projects, the actual costs of completed projects, and the use of federal funds to complete PA projects. HUD’s CDBG program provides funding for neighborhood revitalization and housing rehabilitation activities, affording states broad discretion and flexibility in deciding how to allocate these funds and for what purposes. Congress has provided even greater flexibility when allocating additional CDBG funds to affected communities and states to help them recover from presidentially-declared disasters, such as the Gulf Coast hurricanes. To date, the affected Gulf Coast states have received $16.7 billion in CDBG funding from supplemental appropriations—making CDBG so far the largest federal source of long-term Gulf Coast rebuilding funding. 
As shown in figure 2, Louisiana and Mississippi were allocated the largest shares of the CDBG appropriations, with $10.4 billion allocated to Louisiana and another $5.5 billion to Mississippi. Florida, Alabama, and Texas received the remaining share of CDBG funds. To receive CDBG funds for Gulf Coast rebuilding, HUD required that each state submit an action plan describing how the funds would be used, including how the funds would address long-term “recovery and restoration of infrastructure.” Accordingly, the states had substantial flexibility in establishing funding levels and designing programs to achieve their goals. As shown in figure 3, Mississippi set aside $3.8 billion to address housing priorities within the state, while Louisiana dedicated $8 billion for its housing needs. Each state also directed the majority of its housing allocations to owner-occupied homes and designed a homeowner assistance program to address the particular conditions in its state. As discussed below, each state used different assumptions in designing its programs, which in turn affects the financial implications for each state. Using $8.0 billion in CDBG funding, the Louisiana Recovery Authority (LRA) developed a housing assistance program called the Road Home to restore the housing infrastructure in the state. As shown in figure 4, Louisiana set aside about $6.3 billion of these funds to develop the homeowner assistance component of the program and nearly $1.7 billion for rental, low-income housing, and other housing-related projects. Louisiana anticipated that FEMA would provide the homeowner assistance component with another $1.2 billion in grant assistance. Louisiana based these funding amounts on estimates of need within the state. Accordingly, Louisiana estimated that $7.5 billion would be needed to assist 114,532 homeowners with major or severe damage. Louisiana also estimated these funds would provide an average grant award of $60,109 per homeowner. 
The LRA launched the Road Home homeowner assistance program in August 2006. Under the program, homeowners who decide to stay in Louisiana and rebuild are eligible for the full amount of grant assistance— up to $150,000. Aside from the elderly, residents who choose to sell their homes and leave the state will have their grant awards reduced by 40 percent, while residents who did not have insurance at the time of the hurricanes will have their grant awards reduced by 30 percent. To receive compensation, homeowners must comply with applicable code and zoning requirements and FEMA advisory base flood elevations when rebuilding and agree to use their home as a primary residence at some point during a 3-year period following closing. Further, the amount of compensation that homeowners can receive depends on the value of their homes before the storms and the amount of flood or wind damage that was not covered by insurance or other forms of assistance. As of July 16, 2007, the Road Home program had received 158,489 applications and had held 36,655 closings with an average award amount of $74,216. With the number of applications exceeding initial estimates and average award amounts higher than expected, recent concerns have been raised about a potential funding shortfall and the Road Home program’s ability to achieve its objective of compensating all eligible homeowners. Concerns over the potential shortfall have led to questions about the Road Home program’s policy to pay for uninsured wind damage instead of limiting compensation to flood damage. In recent congressional hearings, the Executive Director of the LRA testified that the Road Home program will require additional funds to compensate all eligible homeowners, citing a higher than projected number of homeowners applying to the program, higher costs for homeowner repairs, and a smaller percentage of private insurance payouts than expected. 
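The grant rules and the shortfall concern described above can be sketched in Python. This is a simplification under stated assumptions: the actual award also depends on pre-storm home value and uncovered damage, the report does not say whether the two reductions compound (they are applied sequentially here), and the back-of-the-envelope total assumes, purely for illustration, that every applicant ultimately receives the July 2007 average award.

```python
MAX_AWARD = 150_000  # Road Home cap, per the report


def road_home_award(base_award: float, stays_in_state: bool,
                    was_insured: bool, is_elderly: bool) -> float:
    """Apply the Road Home grant reductions described above (simplified)."""
    award = min(base_award, MAX_AWARD)
    if not stays_in_state and not is_elderly:
        award *= 0.60   # 40 percent reduction for selling and leaving the state
    if not was_insured:
        award *= 0.70   # 30 percent reduction for lacking insurance at storm time
    return award


# Back-of-the-envelope look at the shortfall concern, using the report's
# July 2007 figures and the (strong) assumption that all applicants are paid.
applications = 158_489
average_award = 74_216
projected_need = applications * average_award / 1e9   # roughly $11.8 billion
budgeted = 7.5                                         # billions originally estimated
implied_gap = projected_need - budgeted                # roughly $4.3 billion
```

Under this sketch, an uninsured homeowner who stays and rebuilds would see a $100,000 base award reduced to $70,000, and the implied funding gap of roughly $4.3 billion falls within the $2.9 billion to $5 billion shortfall range cited by state estimates.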
According to the Federal Coordinator for Gulf Coast Rebuilding, CDBG funds were allocated to Louisiana on the basis of a negotiation with the state conducted between January and February 2006. That negotiation considered the provision of federal funding for the state’s need to conduct a homeowner assistance program covering homes that experienced major or severe damage from flooding. The state also requested federal funding at that time to expand the program to assist homeowners who experienced only wind damage; that request was denied, as were similar requests from other Gulf Coast states such as Texas. The Administration requested the negotiated amount from Congress on February 15, 2006. Congress approved that amount, and it was signed into law by the President on June 15, 2006. Subsequently, Louisiana announced the expansion of the Road Home program to cover damage exclusively from wind regardless of the stated intention of the federal allocation, though fully within its statutory authority. In addition, the Executive Director of the LRA testified that Louisiana had not received $1.2 billion in funds from FEMA—assistance that had been part of the Road Home program’s original funding design. Specifically, the state expected FEMA to provide grant assistance through its Hazard Mitigation Grant Program (HMGP)—a program that generally provides assistance to address long-term community safety needs. Louisiana had planned to use this funding to assist homeowners with meeting elevation standards and other storm protection measures as they rebuilt their homes. However, FEMA has asserted that it cannot release the money because the Road Home program discriminates against younger residents. 
Specifically, the program exempts elderly recipients from the 40 percent grant reduction if they choose to leave the state or do not agree to reside in their home as a primary residence at some point during a 3-year period. Although we have not assessed their assumptions, recent estimates from the Road Home program and Louisiana’s state legislative auditor’s office place the potential shortfall in the range of $2.9 billion to $5 billion. While these issues will not be immediately resolved, they raise a number of questions about the potential demands for additional federal funding for the states’ rebuilding efforts. Our ongoing work on various aspects of the CDBG program—including a review of how the affected states developed their funding levels and priorities—will provide insights into these issues. In Mississippi, Katrina’s storm surge destroyed tens of thousands of homes, many of which were located outside FEMA’s designated floodplain and not covered by flood insurance. Using about $3 billion in CDBG funds, Mississippi developed a two-phase program to target homeowners who suffered losses due to the storm surge. Accordingly, Phase I of the program was designed to compensate homeowners whose properties were located outside the floodplain and had maintained hazard insurance at a minimum. Eligible for up to $150,000 in compensation, these homeowners were not subject to a requirement to rebuild. Phase II of the program is designed to award grants to those who received storm surge damage, regardless of whether they lived inside or outside the flood zone or had maintained insurance on their homes. Eligible applicants must have an income at or below 120 percent of the Area Median Income (AMI). Eligible for up to $100,000 in grant awards, these homeowners are not subject to a requirement to rebuild. 
In addition, homeowners who do not have insurance will have their grant reduced by 30 percent, although this penalty does not apply to the “special needs” populations as defined by the state (i.e., elderly, disabled, and low-income). As of July 18, 2007, Mississippi had received 19,277 applications for Phase I of its program and awarded payments to 13,419 eligible homeowners, with an average award of $72,062. In addition, Mississippi had received 7,424 applications for Phase II of its program and had moved an additional 4,130 applications that did not qualify for Phase I assistance to Phase II. The state had awarded 234 grants to eligible homeowners in Phase II, with an average award of $69,448. The National Flood Insurance Program (NFIP) incurred unprecedented storm losses from the 2005 hurricane season. NFIP estimated that, as of January 31, 2007, it had paid approximately $15.7 billion in flood insurance claims, representing approximately 99 percent of all flood claims received. The intent of the NFIP is to pool risk, minimize costs, and distribute burdens equitably among those who will be protected and the general public. The NFIP, by design, is not actuarially sound. Nonetheless, until recent years, the program was largely successful in paying flood losses and operating expenses with policy premium revenues—the funds paid by policyholders for their annual flood insurance coverage. However, because the program’s premium rates have been set to cover losses in an average year, based on program experience that did not include any catastrophic losses, the program has been unable to build sufficient reserves to meet future expected flood losses. Historically, the NFIP has been able to repay funds borrowed from the Treasury to meet its claims obligations. 
However, the magnitude and severity of losses from Hurricane Katrina and the other 2005 hurricanes required the NFIP to obtain borrowing authority of $20.8 billion from the Treasury, an amount the NFIP is unlikely to be able to repay while paying future claims with its current premium income of about $2 billion annually. In addition to the federal funding challenge created by the payment of claims, a key concern raised by the response to the 2005 hurricane season is whether some property-casualty insurance claims for wind-related damages were improperly shifted to the NFIP at the expense of taxpayers. For properties subjected to both high winds and flooding, determinations must be made to assess the damages caused by wind, which may be covered through a property-casualty homeowners policy, and the damages caused by flooding, which may be covered by the NFIP. Disputes over coverage between policyholders and property-casualty insurers from the 2005 hurricane season highlight the challenges of determining the appropriateness of claims for multiple-peril events. The NFIP may continue to face challenges in the future when servicing and validating flood claims from disasters, such as hurricanes, that may involve both flood and wind damages. Our ongoing work addresses insurance issues related to wind versus flood damages, including a review of how such determinations are made, who makes these determinations and how they are regulated, and the ability of FEMA to verify the accuracy of flood insurance claims payments based on the wind and flood damage determinations. Congress has appropriated more than $8 billion to the U.S. Army Corps of Engineers (Corps) for hurricane protection projects in the Gulf Coast. These funds cover repair, restoration, and construction of levees and floodwalls as well as other hurricane protection and flood control projects. These projects are expected to take years and require billions of dollars to complete. 
Estimated total costs for hurricane protection projects are unknown because the Corps is also conducting a study of flood control, coastal restoration, and hurricane protection measures for the southeastern Louisiana coastal region, as required by the 2006 Energy and Water Development Appropriations Act and Department of Defense Appropriations Act. The Corps must propose design and technical requirements to protect the region from a Category 5 hurricane. According to the Corps, alternatives being considered include a structural design consisting of a contiguous line of earthen or concrete walls along southern coastal Louisiana, a nonstructural alternative involving only environmental or coastal restoration measures, or a combination of those alternatives. The Corps’ final proposal is due in December 2007. Although the cost to provide a Category 5 level of protection for the southeastern Louisiana coastal region has not yet been determined, these costs would be in addition to the more than $8 billion already provided to the Corps. The Corps’ December 2007 proposal will also influence future federal funding for coastal wetlands and barrier islands restoration. Since the 1930s, coastal Louisiana has lost more than 1.2 million acres of wetlands, at a rate of 25–35 square miles per year, leaving the Gulf Coast exposed to destructive storm surge. Preliminary estimates of the ultimate cost to complete these restoration efforts range from $15 billion to $45 billion. However, until the Corps develops its plans and the state and local jurisdictions agree on what needs to be done, no reliable estimate is available. 
We are conducting work to understand what coastal restoration alternatives have been identified and how these alternatives would integrate with other flood control and hurricane protection measures, the challenges and estimated costs to restore Louisiana’s coastal wetlands, and the opinions of scientists and engineers on the practicality and achievability of large-scale, comprehensive plans and strategies to restore coastal wetlands to the scale necessary to protect coastal Louisiana. The Gulf Opportunity Zone Act of 2005 provides tax benefits to assist in the recovery from the Gulf Coast hurricanes. From a budgetary perspective, most tax expenditure programs, such as the GO Zones, are comparable to mandatory spending for entitlement programs, in that federal funds flow based on eligibility and formulas specified in authorizing legislation. The 5-year cost of the GO Zones is estimated at $8 billion, and the 10-year cost is estimated at $9 billion. Because Congress and the President must change substantive law to change the cost of these programs, they are relatively uncontrollable on an annual basis. The GO Zone tax benefits chiefly extend, with some modifications, existing tax provisions such as the expensing of capital expenditures, the Low Income Housing Tax Credit (LIHTC), tax-exempt bonds, and the New Markets Tax Credit (NMTC). The 2005 Act increased the limitations in expensing provisions for qualified GO Zone properties. The Act also increased the state limitations in Alabama, Louisiana, and Mississippi on the amount of LIHTC that can be allocated for low-income housing properties in GO Zones. Further, the Act allows these states to issue tax-exempt GO Zone bonds for qualifying residential and nonresidential properties. Finally, the NMTC limitations on the total amount of credits allocated yearly were also increased for qualifying low-income community investments in GO Zones. 
We have a congressional mandate to review the practices employed by the states and local governments in allocating and utilizing the tax incentives provided in the Gulf Opportunity Zone Act of 2005. We have also issued reports on the tax provisions, such as LIHTC and NMTC, now extended to the GO Zones by the 2005 Act. Rebuilding efforts in the Gulf Coast continue amidst questions regarding the total cost of federal assistance, the extent to which federal funds will address the rebuilding demands of the region, and the many decisions left to be made by multiple levels of government. As residents, local and state leaders, and federal officials struggle to respond to these questions, their responses lay a foundation for the future of the Gulf Coast. As states and localities continue to rebuild, there are difficult policy decisions that will confront Congress about the federal government’s continued contribution to the rebuilding effort and the role it might play over the long term in an era of competing priorities. Congress will be faced with many questions as it continues to carry out its critical oversight function in reviewing funding for Gulf Coast rebuilding efforts. Our ongoing and preliminary work on Gulf Coast rebuilding suggests the following questions: How much could it ultimately cost to rebuild the Gulf Coast, and how much of this cost should the federal government bear? How effective are current funding delivery mechanisms—such as PA and CDBG—and should they be modified or supplemented by other mechanisms? What options exist to effectively build in federal oversight to accompany the receipt of federal funds, particularly as federal funding has shifted from emergency response to rebuilding? How can the federal government further partner with state and local governments and the nonprofit and private sectors to leverage public investment in rebuilding? 
What are the “lessons learned” from the Gulf Coast hurricanes, and what changes need to be made to help ensure a more timely and effective rebuilding effort in the future? Mr. Chairman and Members of the committee, this concludes my statement. I would be happy to respond to any questions you may have at this time. For information about this testimony, please contact Stanley J. Czerwinski, Director, Strategic Issues, at (202) 512-6806 or czerwinskis@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Kathleen Boggs, Peter Del Toro, Jeffrey Miller, Carol Patey, Brenda Rabinowitz, Michelle Sager, and Robert Yetvin. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The devastation caused by the Gulf Coast hurricanes presents the nation with unprecedented challenges as well as opportunities to reexamine shared responsibility among all levels of government. All levels of government, together with the private and nonprofit sectors, will need to play a critical role in the process of choosing what, where, and how to rebuild. Agreeing on what the costs are, what federal funds have been provided, and who will bear the costs will be key to the overall rebuilding effort. This testimony (1) places federal assistance provided to date in the context of damage estimates for the Gulf Coast, and (2) discusses key federal programs that provide rebuilding assistance to the Gulf Coast states. In doing so, GAO highlights aspects of rebuilding likely to place continued demands on federal resources. GAO visited the Gulf Coast region, reviewed state and local documents, and interviewed federal, state, and local officials. GAO's ongoing work on these issues focuses on the use of federal rebuilding funds and administration of federal programs in the Gulf Coast region. To respond to the Gulf Coast devastation, the federal government has already committed a historically high level of resources--more than $116 billion--through an array of grants, loan subsidies, and tax relief and incentives. A substantial portion of this assistance was directed to emergency assistance and meeting short-term needs arising from the hurricanes, leaving a smaller portion for longer-term rebuilding. To understand the long-term financial implications of Gulf Coast rebuilding, it is helpful to view potential federal assistance within the context of overall estimates of the damages incurred by the region. Some estimates put capital losses at a range of $70 billion to more than $150 billion, while the state of Louisiana estimated that the economic effect on its state alone could reach $200 billion. 
These estimates raise questions regarding how much additional assistance may be needed to help the Gulf Coast continue to rebuild, and who should be responsible for providing the related resources. Demands for additional federal resources to rebuild the Gulf Coast are likely to continue. The bulk of federal rebuilding assistance provided to the Gulf Coast states funds two key programs--the Federal Emergency Management Agency's Public Assistance (PA) program and the Department of Housing and Urban Development's Community Development Block Grant (CDBG) program. In addition to funding PA and CDBG, the federal government's recovery and rebuilding assistance also includes payouts from the National Flood Insurance Program as well as funds for levee restoration and repair, coastal wetlands and barrier islands restoration, and benefits provided through Gulf Opportunity Zone tax expenditures. As states and localities continue to rebuild, there are difficult policy decisions that will confront Congress about the federal government's continued contribution to the rebuilding effort and the role it might play over the long-term in an era of competing priorities. GAO's ongoing and preliminary work on Gulf Coast rebuilding suggests the following questions: How much could it ultimately cost to rebuild the Gulf Coast and how much of this cost should the federal government bear? How effective are current funding delivery mechanisms--such as PA and CDBG--and should they be modified or supplemented by other mechanisms? What options exist to effectively build in federal oversight to accompany the receipt of federal funds, particularly as federal funding has shifted from emergency response to rebuilding? How can the federal government further partner with state and local governments and the nonprofit and private sectors to leverage public investment in rebuilding? 
What are the "lessons learned" from the Gulf Coast hurricanes, and what changes need to be made to help ensure a more timely and effective rebuilding effort in the future?
The nation’s transportation system is a vast, interconnected network of diverse modes. Key modes of transportation include aviation; highways; motor carrier (i.e., trucking); motor coach (i.e., intercity bus); maritime; pipeline; rail (passenger and freight); and transit (e.g., buses, subways, ferry boats, and light rail). The transportation modes work in harmony to facilitate mobility through an extensive network of infrastructure and operators, as well as through the vehicles and vessels that permit passengers and freight to move within the system. For example, the nation’s transportation system moves over 30 million tons of freight and provides approximately 1.1 billion passenger trips each day. The diversity and size of the transportation system make it vital to our economy and national security, including military mobilization and deployment. Private industry, state and local governments, and the federal government all have roles and responsibilities in securing the transportation system. Private industry owns and operates a large share of the transportation system. For example, almost 2,000 pipeline companies and 571 railroad companies own and operate the pipeline and freight railroad systems, respectively. Additionally, 83 passenger air carriers and 640,000 interstate motor coach and motor carrier companies operate in the United States. State and local governments also own significant portions of the highways, transit systems, and airports in the country. For example, state and local governments own over 90 percent of the total mileage of highways. State and local governments also administer and implement regulations for different sectors of the transportation system and provide protective and emergency response services through various agencies. Although the federal government owns a limited share of the transportation system, it issues regulations, establishes policies, provides funding, and/or sets standards for the different modes of transportation. 
The federal government uses a variety of policy tools, including grants, loan guarantees, tax incentives, regulations, and partnerships, to motivate or mandate state and local governments or the private sector to help address security concerns. Prior to September 11, DOT was the primary federal entity involved in transportation security matters. However, in response to the attacks on September 11, Congress passed the Aviation and Transportation Security Act (ATSA), which created TSA within DOT and defined its primary responsibility as ensuring security in all modes of transportation. The act also gives TSA regulatory authority over all transportation modes. Since its creation in November 2001, TSA has focused primarily on meeting the aviation security deadlines contained in ATSA. With the passage of the Homeland Security Act on November 25, 2002, TSA, along with over 20 other agencies, was transferred to the new Department of Homeland Security (DHS). The United States maintains the world’s largest and most complex national transportation system. Improving the security of such a system is fraught with challenges for both public and private entities. To provide safe transportation for the nation, these entities must overcome issues common to all modes of transportation as well as issues specific to the individual modes of transportation. Although each mode of transportation is unique, they all face some common challenges in trying to enhance security. Common challenges stem from the extensiveness of the transportation system, the interconnectivity of the system, funding security improvements, and the number of stakeholders involved in transportation security. The size of the transportation system makes it difficult to adequately secure. The transportation system’s extensive infrastructure crisscrosses the nation and extends beyond our borders to move millions of passengers and tons of freight each day. 
The extensiveness of the infrastructure as well as the sheer volume of freight and passengers moved through the system creates an almost limitless number of potential targets for terrorists. Furthermore, as industry representatives and transportation security experts repeatedly noted, the extensiveness of the infrastructure makes equal protection for all assets impossible. Protecting transportation assets from attack is made more difficult because of the tremendous variety of transportation operators. Some are multibillion-dollar enterprises, and others have very limited facilities and very little traffic. Some are public agencies, such as state departments of transportation, and some are private businesses. Some transportation operators carry passengers, and others haul freight. Additionally, the type of freight moved through the different modes is similarly varied. For example, the maritime, motor carrier, and rail operators haul freight as diverse as dry bulk (grain) and hazardous materials. Additional challenges are created by the interconnectivity and interdependency among the transportation modes and between the transportation sector and nearly every other sector of the economy. The transportation system is interconnected or intermodal because passengers and freight can use multiple modes of transportation to reach a destination. For example, from its point of origin to its destination, a piece of freight, such as a shipping container, can move from ship to train to truck. (See fig. 1.) The interconnected nature of the transportation system creates several security challenges. First, the effects of events directed at one mode of transportation can ripple throughout the entire system. For example, when the port workers in California, Oregon, and Washington went on strike in 2002, the railroads saw their intermodal traffic decline by almost 30 percent during the first week of the strike, compared with the year before. 
Second, the interconnecting modes can contaminate each other—that is, if a particular mode experiences a security breach, the breach could affect other modes. An example of this would be if a shipping container that held a weapon of mass destruction arrived at a U.S. port where it was placed on a truck or train. In this case, although the original security breach occurred in the port, the rail or trucking industry would be affected as well. Thus, even if operators within one mode established high levels of security, they could be affected by the security efforts, or lack thereof, of the other modes. Third, intermodal facilities where a number of modes connect and interact—such as ports—are potential targets for attack because of the presence of passengers, freight, employees, and equipment at these facilities. Interdependencies also exist between transportation and nearly every other sector of the economy. Consequently, an event that affects the transportation sector can have serious impacts on other industries. For example, when the war in Afghanistan began in October 2001, the rail industry restricted the movement of many hazardous materials, including chlorine, because of a heightened threat of a terrorist attack. However, within days, many major water treatment facilities reported that they were running out of chlorine, which they use to treat drinking water, and would have to shut down operations if chlorine deliveries were not immediately resumed. Securing the transportation system is made more difficult because of the number of stakeholders involved. As illustrated in figure 2, numerous entities at the federal, state, and local levels, including over 20 federal entities and thousands of private sector businesses, play a key role in transportation security. For example, the Departments of Energy, Transportation, and Homeland Security; state governments; and about 2,000 pipeline operators are all responsible for securing the pipeline system. 
The number of stakeholders involved in transportation security can lead to communication challenges, duplication, and conflicting guidance. Representatives from several state and local government and industry associations told us that their members are receiving different messages from the various federal agencies involved in transportation security. For instance, one industry representative noted that both TSA and DOT asked the industry to implement additional security measures when the nation’s threat condition was elevated to orange at the beginning of the Iraq War; however, TSA and DOT were not consistent in what they wanted done—that is, they were asking for different security measures. Moreover, many representatives commented that the federal government needs to better coordinate its security efforts. These representatives noted that dealing with multiple agencies on the same issues and topics is frustrating and time consuming for the transportation sector. The number of stakeholders also makes it difficult to achieve the needed cooperation and consensus to move forward with security efforts. As we have noted in past reports, coordination and consensus-building are critical to successful implementation of security efforts. Transportation stakeholders can have inconsistent goals or interests, which can make consensus-building challenging. For example, from a safety perspective, vehicles that carry hazardous materials should be required to have placards that identify the contents of a vehicle so that emergency personnel know how best to respond to an incident. However, from a security perspective, identifying placards on vehicles that carry hazardous materials make them a potential target for attack. According to transportation security experts and state and local government and industry representatives we contacted, funding is the most pressing challenge to securing the nation’s transportation system. 
Although some security improvements are inexpensive, such as removing trash cans from subway platforms, most require substantial funding. Additionally, given the large number of assets to protect, the sum of even relatively less expensive investments can be cost-prohibitive. For example, reinforcing shipping containers to make them more blast-resistant is one way to improve security, at a cost of about $15,000 per container. With several million shipping containers in use, however, this tactic would cost billions of dollars if all of them were reinforced. The total cost of enhancing the security of the entire transportation system is unknown; however, given the size of the system, it could amount to tens of billions of dollars. The current economic environment makes this a difficult time for private industry or state and local governments to make security investments. According to industry representatives and experts we contacted, most of the transportation industry operates on a very thin profit margin, making it difficult for the industry to pay for additional security measures. The sluggish economy has further weakened the transportation industry’s financial condition by decreasing ridership and revenues. For example, airlines are in the worst fiscal crisis in their history, and several have filed for bankruptcy. Similarly, the motor coach and motor carrier industries and Amtrak report decreased revenues because of the slow economy. In addition, nearly every state and local government is facing a large budget deficit for fiscal year 2004. For example, the National Governors Association estimates that states are facing a total budget shortfall of $80 billion for fiscal year 2004. Given the tight budget environment, state and local governments and transportation operators must make difficult trade-offs between transportation security investments and other needs, such as service expansion and equipment upgrades. 
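The container example above can be made concrete with back-of-the-envelope arithmetic. The per-container cost comes from this statement; the fleet size below is an assumed illustrative figure, since the statement says only "several million" containers are in use:

```python
# Illustrative arithmetic only: the per-container cost is from the
# statement; the fleet size is an assumed stand-in for "several million."
cost_per_container = 15_000          # dollars per reinforced container
containers_in_use = 6_000_000        # assumed illustrative fleet size
total_cost = cost_per_container * containers_in_use
print(f"${total_cost / 1e9:.0f} billion")  # prints "$90 billion"
```

Even at half that assumed fleet size, the total would still run to tens of billions of dollars, consistent with the statement's observation that system-wide security costs could reach that scale.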
According to the National Association of Counties, many local governments are planning to defer some maintenance of their transportation infrastructure to pay for some security enhancements. Further exacerbating the problem of funding security improvements are the additional costs the transportation sector incurs when the federal government elevates the national threat condition. Industry representatives stated that operators tighten security, such as increasing security patrols, when the national threat condition is raised or intelligence information suggests an increased threat against their mode. However, these representatives stated that these additional measures drain resources and are not sustainable. For example, Amtrak estimates that it spends an additional $500,000 per month for police overtime when the national threat condition is increased. Transportation industry representatives also noted that employees are diverted from their regular duties to implement additional security measures, such as guarding entranceways, in times of increased security, which hurts productivity. The federal government has provided additional funding for transportation security since September 11, but demand has far outstripped the additional amounts made available. For example, Congress appropriated a total of $241 million for grants for ports, motor carriers, and Operation Safe Commerce in 2002. However, as table 1 shows, the grant applications TSA has received for these security grants totaled $1.8 billion—nearly 8 times the amount available. Due to the costs of security enhancements and the transportation industries’ and state and local governments’ tight budget environments, the federal government is likely to be viewed as a source of funding for at least some of these enhancements. 
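The gap between grant demand and appropriations described above can be checked directly from the two figures in this statement:

```python
# Figures from the statement: $1.8 billion in grant applications
# against $241 million appropriated for the 2002 security grants.
requested = 1_800_000_000
available = 241_000_000
ratio = requested / available
print(round(ratio, 1))  # prints 7.5, i.e. nearly 8 times the amount available
```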
However, given the constraints on the federal budget as well as competing claims for federal assistance, requests for federal funding for transportation security enhancements will likely continue to exceed available resources. Another challenge is balancing the potential economic impacts of security enhancements with the benefits of such measures. Although there is broad support for greater security, this task is a difficult one because the nation relies heavily on a free and expeditious flow of goods. Particularly with “just-in-time” deliveries, which require a smooth and expeditious flow through the transportation system, delays or disruptions in the supply chain could have serious economic impacts. As the Coast Guard Commandant stated about the flow of goods through ports, “even slowing the flow long enough to inspect either all or a statistically significant random selection of imports would be economically intolerable.” Furthermore, security measures may have economic and competitive ramifications for individual modes of transportation. For instance, if the federal government imposed a particular security requirement on the rail industry and not on the motor carrier industry, the rail industry might incur additional costs and/or lose customers to the motor carrier industry. Striking the right balance between increasing security and protecting the economic vitality of the national economy and individual modes will remain an important and difficult task. In addition to the overarching challenges that transportation stakeholders will face in attempting to improve transportation security, they also face a number of challenges specific to the aviation, maritime, and land transportation modes. Although aviation security has received a significant amount of attention and funding since September 11, more work is needed. In general, transportation security experts believe that the aviation system is more secure today than it was prior to September 11. 
However, aviation experts and TSA officials noted that significant vulnerabilities remain. For example: Perimeter security: Terrorists could launch attacks, such as firing shoulder-fired missiles, from a location just outside an airport’s perimeter. Since September 11, airport operators have increased their patrols of airport perimeter areas, but industry officials state that they do not have enough resources to completely protect against these attacks. Air cargo security: Although TSA has focused much effort and funding on ensuring that bombs and other threat items are not carried onto planes by passengers or in their luggage, vulnerabilities exist in securing the cargo carried aboard commercial passenger and all-cargo aircraft. For example, employees of shippers and freight forwarders are not universally subject to background checks. Theft is also a major problem in air cargo shipping, indicating that unauthorized personnel may still be gaining access to air cargo shipments. Air cargo shipments pass through several hands in going from sender to recipient, making it challenging to implement a system that provides adequate security for air cargo. According to TSA officials, TSA is developing a strategic plan to address air cargo security and has undertaken a comprehensive outreach process to strengthen security programs across the industry. General aviation security: Although TSA has taken several actions related to general aviation since September 11, this segment of the industry remains potentially more vulnerable than commercial aviation. For example, general aviation pilots are not screened prior to taking off, and the contents of a plane are not examined at any point. According to TSA, solutions that can be implemented relatively easily at the nation’s commercial airports are not practical at the 19,000 general aviation airports. 
It would be very difficult to prevent a general aviation pilot intent on committing a terrorist attack with his or her aircraft from doing so. The vulnerability of the system was illustrated in January 2002, when a teenage flight student from Florida crashed his single-engine airplane into a Tampa skyscraper. TSA is working with the appropriate stakeholders to close potential security gaps and to raise the security standards across this diverse segment of the aviation industry. Maritime and land transportation systems have their own unique security vulnerabilities. For example, maritime and land transportation systems generally have an open design, meaning the users can access the system at multiple points. The systems are open by design so that they are accessible and convenient for users. In contrast, the aviation system is housed in closed and controlled locations with few entry points. The openness of the maritime and land transportation systems can leave them vulnerable because transportation operators cannot monitor or control who enters or leaves the systems. However, adding security measures that restrict the flow of passengers or freight through the systems could have serious consequences for commerce and the public. Individual maritime and land transportation modes also have unique challenges and vulnerabilities. For example, representatives from the motor carrier industry noted that the high turnover rate (about 40 to 60 percent) of drivers means that motor carrier operators must continually conduct background checks on new drivers, which is expensive and time-consuming. Additionally, as we noted in our report on rail safety and security, the temporary storage of hazardous materials in unsecured or unmonitored rail cars while awaiting delivery to their ultimate destinations is a potential vulnerability. Specifically, unmonitored chemical cars could develop undetected leaks that could threaten the nearby population and environment. 
In addition, representatives from the motor coach industry commented that the number of used motor coaches on the market, coupled with the lack of guidance or requirements on buying or selling these vehicles, is a serious vulnerability. In particular, there are approximately 5,000 used motor coaches on the market; however, there is very little information on who is selling and buying them, nor is there any consistency among motor coach operators in whether they remove their logos from the vehicles before they are sold. These vehicles could be used as weapons or to transport weapons. Federal Motor Carrier Safety Administration officials told us they have not issued guidance to the industry on this potential vulnerability because TSA is responsible for security and therefore would be responsible for issuing such guidance. Since September 11, transportation operators and state and local governments have been working to strengthen security, according to associations we contacted. Although security was a priority before September 11, the terrorist attacks elevated the importance and urgency of transportation security for transportation operators and state and local governments. According to representatives from a number of industry associations we interviewed, transportation operators have implemented new security measures or increased the frequency or intensity of existing activities. Some of the most common measures cited include conducting vulnerability or risk assessments, tightening access control, intensifying security presence, increasing emergency drills, developing or revising security plans, and providing additional training. (Figure 3 is a photograph from an annual emergency drill conducted by the Washington Metropolitan Area Transit Authority.) As we have previously reported, state and local governments are critical stakeholders in the nation’s homeland security efforts. This is equally true in securing the nation’s transportation system. 
State and local governments play a critical role, in part, because they own a significant portion of the transportation infrastructure, such as airports, transit systems, highways, and ports. For example, state and local governments own over 90 percent of the total mileage of the highway system. Even when state and local governments are not the owners or operators, they nonetheless are directly affected by the transportation modes that run through their jurisdictions. Consequently, the responsibility for protecting this infrastructure and responding to emergencies involving the transportation infrastructure often falls on state and local governments. Security efforts of state and local governments have included developing counterterrorism plans, participating in training and security-related research, participating in transportation operators' emergency drills and table-top exercises, conducting vulnerability assessments of transportation assets, and participating in emergency planning sessions with transportation operators. Some state and local governments have also hired additional law enforcement personnel to patrol transportation assets. Much of the funding for these efforts has been covered by the state and local governments, with the bulk of the expenses going to personnel costs, such as for additional law enforcement officers and overtime.

Congress, DOT, TSA, and other federal agencies have taken numerous steps to enhance transportation security since September 11. The roles of the federal agencies in securing the nation's transportation system, however, are in transition. Prior to September 11, DOT had primary responsibility for the security of the transportation system. In the wake of September 11, Congress created TSA and gave it responsibility for the security of all modes of transportation. However, DOT and TSA have not yet formally defined their roles and responsibilities in securing all modes of transportation.
Furthermore, TSA is moving forward with plans to enhance transportation security. For example, TSA plans to issue security standards for all modes. DOT modal administrations are also continuing their security efforts for different modes of transportation.

Congress has acted to enhance the security of the nation's transportation system since September 11. In addition to passing the Aviation and Transportation Security Act (ATSA), Congress passed a number of other key pieces of legislation aimed at improving transportation security. For example, Congress passed the USA PATRIOT Act of 2001, which mandates federal background checks of individuals operating vehicles carrying hazardous materials; and the Homeland Security Act, which created DHS and moved TSA to the new department. Congress also provided funding for transportation security enhancements through various appropriations acts. For example, the 2002 Supplemental Appropriations Act, in part, provided (1) $738 million for the installation of explosives detection systems in commercial service airports, (2) $125 million for port security activities, and (3) $15 million to enhance the security of intercity bus operations.

Federal agencies, notably TSA and DOT, have also taken steps to enhance transportation security since September 11. In its first year of existence, TSA worked to establish its organization and focused primarily on meeting the aviation security deadlines contained in ATSA. In January 2002, TSA had 13 employees charged with securing the nation's transportation system; 1 year later, TSA had about 65,000 employees. TSA reports that it met over 30 deadlines during 2002 to improve aviation security, including two of its most significant deadlines—to deploy federal passenger screeners at airports across the nation by November 19, 2002, and to screen every piece of checked baggage for explosives by December 31, 2002.
According to TSA, other completed TSA activities included recruiting, hiring, training, and deploying about 56,000 federal screeners; awarding grants for port security; and implementing a performance management system and strategic planning activities to create a results-oriented culture. As TSA worked to establish itself and improve the security of the aviation system, DOT modal administrations acted to enhance the security of air, land, and maritime transportation. (See app. I for a table listing the actions taken by DOT modal administrations since September 11.) The actions taken by the DOT modal administrations have varied, in part because of differences in their authority and resource limitations. For example, FTA launched a multipart initiative for mass transit agencies that provided grants for emergency drills, offered free security training, conducted security assessments at 36 transit agencies, provided technical assistance, and invested in research and development. The Federal Motor Carrier Safety Administration developed three courses for motor coach drivers.

In addition to TSA and DOT modal administrations, other federal agencies have also taken actions to improve security. For example, the Bureau of Customs and Border Protection (CBP), previously known as the U.S. Customs Service, has launched a number of initiatives aimed at strengthening the security of the U.S. border. Some of the specific security initiatives that CBP has implemented include establishing the Customs-Trade Partnership Against Terrorism (C-TPAT), a joint government-business initiative aimed at securing the supply chain of global trade against terrorist exploitation, and launching the Container Security Initiative (CSI), which is designed specifically to secure ocean-going sea containers.
In addition, CBP has developed and/or deployed tools to detect weapons of mass destruction in cargo containers and vehicles, such as the new mobile gamma ray imaging devices pictured in figure 4.

TSA is moving forward with efforts to secure the entire transportation system. TSA has adopted a systems approach—that is, a holistic rather than a modal approach—to securing the transportation system. In addition, TSA is using risk management principles to guide its decision-making. TSA is also planning to establish security standards for all modes of transportation and is launching a number of new security efforts for the maritime and land transportation modes. Using the systems approach, TSA plans to address the security of the entire transportation system as a whole, rather than focusing on individual modes of transportation. According to TSA officials, using a systems approach to security is appropriate for several reasons. First, the transportation system is intermodal, interdependent, and international. Given the intermodalism of the system, incidents in one mode of transportation could affect other modes. Second, it is important not to drive terrorism from one mode of transportation to another mode because of perceived lesser security—that is, make a mode of transportation a more attractive target because another mode is "hardened" with additional security measures. Third, it is important that security measures for one mode of transportation are not overly stringent or too economically challenging compared with the measures used for other modes. Fourth, it is important that the attention on one aspect of transportation security (e.g., cargo, infrastructure, or passengers) does not leave the other aspects vulnerable.

TSA has also adopted a risk management approach for its efforts to enhance the security of the nation's transportation system.
A risk management approach is a systematic process to analyze threats, vulnerabilities, and the criticality (or relative importance) of assets to better support key decisions in order to link resources with prioritized efforts. (See app. II for a description of the key elements of a risk management approach.) The highest priorities emerge where the three elements of risk management overlap. For example, transportation infrastructure that is determined to be a critical asset, vulnerable to attack, and a likely target would be most at risk and therefore would be a higher priority for funding compared with infrastructure that was only vulnerable to attack. According to TSA officials, risk management principles will drive all decisions—from standard-setting to funding priorities to staffing. Using risk management principles to guide decision-making is a good strategy, given the difficult trade-offs TSA will likely have to make as it moves forward with its security efforts. We have advocated using a risk management approach to guide federal programs and responses to better prepare against terrorism and other threats and to better direct finite national resources to areas of highest priority. As representatives from local government and industry associations and transportation security experts repeatedly noted, the size of the transportation system precludes equal protection for all assets; moreover, the risks vary by transportation assets within modes and by modes. In addition, requests for funding for transportation security enhancements will likely exceed available resources. Risk management principles can help TSA determine security priorities and identify appropriate solutions. TSA plans to issue national security standards for all modes of transportation. The federal government has historically set security standards for the aviation sector. 
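The risk-prioritization logic described above—priorities emerging where threat, vulnerability, and criticality overlap—can be illustrated with a brief sketch. The asset names and numeric scores below are hypothetical illustrations only, not actual assessments, and TSA's methodology is not specified at this level of detail:

```python
# Illustrative sketch of risk-based prioritization. Each asset is scored
# (here on a 0-1 scale) for threat (likelihood of attack), vulnerability
# (ease of attack), and criticality (consequence of loss). Assets scoring
# high on all three elements rise to the top of the priority list.

def risk_score(threat, vulnerability, criticality):
    """Combine the three risk elements into a single score."""
    return threat * vulnerability * criticality

# Hypothetical assets with notional scores -- not real assessments.
assets = {
    "major bridge": (0.7, 0.8, 0.9),   # critical, vulnerable, likely target
    "transit hub": (0.6, 0.7, 0.8),
    "rural bus depot": (0.2, 0.9, 0.1),  # vulnerable but low consequence
}

# Rank assets from highest risk to lowest.
ranked = sorted(assets, key=lambda a: risk_score(*assets[a]), reverse=True)
print(ranked)
```

Note how the rural bus depot, although highly vulnerable, ranks last: under this logic, vulnerability alone does not drive funding priority unless threat and criticality are also high.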
For instance, prior to the passage of ATSA, FAA set security standards that the airlines were required to follow in several areas, including screening equipment, screener qualifications, and access control systems. In contrast, prior to the September 11 attacks, limited statutory authority existed to require measures to ensure the security of the maritime and land transportation systems. According to a TSA report, the existing regulatory framework leaves the maritime and land transportation systems unacceptably vulnerable to terrorist attack. For example, the rail, transit, and motor coach transportation systems are subject to no mandatory security requirements, resulting in little or no screening of passengers, baggage, or crew. Additionally, seaborne passenger vessel and seaport terminal operators have inconsistent levels and methods of screening and are largely free to set their own rules about the hiring and training of security personnel. Hence, TSA will set standards to ensure consistency among modes and across the transportation system and to reduce the transportation system's vulnerability to attacks. According to TSA officials and documents, TSA's standards will be performance-, risk-, and threat-based and may be mandatory. More specifically:

Standards will be performance-based. Rather than being prescriptive standards, TSA standards will be performance-based, which will allow transportation operators to determine how best to achieve the desired level of security. TSA officials believe that performance-based standards provide for operator flexibility, allow operators to use their professional judgment in enhancing security, and encourage technology advancement.

Standards will be risk-based. Standards will be set for areas for which assessments of the threats, vulnerabilities, and criticality indicate that an attack would have a national impact. A number of factors could be considered in determining "national impact," such as fatalities and economic damage.
Standards will be threat-based. The standards will be tied to the national threat condition and/or local threats. As the threat condition escalates, the standards will require transportation operators to implement additional countermeasures.

Standards may be mandatory. The standards will be mandatory when the risk level is too high or unacceptable. TSA officials stated that in these cases, mandatory standards are needed to ensure accountability. In addition, according to TSA officials, voluntary requirements put security-conscious transportation operators that implement security measures at a competitive disadvantage—that is, they have spent money that their competitors may not have spent. This creates a disincentive for transportation operators to implement voluntary requirements. TSA officials believe that mandatory standards will reduce this problem. In determining whether mandatory standards are needed, TSA will review the results of criticality and vulnerability assessments, current best practices, and voluntary compliance opportunities in conjunction with the private sector and other government agencies.

Although TSA officials expect some level of resistance to the standards by the transportation industry, they believe that their approach of using risk-, threat-, and performance-based standards will increase the acceptance of the standards. For example, performance-based standards allow for more operator flexibility in implementing the standards, compared with rigid, prescriptive standards. Moreover, TSA plans to issue only a limited number of standards—that is, standards will be issued only when assessments of the threats, vulnerabilities, and criticality indicate that the level of risk is too high or unacceptable. TSA also expects some level of resistance to the standards from DOT modal administrations. Although TSA will establish the security standards, TSA expects that they will be administered and implemented by existing agencies and organizations.
DOT modal administrations may be reluctant to assume this role because doing so could alter their relationships with the industry. Historically, the missions of DOT surface transportation modal administrations have largely focused on maintaining operations and improving service and safety, not regulating security. Moreover, the authority to regulate security varies by DOT modal administration. For example, FTA has limited authority to regulate and oversee security at transit agencies. In contrast, FRA has regulatory authority for rail security, and DOT’s Office of Pipeline Safety has responsibility for writing safety and security regulations for liquefied natural gas storage facilities. In addition, DOT modal administrations may be reluctant to administer and implement standards because of resource concerns. FHWA officials commented that given the current uncertainty about the standards and their impacts, FHWA is reluctant to commit, in advance, staff or funding to enforce new security standards. Because transportation stakeholders will be involved in administering, implementing, and/or enforcing TSA standards, stakeholder buy-in is critical to the success of this initiative. Compromise and consensus on the part of stakeholders are also necessary. However, achieving such consensus and compromise may be difficult, given the conflicts between some stakeholders’ goals and interests. Transportation stakeholders we contacted also expressed a number of concerns about TSA’s plan to issue security standards for all modes of transportation. For example, industry associations expressed concerns that the standards would come in the form of unfunded mandates—that is, the federal government would not provide funding to implement mandatory standards. According to the industry and state and local government associations we spoke to, unfunded mandates create additional financial burdens for transportation operators, who are already experiencing financial difficulties. 
Industry representatives also expressed concern that TSA has not adequately included the transportation industry in its development of standards. Many industry representatives and some DOT officials we met with were unsure of whether TSA was issuing standards, what the standards would entail, or the time frames for issuing the standards. The uncertainty about the pending standards can lead to confusion and/or inaction. For example, Amtrak officials noted that they are reluctant to spend money to implement certain security measures because they are worried that TSA will subsequently issue standards that will require Amtrak to redo its efforts. Transportation stakeholders also raised other concerns about TSA's plans to issue standards, including questioning whether TSA has the necessary expertise to develop appropriate standards and whether mandatory standards, as opposed to voluntary standards, are prudent.

TSA is also working on a number of additional security efforts, such as establishing the Transportation Workers Identification Card (TWIC) program; developing the next generation of the Computer Assisted Passenger Pre-Screening System; developing a national transportation system security plan; and exploring methods to integrate operations and security, among other things. The TWIC program is intended to improve access control for the 12 million transportation workers who require unescorted physical or cyber access to secure areas of the nation's transportation modes by establishing a uniform, nationwide standard for secure identification of transportation workers. Specifically, TWIC will combine standard background checks and biometrics so that a worker can be positively matched to his or her credential. Once the program is fully operational, the TWIC would be the standard credential for transportation workers and would be accepted by all modes of transportation.
According to TSA, developing a uniform, nationwide standard for identification will minimize redundant credentialing and background checks. As TSA moves forward with new security initiatives, DOT modal administrations are also continuing their security efforts and, in some cases, launching new security initiatives. For example, FHWA is coordinating a series of workshops this year on emergency response and preparedness for state departments of transportation and other agencies. FTA also has a number of initiatives currently under way in the areas of public awareness, research, training, technical assistance, and intelligence sharing. For example, FTA developed a list of the top 20 security actions transit agencies should implement and is currently working with transit agencies to assist them in implementing these measures. FAA is also continuing its efforts to enhance cyber security in the aviation system. Although the primary responsibility for securing the aviation system was transferred to TSA, FAA remains responsible for protecting the nation’s air traffic control system—both the physical security of its air traffic control facilities and computer systems. The air traffic control system’s computers help the nation’s air traffic controllers to safely direct and separate traffic—sabotaging this system could have disastrous consequences. FAA is moving forward with efforts to increase the physical security of its air traffic control facilities and ensure that contractors who have access to the air traffic control system undergo background checks. The roles and responsibilities of TSA and DOT in transportation security have yet to be clearly delineated, which creates the potential for duplicating or conflicting efforts as both entities move forward with their security efforts. DOT modal administrations were primarily responsible for the security of the transportation system prior to September 11. 
In November 2001, Congress passed ATSA, which created TSA and gave it primary responsibility for securing all modes of transportation. However, during TSA’s first year of existence, TSA’s main focus was on aviation security—more specifically, on meeting ATSA deadlines. While TSA was primarily focusing on aviation security, DOT modal administrations launched various initiatives to enhance the security of the maritime and land transportation modes. With the immediate crisis of meeting many aviation security deadlines behind it, TSA has been able to focus more on the security of all modes of transportation. Legislation has not specifically defined TSA’s role and responsibilities in securing all modes of transportation. In particular, ATSA does not specify TSA’s role and responsibilities in securing the maritime and land transportation modes in detail as it does for aviation security. For instance, the act does not set deadlines for TSA to implement certain transit security requirements. Instead, the act simply states that TSA is responsible for ensuring security in all modes of transportation. The act also did not eliminate the existing statutory responsibilities for DOT modal administrations to secure the different transportation modes. Moreover, recent legislation indicates that DOT still has security responsibilities. In particular, the Homeland Security Act of 2002 states that the Secretary of Transportation is responsible for the security as well as the safety of rail and the transport of hazardous materials by all modes. To clarify their roles and responsibilities in transportation security, DOT modal administrations and TSA planned to develop memorandums of agreement. The purpose of these documents was to define the roles and responsibilities of the different agencies for transportation security and address a variety of issues, including separating safety and security activities, interfacing with the transportation industry, and establishing funding priorities. 
TSA and the DOT modal administrations worked for months to develop the memorandums of agreement and the draft agreements were presented to senior DOT and TSA management for review in early spring of this year. According to DOT’s General Counsel, with the exception of the memorandum of agreement between FAA and TSA, the draft memorandums were very general and did not provide much clarification. Consequently, DOT and TSA decided not to sign the memorandums of agreement, except for the memorandum of agreement between FAA and TSA, which was signed on February 28, 2003. The General Counsel suggested several reasons why the majority of the draft memorandums of agreement were too general. First, as TSA’s departure date approached—that is, the date that TSA transferred from DOT to DHS—TSA and DOT modal administration officials may have grown concerned about formally binding the organizations to specific roles and responsibilities. Second, the working relationships between TSA and most of the DOT modal administrations are still very new; as a result, all of the potential issues, problem areas, or overlap have yet to be identified. Thus, identifying items to include in the memorandums of agreement was more difficult. Rather than execute memorandums of agreement, the Secretary of Transportation and the Administrator of TSA exchanged correspondence that commits each entity to continued coordination and collaboration on security measures. In the correspondence, the Secretary and Administrator also agreed to use the memorandum of agreement between TSA and FAA as a framework for their interactions on security matters for all other modes. TSA and DOT officials stated that they believe memorandums of agreement are a good strategy for delineating roles and responsibilities and said that they would be open to using memorandums of agreement in the future. 
Transportation security experts and representatives of state and local government and industry associations we contacted generally believe that the transportation system is more secure today than it was prior to September 11. Transportation stakeholders have worked hard to strengthen the security of the system. Nevertheless, transportation experts, industry representatives, and federal officials all recommend that more work be done. Transportation experts and state and local government and industry representatives identified a number of actions that, in their view, the federal government should take to enhance security, including clarifying federal roles and coordinating federal efforts, developing a transportation security strategy, funding security enhancements, investing in research and development, and providing better intelligence information and related guidance. Specifically:

Clarify federal roles and responsibilities. The lack of clarity about the roles and responsibilities of federal entities in transportation security creates the potential for confusion, duplication, and conflicts. Understanding roles, responsibilities, and whom to call is crucial in an emergency. However, representatives from several industry associations stated that their members were unclear about which agency to contact for their various security concerns and which agency has oversight for certain issues. Furthermore, they said that they do not have contacts within these agencies. As mentioned earlier, several industry representatives reported that their members are receiving different messages from various federal agencies involved in transportation security, which creates confusion and frustration within the industry. According to industry representatives and transportation security experts, uncertainty about federal roles and the lack of coordination are straining intergovernmental relationships, draining resources, and raising the potential for problems in responding to terrorism.
One industry association told us, for instance, that it has been asked by three different federal agencies to participate in three separate studies of the same issue.

Establish a national transportation strategy. A national strategy is crucial for helping stakeholders identify priorities, leveraging resources, establishing stakeholder performance expectations, and creating incentives for stakeholders to improve security. Currently, local government associations view the absence of performance expectations—coupled with limited threat information—as a major obstacle in focusing their people and resources on high-priority threats, particularly at elevated threat levels. The experts also noted that modal strategies—no matter how complete—cannot address the complete transportation security problem and will leave gaps in preparedness. As mentioned earlier, TSA is in the process of developing a national transportation system security plan, which, according to the Deputy Administrator of TSA, will provide an overarching framework for the security of all modes.

Provide funding for needed security improvements. Although an overall security strategy is a prerequisite to investing wisely, providing adequate funding also is essential, according to experts we contacted. Setting security goals and strategies without adequate funding diminishes stakeholders' commitment and willingness to absorb initial security investments and long-term operating costs, an expert emphasized. Industry and state and local government associations also commented that federal funding should accompany any federal security standards; otherwise, mandatory standards will be considered unfunded mandates that the industry and state and local governments will have to absorb.

Invest in research and development for transportation security.
According to most transportation security experts and associations we contacted, investing in research and development is an appropriate role for the federal government, because the products of research and development endeavors would likely benefit the entire transportation system, not just individual modes or operators. TSA is actively engaged in research and development projects at its Transportation Security Laboratory in Atlantic City, New Jersey, such as the development of next-generation explosives detection systems for baggage, hardening of aircraft and cargo/baggage containers, biometrics and other access control methods, and human factors initiatives to identify methods to improve screener performance. However, TSA noted that continued adequate funding for research and development is paramount in order for TSA to be able to meet security demands with up-to-date and reliable technology.

Provide timely intelligence information and related guidance. Representatives from numerous associations commented that the federal government needs to provide timely, localized, actionable intelligence information. They said that general threat warnings are not helpful. Rather, transportation operators want more specific intelligence information so that they can understand the true nature of a potential threat and implement appropriate security measures. Without more localized and actionable intelligence, stakeholders said they run the risk of wasting resources on unneeded security measures or not providing an adequate level of security. Moreover, local government officials often are not allowed to receive specific intelligence information because they do not have appropriate federal security clearances. Also, there is little federal guidance on how local authorities should respond to a specific threat or general threat warnings. For example, San Francisco police were stationed at the Golden Gate Bridge to respond to the elevated national threat condition.
However, without information about the nature of the threat to San Francisco's large transportation infrastructure or clear federal expectations for a response, it is difficult to judge whether actions like this are the most effective use of police protection, according to representatives from a local government association.

Securing the transportation system is fraught with challenges. Despite these challenges, transportation stakeholders have worked to strengthen security since September 11. However, more work is needed. It will take the collective effort of all transportation stakeholders to meet the continuing challenges and enhance the security of the transportation system. During TSA's first year of existence, it met a number of challenges, including successfully meeting many congressional deadlines for aviation security. With the immediate crisis of meeting these deadlines behind it, TSA can now examine the security of the entire transportation system. As TSA becomes more active in securing the maritime and land transportation modes, it will become even more important that the roles of TSA and DOT modal administrations are clearly defined. Lack of clearly defined roles among the federal entities could lead to duplication and confusion. More importantly, it could hamper the transportation sector's ability to prepare for and respond to attacks. Therefore, in our report, we recommended that the Secretary of Homeland Security and the Secretary of Transportation develop mechanisms, such as a memorandum of agreement, to clearly define the roles and responsibilities of TSA and DOT in transportation security and communicate this information to stakeholders.

This concludes my prepared statement. I would be pleased to respond to any questions you or other Members of the Committee may have. For information about this testimony, please contact Peter Guerrero, Director, Physical Infrastructure Issues, at (202) 512-2834.
Individuals making key contributions to this testimony included Cathleen Berrick, Steven Calvo, Nikki Clowers, Michelle Dresben, Susan Fleming, Libby Halperin, David Hooper, Hiroshi Ishikawa, and Ray Sendejas. Research and Special Programs Administration (Office of Hazardous Materials Safety) Established regulations for shippers and transporters of certain hazardous materials to develop and implement security plans and to require security awareness training for hazmat employees. Developed hazardous materials transportation security awareness training for law enforcement, the industry, and the hazmat community. Published security advisory, which identifies measures that could enhance the security of the transport of hazardous materials. Investigated the security risks associated with placarding hazardous materials, including whether removing placards from certain shipments would improve shipment security and whether alternative methods for communicating safety hazards could be deployed. Established rule for strengthening cockpit doors on commercial aircraft. Issued guidance to flight school operators for additional security measures. Assisted Department of Justice in increasing background check requirements for foreign nationals seeking pilot certificates. Increased access restrictions at air traffic control facilities. Developed computer security strategy. Provided vulnerability assessment and emergency preparedness workshops. Developed and prioritized list of highway security research and development projects. Convened blue ribbon panel on bridge and tunnel vulnerabilities. Activated and deployed port security units to help support local port security patrols in high threat areas. Boarded and inspected ships to search for threats and confirmed the identity of those aboard. Conducted initial assessments of the nation’s ports to identify vessel types and facilities that pose a high risk of being involved in a transportation security incident. 
Established a new centralized National Vessel Movement Center to track the movement of all foreign-flagged vessels entering U.S. ports of call. Established new guidelines for developing security plans and implementing security measures for passenger vessels and passenger terminals. Used the pollution and hazardous materials expertise of the Coast Guard’s National Strike Force to prepare for and respond to bioterrorism and weapons of mass destruction. Increased port security and terrorism emphasis at National Port Readiness Network Port Readiness Exercises. Provided port security training and developed standards and curriculum to educate and train maritime security personnel. Increased access restrictions and established new security procedures for the Ready Reserve Force. Provided merchant mariner background checks for Ready Reserve Force and sealift vessels in support of Department of Defense and Coast Guard requirements. Provided merchant mariner force protection training. Conducted 31,000 on-site security sensitivity visits for hazardous materials carriers; made recommendations after visits. Initiated a field operational test to evaluate different safety and security technologies and procedures, and identify the most cost-effective means for protecting different types of hazardous cargo for security purposes. Provided free training on trucks and terrorism to law enforcement officials and industry representatives. Conducted threat assessment of the hazardous materials industry. Developed three courses for drivers on security-related information, including different threats, how to deal with packages, and how to respond in the case of an emergency. Research and Special Programs Administration (Office of Pipeline Safety) Developed contact list of operators who own critical systems. Convened blue ribbon panel with operators, state regulators, and unions to develop a better understanding of the pipeline system and coordinate efforts of the stakeholders. 
Worked with TSA to develop inspection protocols to use for pipeline operator security inspections. The Office of Pipeline Safety and TSA have begun the inspection of major operators. Created an e-mail network of pipeline operators and a call-in telephone number that operators can use to obtain information. Directed pipeline operators to identify critical facilities and develop security plans for critical facilities that address deterrence, preparedness, and rapid response and recovery from attacks. Worked with industry to develop risk-based security guidance, which is tied to national threat levels and includes voluntary, recommended countermeasures. Shared threat information with railroads and rail labor. Reviewed Association of American Railroads’ and Amtrak’s security plans. Assisted commuter railroads with their security plans. Provided funding for security assessments of three commuter railroads, which were included in FTA’s assessment efforts. Reached out to international community for lessons learned in rail security. Awarded $3.4 million in grants to over 80 transit agencies for emergency response drills. Offered free security training to transit agencies. Conducted security assessments at the 36 largest transit agencies. Provided technical assistance on security and emergency plans and emergency response drills to 19 transit agencies, with a goal of reaching 60. Increased funding for security research and development efforts. The U.S. Coast Guard was transferred to DHS in the Homeland Security Act of 2002 (P.L. No. 107-296, 116 Stat. 2135 (2002)). A risk management approach encompasses three key elements—a threat assessment, vulnerability assessment, and criticality assessment. In particular, these three elements provide the following information: A threat assessment identifies and evaluates potential threats on the basis of such factors as capabilities, intentions, and past activities. 
This assessment represents a systematic approach to identifying potential threats before they materialize. However, even if updated often, a threat assessment might not adequately capture some emerging threats. The risk management approach, therefore, uses vulnerability and criticality assessments as additional input to the decision-making process. A vulnerability assessment identifies weaknesses that may be exploited by identified threats and suggests options to address those weaknesses. A criticality assessment evaluates and prioritizes assets and functions in terms of specific criteria, such as their importance to public safety and the economy. The assessment provides a basis for identifying which structures or processes are relatively more important to protect from attack. Thus, it helps managers determine operational requirements and target resources to the highest priorities while reducing the potential for targeting resources to lower priorities. Key Questions: 1) What are the status and associated costs of TSA efforts to acquire, install, and operate explosive detection equipment (Electronic Trace Detection Technology and Explosive Detection Systems) to screen all checked baggage by December 31, 2003? 2) What are the benefits and tradeoffs—including costs, operations, and performance—of using alternative explosive detection technologies currently available for baggage screening? Key Questions: 1) How have security concerns and measures changed at general aviation airports since September 11, 2001? 2) What steps has the Transportation Security Administration taken to improve general aviation security? Key Questions: (1) What are the procedures for conducting background and security checks for pilots of small banner-towing aircraft requesting waivers to perform stadium overflights? (2) To what extent were these procedures followed in conducting required background and security checks since 9/11? 
(3) How effective were these procedures in reducing risks to public safety? Key Questions: (1) What are the levels of effort for USCG’s various missions? (2) What is USCG’s progress in developing a strategic plan for setting goals for all of its various missions? (3) What is USCG’s mission performance as compared to its performance and strategic plans? Key Questions: 1) How will the CAPPS-II system function and what data will be needed to make the system operationally effective? 2) What safeguards will be put in place to protect the traveling public’s privacy? 3) What systems and measures are in place to determine whether CAPPS-II will result in improved national security? 4) What impact will CAPPS-II have on the traveling public and airline industry in terms of costs, delays, risks, and hassle, etc.? Key Questions: 1) What efforts have been taken or planned to ensure passenger screeners comply with federal standards and other criteria, to include efforts to train, equip, and supervise passenger screeners? 2) What methods does TSA use to test screener performance, and what have been the results of these tests? 3) How have the results of tests of TSA passenger screeners compared to the results achieved by screeners prior to 9/11 and at the 5 pilot program airports? 4) What actions are TSA taking to remedy performance concerns? Key Questions: (1) To what extent does TSA follow applicable acquisition laws and policies, including ensuring adequate competition? (2) How well does TSA’s organizational structure facilitate effective, efficient procurement? (3) How does TSA ensure that its acquisition workforce is equipped to award and oversee contracts? (4) How well do TSA’s policies and processes ensure that it receives the supplies and services it needs on time and at reasonable cost? Key Questions: (1) What is the status of TSA’s efforts to implement section 106 of the Act requiring improved airport perimeter access security? 
(2) What is the status of TSA’s efforts to implement section 136 requiring assessment and deployment of commercially available security practices and technologies? (3) What is the status of TSA’s efforts to implement section 138 requiring background investigations for TSA and other airport employees? Key Questions: 1) How effectively is the port vulnerability assessment process being implemented, and what actions are being taken to address deficiencies identified? 2) What progress is being made to develop port, vessel, and facility security plans? 3) Does the Coast Guard have sufficient resources and an action plan to ensure the plans will be completed, reviewed, and approved in time to meet statutory deadlines? 4) What will it cost stakeholders to comply? Key Questions: 1) What is the nature and extent of the threat from MANPADs? 2) How effective are U.S. controls on the use of exported MANPADs? 3) How do multilateral efforts attempt to stem MANPAD proliferation? 4) What types of countermeasures are available to minimize this threat and at what cost? Key Questions: (1) What is the nature, scope, and operational framework of the designee program? (2) What are the identified strengths and weaknesses of the program? (3) What is the potential for FAA’s ODA proposal and other stakeholders’ alternatives to address the identified program weaknesses? Key Questions: (1) How has Customs developed the Automated Targeting System (ATS) and the new anti-terrorism rules? (2) How does Customs use ATS to identify containerized cargo as “high risk” for screening and inspection to detect cargo that might contain weapons of mass destruction (WMD)? (3) To what extent is ATS implemented at seaports, including impact and challenges involved? (4) What is Customs’ plan for assessing system implementation and performance? Key Questions: 1) What are the current and emerging national challenges to freight mobility and what proposals have been put forth to address these issues? 
2) To what extent do these current and emerging challenges exist at container ports and surrounding areas and to what extent do the proposals appear to have applicability to these locations? Key Questions: (1) What are states’ policies and practices for verifying the identity of driver’s license/ID card applicants and how might they more effectively use SSNs or other tools to verify identity? (2) How does SSA assist states in verifying SSNs for driver’s license/ID card applicants and how can SSA improve the verification service it provides? Key Questions: (1) What are the status, plans, and technical and programmatic risks associated with the National Distress and Response System (NDRS) Modernization Project? (2) How is the Coast Guard addressing concerns with the new NDRS, such as communication coverage gaps and the inability to pinpoint distressed boaters? (3) How will Coast Guard’s new homeland security role affect the NDRS project? Key Questions: (1) What is the status of Customs’ plan to install radiation detection equipment at U.S. border crossings? (2) What is the basis for the plan’s time frame? (3) What is Customs’ technical capability to implement the plan? (4) How well is Customs coordinating with other agencies in the area of radiation detection? (5) What are the results of Customs’ evaluations of radiation detection equipment and how are the evaluations being used? Key Questions: (1) Was the $5 billion used only to compensate major air carriers for their uninsured losses incurred as a result of the terrorist attacks? (2) Were carriers reimbursed, per the act, only for increases in insurance premiums resulting from the attacks? Key Questions: (1) What is the budget profile for the Federal Aviation Administration’s and the Transportation Security Administration’s (TSA’s) aviation security research and development (R&D) program? (2) How effective is TSA’s strategy for determining which aviation security technologies to research and develop? 
(3) To what extent do stakeholders believe that TSA is researching and developing the most promising aviation security technologies? Key Questions: (1) How has the FAM program evolved, in terms of recruiting, training, retention, and operations since the transfer of program management to TSA? (2) To what extent has TSA implemented the necessary internal controls to meet the human capital and operational challenges of the FAM program? (3) To what extent has TSA developed plans and initiatives to accommodate future FAM program sustainability, growth and maturation? Transportation Security: Federal Action Needed to Help Address Security Challenges, GAO-03-843 (Washington, D.C.: June 30, 2003). Transportation Security Research: Coordination Needed in Selecting and Implementing Infrastructure Vulnerability Assessments, GAO-03-502 (Washington, D.C.: May 1, 2003). Rail Safety and Security: Some Actions Already Taken to Enhance Rail Security, but Risk-based Plan Needed, GAO-03-435 (Washington, D.C.: April 30, 2003). Coast Guard: Challenges during the Transition to the Department of Homeland Security, GAO-03-594T (Washington, D.C.: April 1, 2003). Transportation Security: Post-September 11th Initiatives and Long-Term Challenges, GAO-03-616T (Washington, D.C.: April 1, 2003). Aviation Security: Measures Needed to Improve Security of Pilot Certification Process, GAO-03-248NI (Washington, D.C.: February 3, 2003). (Not for Public Dissemination) Major Management Challenges and Program Risks: Department of Transportation, GAO-03-108 (Washington, D.C.: January 1, 2003). High Risk Series: Protecting Information Systems Supporting the Federal Government and the Nation’s Critical Infrastructure, GAO-03- 121 (Washington, D.C.: January 1, 2003). Aviation Safety: Undeclared Air Shipments of Dangerous Goods and DOT’s Enforcement Approach, GAO-03-22 (Washington, D.C.: January 10, 2003). 
Aviation Security: Vulnerabilities and Potential Improvements for the Air Cargo System, GAO-03-344 (Washington, D.C.: December 20, 2002). Mass Transit: Federal Action Could Help Transit Agencies Address Security Challenges, GAO-03-263 (Washington, D.C.: December 13, 2002). Aviation Security: Registered Traveler Program Policy and Implementation Issues, GAO-03-253 (Washington, D.C.: November 22, 2002). Computer Security: Progress Made, But Critical Federal Operations and Assets Remain at Risk, GAO-03-303T (Washington, D.C.: November 19, 2002). Container Security: Current Efforts to Detect Nuclear Materials, New Initiatives, and Challenges, GAO-03-297T (Washington, D.C.: November 18, 2002). Coast Guard: Strategy Needed for Setting and Monitoring Levels of Effort for All Missions, GAO-03-155 (Washington, D.C.: November 12, 2002). Mass Transit: Challenges in Securing Transit Systems, GAO-02-1075T (Washington, D.C.: September 18, 2002). Pipeline Safety and Security: Improved Workforce Planning and Communication Needed, GAO-02-785 (Washington, D.C.: August 26, 2002). Port Security: Nation Faces Formidable Challenges in Making New Initiatives Successful, GAO-02-993T (Washington, D.C.: August 5, 2002). Aviation Security: Transportation Security Administration Faces Immediate and Long-Term Challenges, GAO-02-971T (Washington, D.C.: July 25, 2002). Critical Infrastructure Protection: Significant Challenges Need to Be Addressed, GAO-02-961T (Washington, D.C.: July 24, 2002). Combating Terrorism: Preliminary Observations on Weaknesses in Force Protection for DOD Deployments Through Domestic Seaports, GAO-02-955TNI (Washington, D.C.: July 23, 2002). (Not for Public Dissemination) Information Concerning the Arming of Commercial Pilots, GAO-02-822R (Washington, D.C.: June 28, 2002). Aviation Security: Deployment and Capabilities of Explosive Detection Equipment, GAO-02-713C (Washington, D.C.: June 20, 2002). (Classified) 
Coast Guard: Budget and Management Challenges for 2003 and Beyond, GAO-02-538T (Washington, D.C.: March 19, 2002). Aviation Security: Information on Vulnerabilities in the Nation’s Air Transportation System, GAO-01-1164T (Washington, D.C.: September 26, 2001). (Not for Public Dissemination) Aviation Security: Information on the Nation’s Air Transportation System Vulnerabilities, GAO-01-1174T (Washington, D.C.: September 26, 2001). (Not for Public Dissemination) Aviation Security: Vulnerabilities in, and Alternatives for, Preboard Screening Security Operations, GAO-01-1171T (Washington, D.C.: September 25, 2001). Aviation Security: Weaknesses in Airport Security and Options for Assigning Screening Responsibilities, GAO-01-1165T (Washington, D.C.: September 21, 2001). Aviation Security: Terrorist Acts Illustrate Severe Weaknesses in Aviation Security, GAO-01-1166T (Washington, D.C.: September 20, 2001). Aviation Security: Terrorist Acts Demonstrate Urgent Need to Improve Security at the Nation’s Airports, GAO-01-1162T (Washington, D.C.: September 20, 2001). Homeland Security: Information Sharing Responsibilities, Challenges, and Key Management Issues, GAO-03-715T (Washington, D.C.: May 8, 2003). Transportation Security Administration: Actions and Plans to Build a Results-Oriented Culture, GAO-03-190 (Washington, D.C.: January 17, 2003). Homeland Security: Management Challenges Facing Federal Leadership, GAO-03-260 (Washington, D.C.: December 20, 2002). Homeland Security: Information Technology Funding and Associated Management Issues, GAO-03-250 (Washington, D.C.: December 13, 2002). Homeland Security: Information Sharing Activities Face Continued Management Challenges, GAO-02-1122T (Washington, D.C.: October 1, 2002). National Preparedness: Technology and Information Sharing Challenges, GAO-02-1048R (Washington, D.C.: August 30, 2002). 
Homeland Security: Effective Intergovernmental Coordination Is Key to Success, GAO-02-1013T (Washington, D.C.: August 23, 2002). Critical Infrastructure Protection: Federal Efforts Require a More Coordinated and Comprehensive Approach for Protecting Information Systems, GAO-02-474 (Washington, D.C.: July 15, 2002). Critical Infrastructure Protection: Significant Homeland Security Challenges Need to Be Addressed, GAO-02-918T (Washington, D.C.: July 9, 2002). Homeland Security: Intergovernmental Coordination and Partnership Will Be Critical to Success, GAO-02-901T (Washington, D.C.: July 3, 2002). Homeland Security: New Department Could Improve Coordination but May Complicate Priority Setting, GAO-02-893T (Washington, D.C.: June 28, 2002). National Preparedness: Integrating New and Existing Technology and Information Sharing into an Effective Homeland Security Strategy, GAO-02-811T (Washington, D.C.: June 7, 2002). Homeland Security: Responsibility and Accountability for Achieving National Goals, GAO-02-627T (Washington, D.C.: April 11, 2002). National Preparedness: Integration of Federal, State, Local, and Private Sector Efforts is Critical to an Effective National Strategy for Homeland Security, GAO-02-621T (Washington, D.C.: April 11, 2002). Combating Terrorism: Intergovernmental Cooperation in the Development of a National Strategy to Enhance State and Local Preparedness, GAO-02-550T (Washington, D.C.: April 2, 2002). Combating Terrorism: Enhancing Partnerships Through a National Preparedness Strategy, GAO-02-549T (Washington, D.C.: March 28, 2002). Combating Terrorism: Critical Components of a National Strategy to Enhance State and Local Preparedness, GAO-02-548T (Washington, D.C.: March 25, 2002). Combating Terrorism: Intergovernmental Partnership in a National Strategy to Enhance State and Local Preparedness, GAO-02-547T (Washington, D.C.: March 22, 2002). 
Homeland Security: Progress Made; More Direction and Partnership Sought, GAO-02-490T (Washington, D.C.: March 12, 2002). Combating Terrorism: Key Aspects of a National Strategy to Enhance State and Local Preparedness, GAO-02-473T (Washington, D.C.: March 1, 2002). Homeland Security: Challenges and Strategies in Addressing Short- and Long-Term National Needs, GAO-02-160T (Washington, D.C.: November 7, 2001). Homeland Security: A Risk Management Approach Can Guide Preparedness Efforts, GAO-02-208T (Washington, D.C.: October 31, 2001). Combating Terrorism: Considerations for Investing Resources in Chemical and Biological Preparedness, GAO-02-162T (Washington, D.C.: October 17, 2001). Information Sharing: Practices That Can Benefit Critical Infrastructure Protection, GAO-02-24 (Washington, D.C.: October 15, 2001). Homeland Security: Key Elements of a Risk Management Approach, GAO-02-150T (Washington, D.C.: October 12, 2001). Chemical and Biological Defense: Improved Risk Assessment and Inventory Management Are Needed, GAO-01-667 (Washington, D.C.: September 28, 2001). Critical Infrastructure Protection: Significant Challenges in Safeguarding Government and Privately Controlled Systems from Computer-Based Attacks, GAO-01-1168T (Washington, D.C.: September 26, 2001). Homeland Security: A Framework for Addressing the Nation’s Efforts, GAO-01-1158T (Washington, D.C.: September 21, 2001). Combating Terrorism: Selected Challenges and Related Recommendations, GAO-01-822 (Washington, D.C.: September 20, 2001). This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The economic well-being of the United States is dependent on the expeditious flow of people and goods through the transportation system. The attacks on September 11, 2001, illustrate the threats to and vulnerabilities of the transportation system. Prior to September 11, the Department of Transportation (DOT) had primary responsibility for the security of the transportation system. In the wake of September 11, Congress created the Transportation Security Administration (TSA) within DOT and gave it primary responsibility for the security of all modes of transportation. TSA was recently transferred to the new Department of Homeland Security (DHS). GAO was asked to examine the challenges in securing the transportation system and the federal role and actions in transportation security. Securing the nation's transportation system is fraught with challenges. The transportation system crisscrosses the nation and extends beyond our borders to move millions of passengers and tons of freight each day. The extensiveness of the system as well as the sheer volume of passengers and freight moved makes it both an attractive target and difficult to secure. Addressing the security concerns of the transportation system is further complicated by the number of transportation stakeholders that are involved in security decisions, including government agencies at the federal, state, and local levels and thousands of private sector companies. Further exacerbating these challenges are the financial pressures confronting transportation stakeholders. For example, the sluggish economy has weakened the transportation industry's financial condition by decreasing ridership and revenues. The federal government has provided additional funding for transportation security since September 11, but demand has far outstripped the additional amounts made available. It will take the collective effort of all transportation stakeholders to meet existing and future transportation challenges. 
Since September 11, transportation stakeholders have acted to enhance security. At the federal level, TSA primarily focused on meeting aviation security deadlines during its first year of existence and DOT launched a variety of security initiatives to enhance the other modes of transportation. For example, the Federal Transit Administration provided grants for emergency drills and conducted security assessments at the largest transit agencies, among other things. TSA has recently focused more on the security of the maritime and land transportation modes and is planning to issue security standards for all modes of transportation. DOT is also continuing its security efforts. However, the roles and responsibilities of TSA and DOT in securing the transportation system have not been clearly defined, which creates the potential for overlap, duplication, and confusion as both entities move forward with their security efforts.
The federal government’s increasing demand for IT led to a dramatic rise in the number of federal data centers and a corresponding increase in operational costs. According to OMB, the federal government had 432 data centers in 1998 and more than 1,100 in 2009. Operating such a large number of centers has been and continues to be a significant cost to the federal government, including costs for hardware, software, real estate, and cooling. For example, in 2007, the Environmental Protection Agency (EPA) estimated that the electricity cost to operate federal servers and data centers across the government was about $450 million annually. According to the Department of Energy (Energy), data center spaces can consume 100 to 200 times more electricity than a standard office space. In 2009, OMB reported that server utilization rates as low as 5 percent across the federal government’s estimated 150,000 servers were a driving factor in the need to establish a coordinated, government-wide effort to improve the efficiency, performance, and environmental footprint of federal data center activities. Concerned about the size of the federal data center inventory and the potential to improve the efficiency, performance, and the environmental footprint of federal data center activities, OMB, under the direction of the Federal CIO, established FDCCI in February 2010. This initiative’s four high-level goals are to promote the use of “green IT” by reducing the overall energy and real estate footprint of government data centers; reduce the cost of data center hardware, software, and operations; increase the overall IT security posture of the government; and shift IT investments to more efficient computing platforms and technologies. As part of FDCCI, OMB required the 24 agencies to identify a data center consolidation program manager to lead the agency’s consolidation efforts. 
In addition, agencies were required to submit an asset inventory baseline and other documents that would result in a plan for consolidating their data centers. The asset inventory baseline was to contain detailed information on each data center and identify the consolidation approach to be taken for each one. It would serve as the foundation for developing the final data center consolidation plan. The data center consolidation plan would serve as a technical road map and approach for achieving the targets for infrastructure utilization, energy efficiency, and cost efficiency and was to be incorporated into the agency’s fiscal year 2012 budget. In October 2010, OMB reported that all of the agencies had submitted an inventory and plan. In addition, in a series of memorandums, OMB described plans to monitor agencies’ consolidation activities on an ongoing basis. Starting in fiscal year 2011, OMB required agencies to provide an updated data center asset inventory at the end of every third quarter and an updated consolidation plan (including any missing elements) at the end of every fourth quarter. Further, starting in fiscal year 2012, OMB required agencies to provide a consolidation progress report at the end of every quarter. This progress information has subsequently been made available on the federal website dedicated to providing the public with access to datasets developed by federal agencies, http://data.gov. Pursuant to requirements of the Government Performance and Results Act Modernization Act of 2010 (GPRAMA), in February 2012, OMB designated data center consolidation as 1 of its 14 priority goals (now known as cross-agency priority goals) because of its importance to improving management across the federal government. These goals are designed to cover areas where increased cross-agency collaboration is needed to improve progress towards the achievement of goals shared by multiple contributing agencies. 
In March 2014, OMB announced the creation of a new set of goals in its submission of the President’s fiscal year 2015 budget, which did not include data center consolidation. According to OMB, although the updated set of goals did not include data center consolidation because the goal had reached the end of its cycle time frame under GPRAMA, the effort will remain an administration priority. While OMB is primarily responsible for FDCCI, the agency designated the Federal CIO Council—the principal interagency forum to improve IT-related practices across the federal government—to lead the effort. In addition, OMB originally identified two additional organizations to assist in managing and overseeing the initiative: the GSA FDCCI Program Management Office, which supports OMB in planning, execution, management, and communications, and the Task Force, which is comprised of the data center consolidation program managers from each agency. According to its charter, the Task Force is critical to supporting collaboration across agencies, including identifying and disseminating key information, solutions, and processes that will help agencies in their consolidation efforts. However, in December 2013, GSA and Task Force officials stated that GSA’s Program Management Office would no longer be supporting FDCCI and its responsibilities were being transitioned to OMB and the Task Force. OMB’s expanded October 2011 definition states that “…a data center is…a closet, room, floor or building for the storage, management, and dissemination of data and information and computer systems and associated components, such as database, application, and storage systems and data stores [excluding facilities exclusively devoted to communications and network equipment (e.g., telephone exchanges and telecommunications rooms)]. 
A data center generally includes redundant or backup power supplies, redundant data communications connections, environmental controls…and special security devices housed in leased, owned, collocated, or stand-alone facilities.” Under the first definition, OMB identified 2,094 data centers in July 2010. Using the new definition from October 2011, OMB estimated that there were a total of 3,133 federal data centers in December 2011, and its goal was to consolidate approximately 40 percent, or 1,253 data centers, for a savings of approximately $3 billion by the end of 2015. OMB, Implementation Guidance for the Federal Data Center Consolidation Initiative (Washington, D.C.: Mar. 19, 2012). Since 2011, the number of federal data centers reported by agencies has continued to grow. In July 2013, we testified that 22 of the 24 FDCCI agencies had collectively reported 6,836 data centers in their inventories—an increase of about 3,700 compared to OMB’s previous estimate from December 2011. According to the Federal CIO, the increase in data centers was primarily due to the expanded definition of a data center and improved inventory reporting by the agencies. More recently, our analysis of agencies’ May 2014 data center inventories indicated that agencies collectively reported a total of 9,658 data centers. Of the total reported data centers, 242 were reported by agencies as “core” data centers—meaning that they are primary consolidation points for agency enterprise IT services and not planned for closure—while the remaining 9,416 were reported as “non-core.” OMB’s March 2013 memorandum states that the goal is for agencies to close 40 percent of the total non-core data centers, or 3,766 data centers based on the May 2014 inventory data, by the end of fiscal year 2015. Since 2011, agencies have reported their data center closures and planned closures on http://data.gov. 
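The consolidation targets cited above follow directly from the reported inventory counts. As a quick sketch of the arithmetic (all figures taken from the text; this is an illustrative check, not part of OMB's methodology):

```python
# Arithmetic check of the consolidation targets cited in the text.

# December 2011 baseline: 3,133 data centers, 40 percent targeted for closure.
baseline_2011 = 3133
target_2011 = round(0.40 * baseline_2011)  # approximately 1,253 data centers

# May 2014 inventory: 242 core plus 9,416 non-core centers.
core, non_core = 242, 9416
total_2014 = core + non_core  # 9,658 total reported data centers

# OMB's March 2013 goal: close 40 percent of non-core centers by fiscal year 2015.
closure_target = round(0.40 * non_core)  # approximately 3,766 data centers

print(target_2011, total_2014, closure_target)  # prints: 1253 9658 3766
```

Rounding to the nearest whole data center reproduces both of the figures OMB reported (1,253 and 3,766).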
As of May 2014, agencies collectively reported that they had closed a total of 976 data centers and were planning to close an additional 2,679 data centers—for a total of 3,655—by the end of September 2015. See figure 1 for a summary of the total number of federal data centers reported in agencies' inventories and closures as reported by agencies on http://data.gov over time, and table 1 for a depiction of the total number of data centers (including a breakdown of core and non-core centers) reported in agencies' May 2014 inventory submissions, as well as reported and planned closures. In March 2012, OMB launched the PortfolioStat initiative, which requires agencies to conduct an annual agency-wide IT portfolio review to, among other things, reduce commodity IT spending and demonstrate how their IT investments align with agency missions and business functions. PortfolioStat is designed to assist agencies in assessing the current maturity of their IT portfolio management processes, making decisions on eliminating duplication, and moving to shared solutions in order to maximize the return on IT investments across the portfolio. In September 2012, the Federal CIO wrote in an e-mail to agencies that OMB was planning to integrate FDCCI with the PortfolioStat initiative to allow agencies to focus on an enterprise-wide approach to addressing all commodity IT, including data centers, in an integrated, comprehensive plan. The e-mail stated that agencies should continue to focus on optimizing those data centers that are essential to delivering taxpayer services, while continuing to close those that are duplicative. In addition, the e-mail directed agencies to delay their October 1, 2012, submissions of updated consolidation plans until further guidance could be provided. However, agencies were still to report quarterly updates on their data center closures. In March 2013, OMB issued a memorandum documenting the integration of FDCCI with PortfolioStat.
Among other things, the memorandum discussed OMB's efforts to further the PortfolioStat initiative by incorporating several changes, such as consolidating previously collected IT-related plans, reports, and data submissions. The guidance also stated that, to more effectively measure the efficiency of an agency's data center assets, agencies would also be measured by the extent to which their data centers are optimized for total cost of ownership by incorporating metrics for data center energy, facility, labor, and storage, among other things. OMB indicated in its memorandum that these metrics would be developed by the Task Force. This March 2013 memorandum also established new agency reporting requirements and related time frames. Specifically, agencies were no longer required to submit the data center consolidation plans previously required under FDCCI. Rather, agencies were to submit information to OMB via three primary means—an information resources management strategic plan, an enterprise road map, and an integrated data collection channel. In addition, agencies were still required to update their data center inventories yearly and report quarterly on http://data.gov regarding their consolidation progress. More recently, in May 2014, OMB issued a memorandum updating its PortfolioStat guidance for fiscal year 2014. As in past PortfolioStat guidance, the memorandum discussed the importance of PortfolioStat sessions—data-driven reviews of agency portfolio management among the Federal CIO, agency deputy secretaries, and other senior agency officials—as a means to continue to drive cost savings. OMB's guidance also reinforced the need for agencies to continue to consolidate their non-core data centers while optimizing their core data centers using metrics established by the Task Force and documented in OMB's memorandum. These metrics are discussed in more detail later in this report. We have previously reported on OMB's efforts to consolidate federal data centers.
In March 2011, we identified data center consolidation as one of 81 areas within the federal government with opportunities to reduce potential duplication, overlap, and fragmentation. In this regard, we reported on the status of FDCCI and noted that data center consolidation made sense economically and was a way to achieve more efficient IT operations, but that challenges existed. For example, agencies reported facing challenges in ensuring the accuracy of their inventories and plans, providing upfront funding for the consolidation effort before any cost savings accrue, and overcoming cultural resistance to such major organizational changes, among other things. In July 2011, we issued a report on the status of FDCCI and found that only 1 of the 24 agencies had submitted a complete inventory and no agency had submitted complete plans. Further, OMB had not required agencies to document the steps they had taken, if any, to verify the inventory data. We concluded that until these inventories and plans were complete, agencies would not be able to implement their consolidation activities and realize expected cost savings. Moreover, without an understanding of the validity of agencies' consolidation data, OMB could not be assured that agencies were providing a sound baseline for estimating consolidation savings and measuring progress against those goals. Accordingly, we made several recommendations to OMB, including that the Federal CIO require agencies, when updating their data center inventories, to state what actions were taken to verify the information in the inventory and to identify any associated limitations on the data, and to complete the missing elements in their inventories and consolidation plans. OMB generally agreed with our report and has since taken actions to address our recommendations.
For example, in July 2011, OMB required agency CIOs to submit a letter that identified steps taken to verify their data center inventory information and attested to the completeness of their consolidation plans. In addition, in March 2012, OMB required that all agencies complete all elements missing from their consolidation plans by the end of the fourth quarter of every fiscal year. (GAO-11-565). Additionally, in July 2012, we updated our review of FDCCI's status and found that, while agencies' 2011 inventories and plans had improved as compared to their 2010 submissions, only 3 agencies had submitted a complete inventory and only 1 agency had submitted a complete consolidation plan. In addition, we noted that 3 agencies had submitted their inventory using an outdated format, in part because OMB had not publicly posted its revised guidance. Notwithstanding these weaknesses, we noted that 19 agencies reported anticipating about $2.4 billion in cost savings between 2011 and 2015. We also reported that none of five selected agencies had a master program schedule or cost-benefit analysis that was fully consistent with best practices. To assist agencies with their data center consolidation efforts, OMB had sponsored the development of an FDCCI total cost of ownership model that was intended to help agencies refine their estimated costs for consolidation; however, agencies were not required to use the cost model as part of their cost estimating efforts. Accordingly, we reiterated our prior recommendation that agencies complete missing plan and inventory elements and made new recommendations to OMB to publicly post guidance updates on the FDCCI website and to require agencies to use its cost model. OMB generally agreed with our recommendations and has since taken steps to address them. More specifically, OMB posted its 2012 guidance for updating data center inventories and plans, as well as guidance for reporting consolidation progress, to the FDCCI public website.
Further, the website has been updated to provide prior guidance documents and OMB memorandums. In addition, OMB's 2012 consolidation plan guidance required agencies to use the cost model as they developed their 2014 budget requests. (OMB refers to total cost of ownership as all associated data center-related activities and costs without regard to ownership, project association, or funding line.) In a subsequent report, we found that OMB had not measured agencies' progress against key performance measures, including its cost savings goal, or ensured that other key oversight responsibilities, such as approving agencies' consolidation plans on the basis of their completeness, were being fully executed. We reported that OMB had not determined agencies' progress against its cost savings goal because, according to OMB staff, the agency had not determined a consistent and repeatable method for tracking cost savings, and that the weaknesses in oversight were due, in part, to OMB not ensuring that assigned responsibilities were being executed. Accordingly, we recommended that OMB track and report on key performance measures, including cost savings, and improve the execution of important oversight responsibilities. OMB generally agreed with our recommendations and has since taken some initial actions to implement them, including tracking and reporting on data center consolidation cost savings on a quarterly basis. Finally, between May 2013 and June 2014, we testified on the status of FDCCI. Notably, in July 2013, we testified that, while agencies continued to make progress by closing an additional 64 data centers compared to the total number reported through the end of December 2012, the number of federal data centers had grown significantly since OMB's December 2011 estimate of approximately 3,133 data centers.
Specifically, 22 of the 24 FDCCI agencies had collectively reported 6,836 data centers in their inventories—an increase of about 3,700 as compared to OMB's previous estimate from December 2011. We concluded that it would be important for OMB to be transparent on agencies' progress against its performance metrics going forward. For FDCCI, OMB originally established a goal of achieving $3 billion in cost savings by the end of 2015. Pursuant to this goal, agencies have reported achieving more than a billion dollars in savings and avoidances through fiscal year 2013 and are planning a total of about $3.3 billion in savings and avoidances by the end of fiscal year 2015—an amount that is approximately $300 million higher than OMB's goal. Between fiscal years 2011 and 2017, agencies reported planning approximately $5.3 billion in total savings and avoidances. However, planned cost savings may be higher because six agencies with as many as 67 data center closures each have been limited in their ability to fully report their savings. In addition, slightly more than half of the agencies with planned cost savings are underreporting their fiscal years 2012 through 2015 figures to OMB by approximately $2.2 billion. While several agencies cited internal agency communication issues as the reason for not reporting savings to OMB, other agencies were unable to provide a reason. Until agencies fully report their savings, the total planned cost savings and avoidances of $5.3 billion will be understated. Since launching FDCCI in 2010, achieving cost savings has been a primary goal of the initiative. As previously discussed, one of the original high-level objectives was to reduce the costs of data center hardware, software, and operations. OMB subsequently expanded on this goal and, in February 2012, stated that data center consolidation had the potential to achieve $3 billion in savings by the end of 2015.
Pursuant to these goals, OMB required agencies to describe year-by-year investments and cost savings in their 2010 and 2011 consolidation plans and, beginning in August 2013, has required agencies to identify and report all cost savings and avoidances related to data center consolidation, among other areas, to OMB as part of a quarterly data collection process known as the integrated data collection. Most of the 24 agencies are achieving cost savings or avoidances from their data center consolidation efforts. Specifically, between fiscal years 2011 and 2013, 19 agencies collectively reported achieving an estimated $1.1 billion in cost savings and avoidances. Notably, Defense, the Department of Homeland Security (DHS), and the Department of the Treasury (Treasury) account for approximately $850 million (or 74 percent) of the reported estimated savings through fiscal year 2013. The remaining 5 agencies that did not report savings between fiscal years 2011 and 2013 cited varied reasons for not being able to do so, which included difficulties in determining baseline data center costs, upfront costs that have exceeded savings to date, and a lack of electrical metering to determine power usage savings. The methodologies used to calculate savings varied across the 19 agencies that reported estimated or actual savings and avoidances through fiscal year 2013; however, most of these agencies estimated their figures. Specifically, 3 agencies—the Department of Education (Education), EPA, and the National Science Foundation (NSF)—reported actual cost savings and avoidances, which they determined by calculating differences in executed budget or contract amounts over time. The remaining 16 agencies estimated their cost savings and avoidances. 
As examples, GSA estimated its savings using the agency's total cost of ownership model; the Department of the Interior (Interior) used post-consolidation forms collected from its component bureaus and offices to estimate cost savings related to areas such as rent, utilities, and personnel after a consolidation activity was completed; and Treasury estimated savings resulting from reductions in the percentage of IT infrastructure investment spending as compared to total IT spending over time. Officials at these agencies stated that they were limited to reporting estimated savings because of challenges in determining actual savings, including the lack of electrical metering to calculate power usage savings, budget and accounting systems that are not structured to account for the costs of individual data centers, and difficulties in determining costs and savings when data centers are located in multipurpose facilities. These issues are discussed in more detail later in this report. See table 2 for a listing of agencies' data center closures and cost savings and cost avoidances between fiscal years 2011 and 2013, and whether the agency savings are estimated. As prescribed by OMB's initial guidance on data center consolidation, the 19 agencies that reported achieving cost savings and avoidances did so using a variety of approaches. While these approaches can be grouped into four key areas—decommissioning, consolidation, cloud computing, and virtualization—agencies generally employed, and achieved cost savings and avoidances using, multiple approaches at the same time. For example, NSF officials stated that, in order to reduce the agency's dependence on onsite infrastructure, the agency has been focused on increasing virtualization and consolidation of servers and storage, while continuing to adopt cloud computing technologies. See table 3 for a description of the four approaches and key examples of agency-reported savings or avoidances in each.
In addition to savings through fiscal year 2013, our analysis of estimated future savings shows that, collectively, agencies are reporting that they expect to exceed OMB's cost savings goal by the end of fiscal year 2015 and continue to achieve significant savings in future years. Specifically, 21 agencies collectively reported planning a total of about $3.3 billion in savings and avoidances by the end of 2015—an amount that is approximately $300 million higher than OMB's original $3 billion goal. Further, through fiscal year 2017, these agencies collectively reported planning an additional $2.1 billion in cost savings and avoidances, for a total of approximately $5.3 billion. Five agencies—the Department of Agriculture (Agriculture), Defense, DHS, the Department of Transportation (Transportation), and Treasury—account for about $4.9 billion (or approximately 91 percent) of the total savings reported. See table 4 for a listing of agencies' total cost savings and cost avoidances between fiscal years 2011 and 2017. The extent of cost savings and avoidances being reported by agencies beyond fiscal year 2015 highlights the importance of OMB continuing to track and report on such savings beyond the time frame of its initial goal. Further, with many agencies having not yet reported on their planned savings, the savings beyond fiscal year 2015 may be higher than previously discussed. In this regard, we have previously recommended that OMB extend the horizon for realizing cost savings from FDCCI, as doing so could provide OMB and FDCCI stakeholders with input and information on the benefits of consolidation beyond OMB's initial goal. OMB neither agreed nor disagreed with our recommendation but stated that, as the FDCCI and PortfolioStat initiatives proceed and continue to generate savings, OMB would consider whether updates to the current time frame are appropriate.
As previously mentioned, OMB's March 2013 memorandum identified the requirements for reporting cost savings from data center consolidation. Specifically, the memorandum stated that agencies are required to report their data center consolidation cost savings and avoidances, among other areas, to OMB as part of a quarterly data collection process known as the integrated data collection. OMB's May 2014 memorandum reiterated the requirements for integrated data collection submissions. Agencies can currently input cost savings and avoidances for fiscal years 2012 through 2015 into the web-based portal used to submit their integrated data collection submissions. Finally, standards for internal control emphasize the need for federal agencies to establish plans to help ensure goals and objectives can be met, including compliance with applicable laws and regulations. Although agencies are already collectively reporting approximately $5.3 billion in planned cost savings and avoidances from their consolidation efforts, these savings may be higher because 6 of the 24 agencies, claiming between 11 and 67 data center closures each, have been limited in their abilities to report savings. For example, although Interior reported closing 65 data centers as of May 2014, the agency cited significant challenges in obtaining cost and related savings information from its component agencies. In addition, the National Aeronautics and Space Administration (NASA) reported that, as of May 2014, it had closed 25 data centers; however, while the agency has been able to report $1.3 million in savings through fiscal year 2013, agency officials stated that NASA has otherwise been limited in its ability to identify cost savings and avoidances because of the agency's complex organizational structure, which includes multiple centers with multiple missions and multiple IT contractors utilizing data centers within multipurpose facilities. Similar challenges were also identified by other agencies, as discussed later in this report.
Table 5 shows the agencies with limited or no savings relative to their consolidation efforts and their reasons for not being able to fully report savings. Considering that cost savings is one of OMB’s original high-level goals of FDCCI and reporting such savings is currently required on a quarterly basis, OMB has a responsibility for ensuring that agencies are identifying the full extent of cost savings from their consolidation efforts. Further, as previously mentioned, OMB’s PortfolioStat guidance requires yearly review sessions of agency portfolio management (including data center consolidation) with the Federal CIO and senior agency officials and notes that these reviews are critical to driving cost savings. We previously found that all agencies held PortfolioStat sessions with OMB in fiscal year 2012. In addition, agencies were required to hold sessions again in 2013. However, after 2 years of PortfolioStat sessions, the six agencies identified in the table have been limited in their ability to report savings from their data center consolidation efforts. In addition, slightly more than half of the agencies with cost savings and avoidances did not fully report them to OMB—a requirement of OMB’s quarterly integrated data collection process. Specifically, of the 21 agencies with actual and estimated fiscal years 2012 through 2015 cost savings and avoidances, 10 agencies fully reported their savings and avoidances to OMB through the integrated data collection process, 8 agencies partially reported this information, and 3 agencies did not report it. As a result, agencies collectively reported savings for fiscal years 2012 through 2015 of approximately $3.1 billion to us, as compared to only about $876 million that agencies reported to OMB, meaning that the savings have been underreported to OMB by approximately $2.2 billion. 
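The underreporting gap described above is a simple difference. A minimal sketch using the approximate fiscal years 2012 through 2015 totals cited in this report (figures in millions of dollars):

```python
# Savings agencies reported to GAO vs. to OMB's integrated data collection
# for fiscal years 2012 through 2015 (approximate totals cited above,
# in millions of dollars).
reported_to_gao = 3_100   # ~$3.1 billion reported to GAO
reported_to_omb = 876     # ~$876 million reported to OMB

gap_millions = reported_to_gao - reported_to_omb
print(f"underreported to OMB by ~${gap_millions / 1000:.1f} billion")
# prints: underreported to OMB by ~$2.2 billion
```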
See table 6 for a listing of agencies and a comparison of their data center consolidation savings as reported to GAO and through OMB's integrated data collection process. While several agencies noted internal agency communication issues as a reason that their savings and avoidances were not fully reported, other agencies were not able to provide a reason. These shortcomings in agency reporting have resulted in OMB not being able to fully report agencies' data center consolidation cost savings and avoidances in its quarterly reports to Congress on the status of federal IT reform efforts, in accordance with its responsibilities as set forth in law. For example, OMB's May 2014 report to Congress noted total fiscal years 2012 and 2013 data center consolidation cost savings and avoidances of slightly less than $329 million, as compared to the approximately $951 million that agencies reported to us that they achieved over that same time period—a difference of approximately $622 million. Until OMB assists those agencies with limited or no cost savings reported, agencies may not be able to identify the full extent of savings from their consolidation efforts, and the total planned cost savings and avoidances of approximately $5.3 billion will be understated. Further, until agencies fully report their cost savings and avoidances to OMB, Congress may be limited in its ability to oversee agencies' progress against key initiative goals. In 2011 and 2012, we reported on the broad challenges that agencies were facing during data center consolidations. These included FDCCI-related, cultural, funding-related, operational, and technical challenges. In 2014, agencies reported that many of the same challenges still existed and impacted their ability to achieve cost savings through consolidation efforts. In addition, agencies identified many new challenges that were specific to achieving cost savings.
As we found previously, some challenges are more common than others, with the most-reported challenge being faced by a total of eight agencies. One agency—HUD—did not report any challenges. Table 8 details the reported challenges and the number of agencies experiencing each challenge. The table is followed by a discussion of the most prevalent challenges. Agencies reported that the most significant operational challenges included difficulty in obtaining information (such as data center inventory and cost savings data) from component organizations and determining costs and realizing savings when data centers were located in shared, multipurpose facilities. Specifically, 6 agencies reported obtaining consolidation-related data from component organizations as a challenge to achieving cost savings, which is similar to, but not as prevalent as, the 10 agencies we found having difficulty providing good quality asset inventories in 2012. For example, Defense's Data Center Consolidation Lead noted that getting component organizations to report all of their data centers remains a challenge in achieving cost savings, particularly in the case of smaller, single-server data centers (e.g., research stations or computers that are only used by a few individuals and are often not reported until replacement or enhancement is needed). Additionally, Agriculture's Associate CIO for Data Center Operations stated that it was often difficult to determine actual cost savings because the department's 32 component agencies did not track their total cost of IT, as portions (e.g., building rent and utilities, salaries, and construction) are funded from many different sources rather than being under the control of the component agency or office CIO.
In addition, whereas we found in 2012 that one agency had difficulty with identifying and quantifying actual costs associated with data center facilities, we found that five agencies reported that it was difficult to determine costs and realized savings when data centers were located in shared, multipurpose facilities. For example, Energy reported that because several of its data centers were located in shared-use facilities, it was difficult for the department to determine the centers' total operating costs without an additional investment in advanced electricity metering. Energy also noted that it is difficult to determine the total cost savings and avoidances associated with the closure of these data centers until they are decommissioned and the vacated floor space is repurposed. EPA also reported that its data centers and server rooms were housed within mixed-use facilities, which generally cannot be discarded. Further, EPA expects that most former data center spaces will continue to provide local telecommunications and building access support services, which limits the building operational cost reductions that room decommissioning could otherwise achieve. Agencies reported that the most significant technical challenges included a lack of electricity metering to determine power usage information and increased telecommunication costs after relocating small data centers or applications. The lack of electricity metering is similar to the difficulty we previously found for 15 agencies in 2012 with obtaining power usage information. Our current work found that 8 agencies reported that a lack of electricity metering to determine power usage was a challenge. For example, the Department of Labor (Labor) reported that a lack of electricity metering at many data centers prevented the department from accurately reporting energy savings attributed to the consolidation effort.
Labor officials added that it was difficult to perform power usage efficiency calculations because the data needed to feed the calculations were not available. As other examples, Transportation reported that its smaller data centers were operated in GSA-owned buildings that did not have electricity metering for the data center spaces. Transportation also noted that many of these spaces contain telecommunications equipment that would remain after the data center equipment is relocated or decommissioned, meaning the closures are not expected to produce significant savings. Further, NASA reported that it had encountered difficulties in metering its older, less-efficient facilities and that the modifications needed to make the facilities more efficient would require a significant amount of resources and yield a low return on investment. In addition, those modifications would adversely impact current operations due to the power outages needed to install metering equipment. Commerce, Defense, Interior, NSF, and the Social Security Administration (SSA) also reported a lack of electricity metering as a challenge. Regarding increased telecommunication costs, Interior and Transportation both reported this area as a challenge. Interior noted that the higher telecommunication premiums resulting from its effort often offset the savings from consolidating a large number of small and closet-sized data centers. Transportation officials also stated that the relocation of locally run applications to a consolidated data center may lead to increased telecommunications costs. Agencies reported two financial challenges related to obtaining the funding required within their agency for consolidation efforts, as well as budget and accounting system issues that impacted their ability to achieve cost savings. In 2012, we found that nine agencies considered obtaining the funding required for consolidation and migration efforts to be a challenge.
In 2014, eight agencies identified this challenge. For example, VA reported that the investment funding for all phases of its consolidation plan has not been available from the department as initially scheduled and, as a result, it has had to evolve its plan to address the risk of continued investment funding shortfalls so that it can continue to make progress toward its consolidation goals. In addition, SBA officials stated that a lack of funding allocated to implementing its data center consolidation strategy has been the primary challenge in achieving data center consolidation cost savings. Officials added that, in light of this challenge, the agency continues to examine federal cloud and GSA e-mail-as-a-service offerings, and has initiated a request to the SBA investment governance process and a fiscal year 2015 request to pilot and migrate agency e-mail to a GSA vendor-managed cloud provider. In addition, six agencies reported having budget and accounting systems that were not structured to account for individual data centers. For example, Energy officials stated that data centers have generally operated from separate facility and IT operations budgets and that specific facility cost elements have not been tracked. Officials added that different budgets are used to support different data centers, resulting in a lack of a consolidated budget for data centers, which has made documenting costs and related savings difficult. In addition, VA officials stated that IT costs often encompass multiple data centers and user facilities, making it challenging to parse the costs to the individual data centers and determine related savings. Agencies reported that the most significant cultural challenges included having a decentralized organizational structure that was not geared toward consolidation and accepting the cultural changes that were part of consolidation.
In 2012, we found that two agencies encountered cultural challenges related to having a decentralized organizational structure and five agencies had difficulty accepting cultural change as part of the consolidation effort. Our current work showed that six agencies encountered cultural challenges related to having a decentralized organizational structure. For example, GSA reported that its foremost challenge was that its data center costs and expenses were distributed across a federated organization. The agency indicated that the costs for rents; leases; personnel; and equipment repair, replacement, and service contracts were distributed across 11 regions, six components, and their subordinate organizations. Further, the Department of Justice (Justice) reported that its federated organizational structure had made it especially challenging to implement enterprise-wide initiatives such as data center consolidation. Officials noted that this was due, in part, to the need to build consensus for, plan, and then implement the consolidation changes, which does not happen quickly in a decentralized environment. In addition, three agencies stated that accepting the cultural changes required to implement their consolidation efforts impacted their ability to achieve cost savings. For example, Justice also noted that its federated environment made it difficult for people to accept the cultural changes that are part of consolidation. Further, officials from Agriculture reported the challenge of accepting culture change, as the department encountered resistance to consolidation-related changes, including the use of cloud computing, from component agency personnel. In any significant IT initiative, it is important that both successes and challenges be highlighted. In the case of FDCCI, a success highlights approaches and strategies that have helped agencies to achieve cost savings and fulfill the intent of the initiative.
Conversely, a challenge identifies an area that was impacting an agency’s ability to achieve cost savings and meet the intent of this government-wide effort. In light of how closely the successes and challenges reported by agencies relate to achieving cost savings—a key OMB goal for FDCCI—it will be important for OMB to continue to provide leadership and guidance to the initiative. This includes, as we have previously recommended, utilizing the Task Force—the primary organization responsible for supporting collaboration and knowledge transfer across the FDCCI agencies—to monitor and assist with agencies’ consolidation efforts. Leading practices have established the need for initiatives to develop performance measures to gauge progress. According to government and industry leading practices, performance measures should be measurable, outcome-oriented (i.e., identify targets for improving performance), and actively tracked and reported. In accordance with these principles, OMB’s March 2013 memorandum directed the Task Force to develop data center metrics for energy, facility, labor, storage, virtualization, and cost per operating system to enable the measurement of the extent to which federal agency core data centers are optimized for total cost of ownership. In May 2014, OMB released a set of 11 data center consolidation optimization metrics established by the Task Force. These metrics address all of the categories defined in the March 2013 memorandum. In addition, related targets to be achieved by the end of fiscal year 2015 have been established for all the metrics except for the cost-per-operating-system metric, which provides for measuring progress on optimizing data center costs. According to a Task Force official, current data center inventory data (already required to be submitted by agencies on at least a yearly basis) will be used to calculate agencies’ progress using the metrics and related targets.
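Calculating an agency’s progress against the metric targets amounts to comparing each reported metric value to its established fiscal year 2015 target. The sketch below is illustrative only: the metric names and target values are hypothetical (the actual 11 metrics and targets appear in table 9 of the report), and it assumes higher values are better, which does not hold for every real metric.

```python
# Hypothetical FY 2015 targets for two optimization metrics.
# (Illustrative values only; not the Task Force's actual targets.)
targets = {"virtualization_ratio": 4.0, "facility_utilization": 0.80}

# Hypothetical values an agency might calculate from its inventory data.
agency_values = {"virtualization_ratio": 3.2, "facility_utilization": 0.85}

# A metric is "met" when the agency's value reaches or exceeds the target
# (assumes higher-is-better for both metrics shown here).
progress = {metric: agency_values[metric] >= target
            for metric, target in targets.items()}

print(progress)
```

Under these assumed figures, the agency would meet the facility-utilization target but fall short on virtualization, which is the kind of gap the targets are intended to surface.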
See table 9 for a list of the metrics, including the related category, a brief description, and the established target for each metric. According to OMB staff from the Office of E-Government and Information Technology, there have been challenges in reaching consensus on the cost-per-operating-system target. Specifically, the staff stated that the Task Force has had difficulty with developing a baseline for cloud computing costs that could be used to establish an appropriate target because private sector cloud providers are continually cutting the prices for their services. Development of the cost-per-operating-system target is expected to continue and OMB staff stated that the Task Force expects to finalize the target in the fall of 2014. In addition, although low server utilization rates were a driving force cited by OMB in launching FDCCI, the new data center optimization metrics do not address this key issue. As previously mentioned, in 2009, OMB reported that server utilization rates were as low as 5 percent across the federal government’s servers. OMB subsequently required agencies to report on server utilization percentage as part of their 2011 and 2012 consolidation plans and included a suggested target of 60 to 70 percent server utilization in its 2011 and 2012 FDCCI consolidation plan guidance. OMB later eliminated the requirement for agencies to continue to update their consolidation plans, but indicated in its March 2013 memorandum that it would continue tracking agencies’ progress through other means, including the data center optimization metrics. However, a metric for server utilization was not included in the final metrics established by the Task Force. 
According to an official from the GSA FDCCI Program Management Office, which led initial efforts to establish the metrics, server utilization was not included as a metric for a variety of reasons: agencies have not traditionally collected the data needed to calculate server utilization, agencies do not have the server monitoring capabilities required to collect such data, and improvements in other metric areas (such as virtualization) would likely result in higher server utilization. However, as previously mentioned, with server utilization a driving factor for FDCCI and rates measured as low as 5 percent as recently as 2009, determining progress against this metric is critical to improving the efficiency, performance, and environmental footprint of federal data center activities. Without an established target for one of its key cost metrics, the cost per operating system, OMB may not be getting complete information about agencies’ progress in their data center optimization efforts and, therefore, may lack important insight, limiting its ability to take corrective actions as needed. In addition, without a specific metric for server utilization, OMB may not be fully aware of agencies’ progress in an area that was a driving force in launching FDCCI. Slightly more than 4 years into FDCCI, agencies have begun to report significant savings from their consolidation efforts—most notably, Defense, DHS, and Treasury, which account for 74 percent of the reported savings to date. Furthermore, with approximately $3.3 billion in total planned savings being reported by agencies through fiscal year 2015, meeting OMB’s savings goal is increasingly likely and, if executed as planned, would represent a significant accomplishment for OMB and the FDCCI agencies. However, the limited or no savings achieved at agencies with major consolidation efforts underway suggest that additional actions are necessary.
OMB’s and these agencies’ continued efforts to address challenges and identify cost savings opportunities, through the use of such existing mechanisms as PortfolioStat sessions, could result in even more savings. Additionally, agencies’ continued underreporting of consolidation savings will limit OMB’s ability to accurately track agencies’ progress and report to Congress, a point highlighted by the significant understatement of agency-reported savings: by approximately $622 million through fiscal year 2013 in OMB’s recent congressional submission, and by over $2.2 billion through fiscal year 2015 in agency data submissions to OMB. As the federal consolidation effort has matured over the past few years, agencies have reported noteworthy successes in achieving cost savings—particularly in leveraging virtualization and cloud computing as a means to achieve such savings. These constructive experiences, which stem from OMB’s recommended consolidation strategies, indicate that FDCCI is moving in the right direction. However, as agencies work toward achieving their cost savings goals, many continue to report challenges related to gathering the necessary technical information from which to calculate savings and to funding the consolidation itself. While some of these challenges are consistent with those reported in the past, others, such as determining savings when data centers are located in multipurpose facilities, have become more prominent. Such a dynamic environment reinforces the need for agencies to remain in communication with OMB in order to facilitate knowledge sharing and transfer, and for OMB to continue to provide leadership and guidance, as we have previously recommended. OMB’s May 2014 publication of the data center optimization metrics is a considerable step forward in helping OMB provide better oversight of agencies’ efforts to optimize their core data centers.
Furthermore, the targets established for nearly all the metrics will provide agencies with clear and transparent goals to guide their data center optimization efforts. However, the continued absence of a metric for server utilization, despite OMB’s previously reported concerns about low average utilization rates, represents a missed opportunity to track agencies’ progress in this area. In the absence of such a metric, OMB will be challenged in demonstrating agencies’ improvement in an area that was a driving force in starting FDCCI and that is critical to improving the efficiency, performance, and environmental footprint of federal data center activities. To better ensure that FDCCI improves governmental efficiency and achieves cost savings, we are making two recommendations to OMB. We recommend that the Director of OMB direct the Federal CIO to (1) utilize the existing PortfolioStat review sessions to assist HHS, Interior, Justice, Labor, GSA, and NASA in identifying data center consolidation cost savings opportunities and (2) develop and implement a metric for server utilization as part of any future evaluation of the data center optimization metrics. We also recommend that the Secretaries of HHS, the Interior, Justice, and Labor, and the Administrators of GSA and NASA complete action plans for addressing their challenges in reporting cost savings, as discussed in this report. Finally, we recommend that the Secretaries of Agriculture, Commerce, Defense, Energy, the Interior, Transportation, the Treasury, and VA; the Administrators of EPA and NASA; and the Director of the Office of Personnel Management direct responsible officials to report all data center consolidation cost savings and avoidances to OMB in accordance with established guidance. We received comments on a draft of our report from OMB, the 15 agencies to which we made recommendations, and the other 9 agencies mentioned in the report.
Specifically, OMB and 12 agencies agreed with our recommendations, 1 agency did not state whether it agreed or disagreed, 1 agency had no comments, and 1 agency—NASA—agreed with one of our recommendations but partially agreed with the other. The other 9 agencies had no specific comments on our recommendations. Multiple agencies also provided technical comments, which we incorporated as appropriate. Each agency’s comments are discussed in more detail below. In comments provided by e-mail on July 30, 2014, a policy analyst from OMB’s Office of E-Government and Information Technology stated that OMB agreed with the findings and recommendations of the report. OMB also provided technical comments, which we have incorporated as appropriate. In comments provided by e-mail, a liaison officer from Agriculture’s Office of the CIO stated that the department agreed with the report’s recommendation and noted steps planned to address the recommendation, including engaging with Agriculture agencies to collect actual cost savings and avoidance information realized through their internal consolidation efforts. In addition, the department noted that the Office of the CIO is drafting a Cloud Computing Departmental Directive that, among other requirements, is expected to standardize the process by which IT investments are evaluated for cloud services, including projected and actual cost savings and avoidances. In written comments, Commerce’s Deputy Secretary stated that the department concurred with the general findings of the report as they applied to Commerce. The department did not state whether it agreed or disagreed with our recommendation, but noted that the department would ensure that all savings and avoidances identified by its component bureaus are reported through OMB’s integrated data collection. Commerce’s written comments are provided in appendix II.
Our draft report provided to Defense for comment included a recommendation that the department complete an action plan for addressing its challenges in reporting cost savings. This was based on the department withdrawing its original savings figures—totaling approximately $4.7 billion between fiscal years 2011 and 2017—reported earlier in our review, and submitting revised figures using a new methodology that did not result in planned cost savings estimates beyond fiscal year 2014. Subsequently, Defense provided additional documentation of its planned savings between fiscal years 2015 and 2017, which resulted in an updated total planned cost savings figure of approximately $2.6 billion between fiscal years 2011 and 2017. As a result of Defense’s action, we have removed this recommendation from our report. We have also made changes to the report to reflect these newly reported numbers. However, in reviewing the additional cost savings information provided by the department, we found that Defense had not fully reported its fiscal years 2012 through 2015 cost savings to OMB, consistent with OMB guidance. As a result, we have added a recommendation for Defense to report all data center consolidation cost savings and avoidances to OMB in accordance with established guidance. In written comments, Defense’s Acting Principal Deputy CIO stated that the department agreed with the amended report and recommendation. Defense’s written comments are provided in appendix III. In written comments, Energy’s CIO stated that the department concurred with the report’s recommendation and noted steps being taken by the department to address the discrepancies in its reporting of estimated cost savings and avoidances.
For example, the CIO stated that, in order to improve the accuracy and completeness of the data center cost savings and avoidance data, Energy will clarify its guidance for the integrated data collection data call to better ensure that the department’s organizations report on all data center optimization and consolidation activities. Energy’s written comments are provided in appendix IV. In comments provided by e-mail on August 25, 2014, an official from HHS’s Division for Oversight and Investigations, Assistant Secretary for Legislation, stated that the department concurred with the report’s recommendation. In comments provided by e-mail on August 11, 2014, an Interior OIG/GAO Audit Liaison stated that the department concurred with the report’s findings and recommendations. In comments provided by e-mail on August 20, 2014, a Justice audit liaison stated that the department concurred with the report’s recommendation. In written comments, Labor’s Assistant Secretary for Administration and Management and CIO stated that the department concurred with the report’s recommendations. The department also provided technical comments, including stating that Labor’s data center inventory figures in our draft report were incorrect. Specifically, the department asserted that its total number of data centers was lower than the number cited in our report and that its number of closed data centers was higher. However, because the department did not provide supporting documentation for these changes, we did not revise these figures. We have incorporated Labor’s other technical comment, related to its challenges in achieving cost reductions. Labor’s written comments are provided in appendix V. In written comments, Transportation’s Assistant Secretary for Administration stated that the department agreed with the recommendation related to reporting all of its data center consolidation cost savings and avoidances to OMB, but asserted that the department’s current reporting of this information satisfied our recommendation.
While we acknowledge in our report that Transportation has reported a portion of its cost savings and avoidances to OMB, we identified discrepancies between that information and the cost savings and avoidance information that the department reported to us. As a result, we determined that Transportation’s savings and avoidances were not being fully reported to OMB. Therefore, we continue to believe that our recommendation remains valid. Transportation’s written comments are provided in appendix VI. In written comments, Treasury’s Acting Deputy Assistant Secretary for Information Systems and CIO stated that Treasury had no comments on the report. Treasury’s written comments are provided in appendix VII. In written comments, VA’s Chief of Staff stated that the department concurred with our recommendation to report all data center consolidation cost savings and avoidances to OMB, noting that it plans to begin reporting this information by the end of 2014, but strongly disagreed with our recommendation that OMB include server utilization in the FDCCI metrics. In our report, we acknowledge the reasons that the server utilization metric was not included when OMB issued the data center optimization metrics in May 2014, such as the lack of agency data to calculate utilization and the lack of utilization monitoring capabilities. However, because low server utilization rates were a driving force in launching FDCCI, we believe that tracking this metric can provide useful information in assessing agencies’ progress in optimizing their data centers. As previously mentioned, OMB agreed with our findings and recommendations related to this area. Accordingly, we continue to believe our recommendation remains valid. VA’s written comments are provided in appendix VIII. In written comments, EPA’s Acting Assistant Administrator and CIO stated that the agency agreed with our findings, conclusions, and recommendation, and noted processes in place to address the recommendation.
EPA’s written comments are provided in appendix IX. In written comments, GSA’s Administrator stated that the agency agreed with the report’s findings and recommendation and would take appropriate actions to address the recommendation. GSA’s written comments are provided in appendix X. In written comments, NASA’s CIO stated that the agency concurred with one of our two recommendations and partially concurred with the other. Specifically, NASA agreed with our recommendation related to reporting all of its data center consolidation cost savings and avoidances to OMB, stating that it would issue a directive by October 2014. The agency partially concurred with our recommendation to complete an action plan for addressing challenges in reporting cost savings. Specifically, NASA stated that, while it plans to develop and finalize revisions of existing action plans by December 2014, execution of those plans remains a challenge due to difficulties in power metering, particularly in older multipurpose buildings, and in measuring facility savings. While we acknowledge the challenges described by NASA in our report, we believe that completing an action plan to address these challenges, as we recommended, could serve as a valuable tool in defining a road map toward overcoming these issues. We therefore continue to believe our recommendation remains valid. NASA’s written comments are provided in appendix XI. In written comments, OPM’s CIO stated that the agency concurred with our recommendation and described planned actions to address it. For example, the CIO stated that OPM is preparing its data center consolidation plan to include consideration of shared services and cloud technologies and that any related cost savings will be reported once the consolidation plan is implemented. OPM’s written comments are provided in appendix XII.
In comments provided via e-mail on August 5, 2014, a policy analyst from Education’s Office of the Secretary/Executive Secretariat stated that the department had no comments on the report. In comments provided via e-mail on August 18, 2014, a program analyst from DHS’s Departmental GAO-OIG Liaison Office stated that the department had no technical comments on the report. In written comments, HUD’s CIO stated that the agency had no comments on the report. HUD’s written comments are provided in appendix XIII. In comments provided by e-mail on August 8, 2014, a senior management analyst from State’s Bureau of the Comptroller and Global Financial Services stated that the agency had no comments on the report. In written comments, NSF’s CIO stated that the agency had no comments on the report. NSF’s written comments are provided in appendix XIV. In comments provided via e-mail on August 19, 2014, an executive technical assistant from NRC’s Office of the Executive Director for Operations stated that the agency had no comments on the report. In comments provided via e-mail on August 12, 2014, the program manager for SBA’s Office of Congressional and Legislative Affairs stated that the agency had no comments on the report. In written comments, the Deputy Chief of Staff from SSA’s Office of the Commissioner stated that the agency had no comments on the report. SSA’s written comments are provided in appendix XV. In comments provided via e-mail on August 11, 2014, a systems accountant from USAID’s Office of the Chief Financial Officer, Audit, Performance and Compliance Division, stated that the agency had no comments on the report. We are sending copies of this report to interested congressional committees, the Director of OMB, the secretaries and agency heads of the departments and agencies addressed in this report, and other interested parties. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. 
If you or your staffs have any questions on the matters discussed in this report, please contact me at (202) 512-9286 or pownerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix XVI. Our objectives were to (1) evaluate the extent to which agencies have achieved cost savings to date and identified future savings through their consolidation efforts, (2) identify agencies’ notable consolidation successes and challenges in achieving cost savings, and (3) evaluate the extent to which data center optimization metrics have been established. To evaluate the extent to which agencies have achieved cost savings to date and identified future savings through their consolidation efforts, we obtained and analyzed cost savings and avoidance documentation, relative to requirements of the Office of Management and Budget’s (OMB) March 2013 memorandum, from the 24 departments and agencies (agencies) in our review. This documentation included, but was not limited to, agencies’ quarterly reports of cost savings and avoidances submitted to OMB, total cost of ownership models, contract and budget documentation, and internal agency status reports. To determine cost savings achieved to date, we totaled agency reported savings and avoidances from fiscal years 2011 through 2013, and to identify future planned savings we totaled agency projected savings and avoidances from fiscal years 2014 through 2017. We also compared agencies’ cost savings and avoidance information to key requirements for identifying and reporting data center consolidation cost savings and avoidances, as outlined in OMB’s March 2013 memorandum. 
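The totaling described above is a simple aggregation of reported figures by fiscal year band. The sketch below uses hypothetical per-agency figures (the actual analysis drew on the 24 agencies’ quarterly submissions to OMB):

```python
# Hypothetical savings/avoidance figures in millions of dollars, keyed by
# (agency, fiscal year). Illustrative values only.
reported = {
    ("Agency A", 2012): 120.0, ("Agency A", 2015): 300.0,
    ("Agency B", 2013): 480.0, ("Agency B", 2016): 150.0,
    ("Agency C", 2011): 60.0,  ("Agency C", 2017): 90.0,
}

# Savings achieved to date: fiscal years 2011 through 2013.
achieved = sum(v for (_, fy), v in reported.items() if 2011 <= fy <= 2013)

# Future planned savings: fiscal years 2014 through 2017.
planned = sum(v for (_, fy), v in reported.items() if 2014 <= fy <= 2017)

print(achieved, planned)
```

The same two sums, applied to the agencies’ actual submissions, produce the achieved-to-date and planned-savings totals discussed in this report.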
To assess the reliability of agencies’ cost savings and avoidance data, we reviewed related documentation provided by agency data center program managers and other cognizant officials, such as agency total cost of ownership models, agency-developed spreadsheets, and agencies’ quarterly data submissions to OMB. We also compared the cost savings and avoidances reported to us by agencies with cost savings identified in OMB’s quarterly reports to Congress on the status of information technology reform efforts. In addition, we reviewed agency documentation for missing data or other errors (e.g., incorrect calculations). Finally, we interviewed agency officials to obtain additional supporting information regarding how their cost savings and avoidance figures were determined, the processes and methods used to calculate the figures, and the steps the agencies took to ensure the reliability of their figures and to validate them. We also discussed with agency officials any discrepancies or potential errors identified during our review of their supporting documentation to determine the cause or to request additional information. We determined that the data were sufficiently reliable to report on agencies’ cost savings achieved to date and identified future savings. However, as part of our reliability assessment, we identified issues with the reliability of OMB’s quarterly reports to Congress, including that agencies’ data center consolidation cost savings were not being fully reflected in OMB’s reports. We have highlighted this issue in our report. Lastly, we reviewed agencies’ data center facility reductions as reported on http://data.gov and compared the information to agencies’ cost savings and avoidances achieved to date, taking into consideration the challenges in achieving savings identified by agencies.
To assess the reliability of agencies’ data center reductions, we reviewed prior reporting of data center closures to check for anomalies in the data, such as fewer closures for agencies in more recent data sets than previously reported. We also checked for missing data, outliers, and other obvious errors, such as missing closure status information. Finally, we interviewed OMB staff from the Office of E-Government and Information Technology regarding actions taken to verify the data. We determined that the data were sufficiently reliable to report on agencies’ consolidation progress. To identify notable consolidation successes and challenges in achieving cost savings, we reviewed agencies’ cost savings documentation, including quarterly reports on cost savings and avoidances submitted to OMB, total cost of ownership models, contract and budget documentation, internal agency status reports, and other documentation, and interviewed agency officials. To determine the types of successes experienced, we identified areas reported in agencies’ documentation with directly attributable cost savings or avoidances. We also interviewed agency officials to identify additional successes in achieving cost savings, including areas where the agency may not have been able to quantify the savings. To determine challenges in achieving cost savings, we interviewed agency officials to obtain information regarding challenges faced, as well as to discuss any steps taken, or planned, to address the challenges identified. We then determined which successes and challenges were encountered most often. In some cases, agencies’ cost savings and avoidance data were used to highlight the impact of a particular success. As a result of the reliability assessment performed for our first objective, we determined these data to be sufficiently reliable for reporting on agencies’ cost savings and avoidances achieved to date and planned. 
To evaluate the extent to which data center optimization metrics have been established, we analyzed OMB’s March 2013 memorandum to determine OMB’s requirements for such metrics, including the responsibilities for completing the metrics and the key areas or categories that were to be addressed by the metrics. We then compared OMB’s requirements for the metrics to the final metrics, as documented in a May 2014 OMB memorandum. We also reviewed previous data center consolidation-related OMB memorandums and consolidation plan guidance to identify metrics that had previously been identified by OMB as indicators of data center optimization success and determined the extent to which the metrics addressed these areas. Finally, we interviewed relevant OMB, General Services Administration, and Data Center Consolidation Task Force officials to discuss the process by which the metrics were established and to determine the extent that related targets, or goals, for the metrics had been established. We conducted this performance audit from October 2013 to September 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, individuals making contributions to this report included Dave Hinchman (Assistant Director), Justin Booth, Rebecca Eyler, Brandon Sanders, and Jonathan Ticehurst.
In 2010, as the focal point for information technology management across the government, OMB’s Federal Chief Information Officer launched the Federal Data Center Consolidation Initiative to consolidate the growing number of centers. As of May 2014, agencies reported a total of 9,658 data centers—approximately 6,500 more than reported by OMB in 2011. GAO was asked to review federal agencies' continuing efforts to consolidate their data centers and achieve cost savings. The objectives were to (1) evaluate the extent to which agencies have achieved cost savings to date and identified future savings through their consolidation efforts, (2) identify agencies' notable consolidation successes and challenges in achieving cost savings, and (3) evaluate the extent to which data center optimization metrics have been established. GAO assessed agency-reported cost savings and avoidance documentation, interviewed agency officials, and assessed data center optimization metrics against prior OMB requirements and goals. Of the 24 agencies participating in the Federal Data Center Consolidation Initiative, 19 agencies collectively reported achieving an estimated $1.1 billion in cost savings and avoidances between fiscal years 2011 and 2013. Notably, the Departments of Defense, Homeland Security, and Treasury accounted for approximately $850 million (or 74 percent) of the total. In addition, 21 agencies collectively reported planning an additional $2.1 billion in cost savings and avoidances by the end of fiscal year 2015, for a total of approximately $3.3 billion—an amount that is about $300 million higher than the Office of Management and Budget's (OMB) original $3 billion goal. Between fiscal years 2011 and 2017, agencies reported planning a total of about $5.3 billion in cost savings and avoidances.
However, planned savings may be higher because six agencies that reported having closed as many as 67 data centers reported limited or no savings, in part because they encountered difficulties, such as calculating baseline data center costs. In addition, 11 of the 21 agencies with planned cost savings are underreporting their fiscal years 2012 through 2015 figures to OMB by approximately $2.2 billion. While several agencies noted communication issues as the reason for this, others did not provide a reason. Until OMB assists agencies in reporting savings and agencies fully report their savings, the $5.3 billion in total savings will be understated. Most agencies reported successes in achieving cost savings—notably, the benefits of key technologies, reduced power consumption and facility costs, and improvements in asset inventories. However, agencies also reported challenges, many of which were the same as GAO found in 2012. One of the most-reported challenges was related to obtaining power usage information as a means to determine cost savings. In light of how closely these successes and challenges relate to achieving cost savings, it is important for OMB to continue to provide leadership and guidance, including—as GAO previously recommended—using the Data Center Consolidation Task Force to monitor agencies' efforts. Pursuant to OMB guidance, in May 2014 the Data Center Consolidation Task Force completed a set of 11 metrics to measure agency progress toward optimizing their data centers, such as power usage and facility utilization. In addition, related targets to be achieved by fiscal year 2015 have been developed for nearly all the metrics. However, the metrics do not address server utilization, even though OMB reported this to be as low as 5 percent in 2009, which is significantly below OMB's target of 60 to 70 percent. 
Without such a metric, OMB may not be getting important insight into agencies' progress on a key issue that was a driving factor in launching the consolidation initiative. GAO is recommending that OMB assist agencies in reporting cost savings and develop a metric for server utilization as part of any reevaluation of the metrics. GAO is also recommending, among other things, that agencies fully report their consolidation cost savings. OMB and 12 agencies agreed, 1 did not state whether it agreed or disagreed, 1 had no comments, and 1 partially agreed, noting challenges. GAO continues to believe the recommendation remains valid, as discussed in the report.
DOD’s primary medical mission is to maintain the health of 1.6 million active duty service personnel and provide health care during military operations. Also, as an employer, DOD offers health care to 6.6 million other military-related beneficiaries, including dependents of active duty personnel and military retirees and their dependents. Most care is provided in about 115 hospitals and 470 clinics—referred to as military treatment facilities, or MTFs—worldwide, operated by the Army, Navy, and Air Force. DOD’s direct care system is supplemented by care paid for by DOD but provided in civilian facilities. In fiscal year 1997, DOD expects to spend about $12 billion providing care directly and about $3.5 billion for care in civilian facilities. In response to increasing health care costs and uneven access to care, in the late 1980s, DOD initiated, under congressional authority, a series of demonstration programs to evaluate alternative health care delivery approaches. On the basis of this experience, DOD designed TRICARE as its managed health care program. The TRICARE program uses regional managed care support contracts to augment its MTFs’ capacities by having contractors perform some managed care functions, including arranging civilian sector care. Altogether, seven managed care support contracts will be awarded covering 11 TRICARE regions (see app. II). To coordinate MTF and contractor services and monitor care delivery, each region is headed by a joint-service administrative organization called a “lead agent.” Thus far, DOD has awarded five contracts to three health care companies covering eight TRICARE regions. The contracts are competitively awarded and fixed price, although the price is subject to specified adjustments for changes in beneficiary population, MTF workload, and other factors beyond the contractor’s control. 
DOD officials believe that care provided to its patients at military facilities is less expensive than such patients’ care at civilian facilities. Resource sharing arrangements are designed to permit DOD and the contractor to share contractor-provided personnel, equipment, supplies, and other items in an effort to maximize savings. To identify resource sharing opportunities, contractors analyze such data as historical health care costs, workload, and care use, and visit military facilities. They then project the expected savings from providing care in military facilities rather than in potentially more expensive civilian settings. The contract is designed so that the contractor’s expected resource sharing savings over the contract’s life are deducted from its final offer when bidding on a contract. The contract price thus reflects such anticipated savings through shared resources. The contract also is subject to a risk-sharing arrangement under which the government and the contractor share responsibility for health costs that overrun the contract price. Contractors are at risk for their bid amount of health care profit plus up to 1 percent of the bid health care price. Beyond that, the contractor and the government share in losses until an amount prepledged by the contractor, called “contractor equity,” is depleted. At that time the government becomes fully responsible for any further losses. Thus, DOD’s initially realized savings in the form of a lower contract price could be reduced or lost if actual health care expenses are higher than anticipated. Accordingly, DOD encourages MTFs to help the contractor achieve projected resource sharing volume and savings.
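The tiered loss-sharing arrangement described above can be sketched as a simple calculation. The sketch below is illustrative only: the dollar figures are hypothetical, and the 50/50 sharing fraction in the middle tier is an assumption, since the report does not specify how the shared losses are split.

```python
# Illustrative sketch of the tiered risk-sharing arrangement described
# above. Dollar figures and the 50/50 tier-2 split are assumptions for
# illustration; the report does not specify the sharing fraction.

def allocate_overrun(overrun, bid_profit, bid_price, contractor_equity,
                     government_share=0.5):
    """Split a health care cost overrun between contractor and government.

    Tier 1: contractor absorbs its bid profit plus up to 1 percent of
            the bid health care price.
    Tier 2: further losses are shared until the contractor's prepledged
            equity ("contractor equity") is depleted.
    Tier 3: the government is fully responsible for anything beyond that.
    """
    contractor_loss = government_loss = 0.0

    # Tier 1: contractor fully at risk.
    tier1_cap = bid_profit + 0.01 * bid_price
    t1 = min(overrun, tier1_cap)
    contractor_loss += t1
    overrun -= t1

    # Tier 2: shared losses until contractor equity is depleted.
    if overrun > 0:
        contractor_share = 1.0 - government_share
        tier2_span = contractor_equity / contractor_share
        t2 = min(overrun, tier2_span)
        contractor_loss += t2 * contractor_share
        government_loss += t2 * government_share
        overrun -= t2

    # Tier 3: government absorbs the remainder.
    government_loss += overrun
    return contractor_loss, government_loss

# Hypothetical example: a $50 million overrun on a $1.8 billion contract.
c_loss, g_loss = allocate_overrun(50e6, bid_profit=10e6,
                                  bid_price=1.8e9, contractor_equity=20e6)
print(f"Contractor loss: ${c_loss/1e6:.0f}M; government loss: ${g_loss/1e6:.0f}M")
```

The structure shows why DOD's up-front savings from a lower contract price can erode: once the contractor's capped exposure is exhausted, every additional dollar of overrun falls on the government.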
Resource sharing savings, along with expected savings from other sources, such as negotiated provider discounts; better health care utilization management; and better claims management, including collections from other health insurance plans, contribute to government and contractors’ overall financial gains. The combined expected savings from resource sharing and other sources are important as offsets to the increased costs of managing care under TRICARE. Also, statutorily, TRICARE costs cannot be greater than the health costs DOD otherwise would have incurred under CHAMPUS and the direct care system in the program’s absence (National Defense Authorization Acts for Fiscal Years 1994 and 1996, P.L. 103-160 and P.L. 104-106, 10 U.S.C. 1073 note). We reported last year that lack of resource sharing progress was one area that could impair efforts to contain related TRICARE costs and achieve savings. We reported that resource sharing was a complex and difficult process and that the process’ details were not well developed or understood, including uncertainty about how resource sharing agreements may affect contract price adjustments. DOD and the contractors are not attaining major new savings through resource sharing agreements, and the potential for new agreements and further savings appears limited. On the basis of progress to date and discussions with DOD and contractor officials, achieving overall projected resource sharing savings appears highly unlikely. For the contracts under way, DOD projected saving about $700 million, including $116 million through the current operating years. The contractors’ projections were similar. But by March 1997, after 9- to 24-month contract operating periods, new resource sharing agreements represented only about 5 percent of the savings needed to achieve DOD’s projected savings. 
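The scale of the shortfall implied by these figures can be checked with a quick calculation, assuming (as the text indicates) that the "about 5 percent" applies to the $116 million in savings projected through the current operating years:

```python
# Quick check of the shortfall implied by the figures above. Assumes the
# "about 5 percent" applies to the $116 million DOD projected through
# the current operating years.

projected_contract_life = 700e6   # DOD's projected savings over the contracts' lives
needed_to_date = 116e6            # projected savings through current operating years
achieved_fraction = 0.05          # new agreements' share of savings needed

achieved_to_date = achieved_fraction * needed_to_date
shortfall = needed_to_date - achieved_to_date
print(f"New-agreement savings to date: about ${achieved_to_date/1e6:.1f} million")
print(f"Shortfall against the to-date pace: about ${shortfall/1e6:.1f} million")
```

On these assumptions, new agreements had yielded only several million dollars against a to-date target of $116 million, leaving the $700 million life-of-contract projection far out of reach.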
In addition to the new agreements, contractors have also converted into resource sharing agreements previously existing agreements that MTFs had with civilian providers before TRICARE became operational. At one MTF, for example, on the day the support contract became operational, seven existing agreements were converted to resource sharing agreements. But savings associated with those converted agreements do not represent new TRICARE savings and thus were not part of DOD’s new projected savings. Support contractors and DOD are aware of the lack of progress in resource sharing. One contractor’s representative told us that achievements so far are just previous agreement conversions and that a more aggressive approach toward new agreements is needed. Another said lack of progress in negotiating new agreements remains their greatest TRICARE contract concern. DOD officials expressed mixed views ranging from optimism that resource sharing momentum will build to the belief that the approach simply will not work as envisioned. At this time, the potential for further resource sharing savings appears limited. In March 1997, the contractors had about 170 new resource sharing possibilities in some stage of cost and workload data gathering or analysis, or in some way being considered as potential agreements. For example, one region had 39 resource sharing possibilities under development, covering an array of services such as cardiology, radiology, and internal medicine. Officials told us, however, that considerable analysis was needed before potential savings could be reliably estimated and that some of the proposals likely would not prove cost-effective. Meanwhile, additional proposals are being added and existing ones deleted as the proposal and evaluation processes continue. But, as previously indicated, savings to date show that not enough is being done to reach DOD’s projected resource sharing savings levels. 
In addition to agreements already implemented or under consideration, by March 1997, over 260 other resource sharing proposals had been either rejected or otherwise not further pursued. Our analysis indicated that various impediments exist to resource sharing, including lack of clear policies, program complexity, lack of MTF incentives, and military downsizing. Issued in December 1994, DOD resource sharing guidance stated that MTFs had an obligation to help contractors reach the bid amount of resource sharing savings. But the guidance also instructed MTFs to look for other, possibly more cost-effective ways to increase MTF resource use, such as by reallocating existing resources, referring patients to other MTFs, or directly contracting with civilian care providers other than the support contractor. When some MTFs pursued such alternatives, one contractor objected, stating its belief that resource sharing was the first alternative for increasing MTF use. In November 1996, DOD issued new guidance stating that resource sharing was the first alternative and that MTF commanders should make good faith efforts to work with contractors to execute such agreements. MTF and contractor officials cited the resource sharing approach’s complexity as another factor limiting progress. The agreements require considerable financial analysis to assess their cost-effectiveness potential (see app. IV). Also, the agreements involve intricate issues of how much credit contractors should receive for adding to the MTFs’ workload and how that credited workload will affect the contract price. MTF officials told us they did not understand all of the agreements’ financial implications, largely because they did not control or understand all the data and analyses used. They were concerned that workload shifts between MTFs and contractors, and ensuing bid price adjustments, would enable contractors to gain at the expense of MTFs’ workload and budgets.
At two MTFs, for example, proposed gastroenterology assistance agreements, projected to save over $400,000, were rejected because of unresolvable MTF concerns about possible effects on the overall contract price. Both contractor and MTF officials expressed concerns—and resulting hesitance to enter into agreements—about the reliability of data used to analyze agreements’ potential cost-effectiveness. According to contractors, for example, several MTFs supplied inappropriate data, such as personnel salaries and hospital maintenance costs, that hampered their analyses of the proposals’ likely costs and other effects. At one of the MTFs, eight proposed agreements were rejected because of data problems. Tied to the complexity and data problems, MTF officials also cited a lack of incentive to enter into agreements because MTFs do not share in the resulting savings. DOD and the Services have not established a savings return policy for MTFs that have resource sharing agreements. Instead, after consideration, the Services decided that any such savings are to be retained at the Service level for reallocation as needed within the system. Still another MTF resource sharing disincentive is that the agreements can actually increase facility costs. For example, an agreement to provide an anesthesiologist, so the MTF can do more surgeries, will in turn generate related radiology, laboratory, and pharmacy costs. Unless contractors compensate MTFs for such costs, MTFs’ overall costs may increase. Although the contracts provide for such compensatory payments by contractors, DOD issued a July 1996 policy clarification to help facilitate them. A remaining challenge has been MTF and contractor negotiations on what costs to apply to individual agreements. Both DOD and the contractors cited military downsizing, including at the MTFs, as another limiting factor.
Resource sharing opportunities identified during the contract bidding process may no longer exist as military forces are reduced or relocated and as MTFs are closed, downsized, or converted to clinics. For example, one MTF rejected five proposals because it had subsequently reduced its operating rooms from eight to four, thus obviating the need for agreements. Resource sharing problems have prompted one contractor to request a contract price adjustment. In June 1996, near the start date of health care delivery, the contractor reported that while the other care delivery preparations had progressed well, the lack of resource sharing progress was a major problem. Projecting millions of dollars in financial losses, the contractor requested a price renegotiation. In a letter to DOD, the contractor complained about changing DOD rules on how the approach was to work, inadequate data, improper MTF incentives, insufficient MTF training in developing agreements, and postaward MTF workload and capacity changes that reduced resource sharing opportunities. DOD generally agreed that problems existed, committed to work collaboratively to resolve them, and scheduled meetings with the contractor to pursue the issues in more detail. DOD said, however, that a price renegotiation was premature at the time. As of May 1997, the contractor was still pursuing a price adjustment. DOD has acted to increase resource sharing under current contracts. For the latest two contracts, soon to be awarded, DOD will be applying an alternative approach, referred to as “revised financing,” that relies less on resource sharing for savings but adds other challenges. For the future, DOD is planning far broader changes in MTF budgeting and support contracting, which are expected to further reduce reliance on resource sharing. DOD has worked to facilitate resource sharing through policy issuances and provision of analytical tools. 
Since issuing resource sharing guidance in December 1994, DOD headquarters officials visited the regions to provide briefings, used a focus group to help make resource sharing easier to use, developed standardized training, and attempted to promote better DOD and contractor cooperation. Also, the contractors have continued to work with the MTFs to identify and pursue resource sharing opportunities. In November 1996, DOD issued clarifying policy stating that resource sharing is to be the first alternative for recapturing private sector workload into the MTF. Lead agents and MTFs are to ensure that any other MTF actions to add or retain workload do not prevent the TRICARE support contractor from entering into cost-effective agreements and reaching its resource sharing bid amounts. In July 1996, DOD clarified its policy regarding cash payments by support contractors to MTFs for marginal costs stemming from agreements. In a related move, DOD recently made available $25 million to the Services to help pay such marginal costs, or for the MTFs to otherwise invest in agreements, and asked the Services to submit potential projects for the funds’ use. In April 1997, DOD told us that some funds had been approved for only two or three requests. To help reduce resource sharing complexities, DOD provided a financial analysis worksheet for determining whether an agreement might be cost-effective and whether the amount of recaptured workload credited to the contractor is appropriate (see app. IV). DOD later revised the worksheet to, among other things, account for different agreement types. DOD also provided an analytical model that further showed resource sharing’s potential financial effects on the MTFs. The model was introduced to the MTFs in July 1996. DOD created a resource sharing focus group after a lead agent reported in January 1996 that resource sharing was complicated and presented MTFs with disincentives.
The group worked for about 6 months and recommended improvements in such areas as training, the financial analysis worksheet, and the data used to make agreements. In early 1996, DOD began developing a TRICARE Financial Management Education Program curriculum that included resource sharing and the bid price adjustment process. Program testing was completed in December and presentations have begun. In November 1996, DOD initiated a new “partnering” effort with the contractors. DOD saw a need to help MTFs and contractors work through data problems, contract ambiguities, resource constraints, and other TRICARE difficulties. The partnering approach calls for a more cooperative, trusting, teamwork relationship between MTFs and support contractors, including ways to avoid disputes and to informally resolve, rather than possibly litigate, those that occur. Early actions included DOD meetings with contractors at headquarters and regional levels, contractor participation in a national TRICARE conference, and consideration of assigning representatives of lead agents and the contractors to work together at each other’s locations. The bottom-line measure of DOD’s and the contractors’ efforts is in the progress made entering new resource sharing agreements. But progress remains slow, and the prospects for additional agreements are questionable. These outcomes, along with one contractor’s request for financial relief and DOD’s recognized need to improve teamwork, indicate a need for more concerted efforts under the current contracts to reach the agreements that are pending while seeking acceptable alternatives to resource sharing. DOD’s revised financing approach, conceived before the first support contract began operating but applied only in the latest two, is intended to strengthen MTF health care management. Under this approach, MTFs’ direct funding and financial responsibilities will be increased. 
The funding increase will be determined by the amount of previous CHAMPUS expenditures for MTF-based TRICARE Prime enrollees, which DOD expects will include most MTF service areas’ beneficiaries. Thus, rather than sharing responsibility for Prime enrollees with the support contractor, the MTFs will have full funding and full responsibility for their Prime enrollees and will pay the contractor for care required from the contractor’s network. One result of this approach will be to reduce reliance on resource sharing to lower support contract costs; but it also adds new challenges and does not eliminate, and may even exacerbate, resource sharing problems. Giving the MTFs direct financial control for TRICARE Prime enrollees is aimed at providing them with clearer incentives to efficiently manage care use and to behave more like private sector HMOs. DOD saw the need for this while still arranging the earlier contracts and later viewed it as a way to relieve emerging resource sharing problems. But, under revised financing’s current approach, DOD will continue sharing care costs with the contractor for beneficiaries not enrolled with the MTFs. Also, the MTFs will continue working with the new contractor toward signing resource sharing agreements. Thus, to the extent contractor reliance on resource sharing continues, the difficulties already experienced are also likely to continue. DOD believes revised financing gives MTFs added cost-saving incentives to engage in resource sharing by reducing the need for referral of their enrollees to the TRICARE support contractor. However, revised financing may add further complexity to resource sharing’s use. Because the new approach’s potential effects on resource sharing are not now known, TRICARE contract offerors must make their own assumptions and projections about such effects. Much will depend, for example, on how MTFs’ funding levels may change and the consequent alterations in their beneficiary service priorities. 
And the added extent of funding going to MTFs rather than to contractors will in turn depend on the MTFs’ capacities and ability to enroll beneficiaries and serve as their primary care manager—all of which have yet to be determined. Revised financing’s effects on resource sharing are uncertain and were at issue during the two affected contracts’ bidding processes. One bidder, a current TRICARE contractor, wrote to DOD to clarify what portion of the funds the MTFs and contractor respectively would control and how revised financing would affect resource sharing. In earlier discussions, the bidder told DOD the company could be creative and assume resource sharing opportunities would still exist or assume none would exist. DOD replied that the new approach’s effects on resource sharing were uncertain but that the successful bidder should work creatively with the MTFs to achieve resource sharing. DOD also amended the request for a bid proposal to provide more description and examples of how revised financing and resource sharing might be integrated. But, as with resource sharing under the current contracts, the new approach’s actual effects will not be known until it is implemented. While DOD officials in regions with contracts generally favored revised financing, they expressed concerns about poor accounting systems and lack of data on patient care costs and outcomes that MTFs will need to become effective, cost-competitive providers. Some had concerns about the general lack of MTF health care management experience and control over their staffing. MTF officials in regions about to apply revised financing have stated that they recognize their increased need for accountability, adequate staffing to support their enrollees, and better information systems to support resource sharing decisions. While theoretically possible, revised financing’s potential has yet to be demonstrated. 
Also, while revised financing reduces reliance on resource sharing, it does not eliminate or necessarily alleviate resource sharing problems and may exacerbate such problems under the new contracts. For the future, DOD plans other changes to simplify TRICARE contracting and MTF budgeting. The changes would incorporate revised financing and further reduce reliance on resource sharing but also would have far broader implications for current and future contracts. Adding to such TRICARE initiatives’ challenges are changes in DOD’s top leadership in Health Affairs. DOD is now considering alternative structures for future contracts, on the basis of our recommendations and those from lead agents, contractors, and others in the health care industry. The alternatives include smaller, shorter, and less prescriptive contracts, allowing contractors to rely more on their own “off-the-shelf” commercial practices. DOD has held several forums to discuss ideas and the alternative approaches’ potential advantages and disadvantages. The issues involved include effects on beneficiary choice of providers, assurance of contractor qualifications, quality of care, DOD and contractors’ risk sharing, administrative complexity, adequacy of bid competition, and DOD costs. No final decisions have been made yet. The new contract structures likely will include an approach similar to revised financing. Basically, each MTF would be funded to cover all its enrollees in TRICARE Prime, and the contractor would be funded for all other beneficiaries. Thus, each MTF and contractor would be responsible for its share of the beneficiary population’s care costs, and would reimburse each other when one provides services to the other’s beneficiaries. For example, the contractor would reimburse an MTF for caring for a nonenrollee, and one MTF would reimburse another upon referring its own enrollee for care there. 
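The reciprocal funding and reimbursement structure described above can be illustrated with a minimal sketch. The parties, episode costs, and settlement logic below are hypothetical, for illustration only, and are not DOD's actual accounting mechanism.

```python
# Minimal sketch (hypothetical figures) of the reciprocal-reimbursement
# structure described above: each MTF is funded for its TRICARE Prime
# enrollees, the contractor for all other beneficiaries, and each party
# pays another when it treats a beneficiary the other is funded for.

from collections import defaultdict

def settle(encounters):
    """Total reimbursements owed between parties.

    `encounters` is a list of (provider, responsible_party, cost) tuples:
    the responsible party (who holds the beneficiary's funding) owes the
    provider for that episode of care. Care a party provides to its own
    beneficiaries generates no cross-payment.
    """
    owed = defaultdict(float)  # (payer, payee) -> amount owed
    for provider, responsible, cost in encounters:
        if provider != responsible:
            owed[(responsible, provider)] += cost
    return dict(owed)

# Hypothetical episodes of care:
encounters = [
    ("MTF-1", "Contractor", 12_000),  # MTF-1 treats a contractor beneficiary
    ("MTF-2", "MTF-1", 8_000),        # MTF-1 refers its enrollee to MTF-2
    ("Contractor", "MTF-1", 5_000),   # MTF-1 enrollee uses the contractor network
]
print(settle(encounters))
```

In this sketch the contractor would owe MTF-1 for the nonenrollee's care, while MTF-1 would owe MTF-2 for the referral and owe the contractor for its enrollee's network care, mirroring the examples in the text.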
One aim of the funding approach would be to eliminate reliance on resource sharing as a major source for TRICARE savings. In April 1997, DOD accelerated the planned change in MTF budgeting and contract financing and announced it would be effective at the start of fiscal year 1998. This means that not only will the changes apply to future contracts but also current contracts will have to be amended. DOD expects that changing the current contracts may have cost implications of unknown extent at this time for both the government and the contractors. Commenting on a February 1997 DOD policy draft, one contractor said that any change that would avoid reliance on resource sharing, bid price adjustments, and resulting MTF disincentives would be positive. The contractor added, however, that DOD needs to involve the contractors in weighing the new budgeting and financing approach’s assumptions and risks to ensure it will work; otherwise contract prices may increase to cover the unknown risks. Another contractor said that many of the details had yet to be worked out and that two remaining questions are how funding will be split between MTFs and contractors and how resource sharing will be affected. Such budgeting and contracting changes reach far beyond an expectation that they will reduce the need for resource sharing. This notwithstanding, DOD lacks a simple, stable, long-term approach to TRICARE budgeting and contracting that provides clear managed care incentives and accountability and avoids the complexities and disincentives of resource sharing. As the contractors indicated, whether the contemplated system changes succeed will depend upon how these details are worked out and how well DOD and the contractors manage the system and support each other. In addition, both the Assistant Secretary of Defense (Health Affairs) and the Principal Deputy Assistant Secretary, who have actively and forcefully led TRICARE since its beginning, have left their positions. 
The former Principal Deputy has taken the Assistant Secretary position in an acting capacity. The Principal Deputy position has been filled, but to date no successor to the Assistant Secretary has been nominated. These top DOD leadership changes may add to the challenge of successfully reducing reliance on resource sharing and adopting broader budgeting and contracting changes. DOD officials acknowledged that resource sharing has not achieved the expected savings, but told us that lower than expected contract award amounts have led to more than $2 billion in other savings. They explained that the contract award amounts consistently have underrun DOD’s projections, required before each contract is awarded, of what CHAMPUS costs would be over the contracts’ lives. As an example, one region’s estimated CHAMPUS costs without the contract would have been about $2.1 billion, compared with the contract award amount of $1.8 billion; so, according to DOD, the savings would be $0.3 billion. These officials also said that overall health care data show downward MTF cost trends, further supporting managed care’s cost-saving effects—despite resource sharing’s limited showing. For example, they provided a graph showing that both direct care and CHAMPUS total costs declined steadily—by 10 percent overall—from fiscal years 1991 through 1996. While assessing TRICARE’s overall cost-effectiveness was beyond our review’s scope, there are reasons at this time to question the currency and analytical completeness of DOD’s savings claims. First, DOD’s preaward estimates of CHAMPUS costs, a key component of its savings claim, may now be outdated. The first estimate—for the Northwest Region contract—was based on cost data prior to August 1993. Over the 4 years since then, changes in such areas as benefits and allowed payments to providers would affect the results of that estimate. 
Second, in a separate review, we found that as of May 1997, the existing five contracts had been modified as many as 350 times, with the resulting potential for substantial contract cost increases attributable to TRICARE. These potential cost increases, just like the potential losses from lack of resource sharing, also would offset DOD’s projected savings. Furthermore, we recently questioned DOD’s cumulative 5- to 7-percent utilization management savings estimate in its nearly $15 billion to $18 billion health care budget totals for fiscal years 1998 to 2003. We reported that DOD lacked a formal methodology for developing the estimates, and we concluded overall that future health care costs likely would be greater. Lastly, DOD’s available health care cost data do not indicate whether apparent downward shifts might be due to managed care effectiveness or to such other factors as reductions in allowed provider payments that would have occurred in TRICARE’s absence. Thus, we support DOD’s plans to undertake a more current and complete cost analysis of MTF direct and contractor-provided care, based on recent program data, to establish the bottom line on TRICARE’s current and future-year cost-effectiveness. At their present results levels, for the existing contracts, DOD and the support contractors will achieve only about 5 percent of the expected $700 million in new savings, potentially causing shared financial losses and higher TRICARE costs. Progress in achieving new agreements is slow, and neither DOD nor the contractors know what resource sharing potential remains under these contracts. While DOD now seems to be moving toward a view that the approach will not work as designed, the contractors and DOD are still pursuing about 170 resource sharing possibilities in an effort to discover additional savings with which to reduce their costs. Many problems have contributed to resource sharing’s lack of success.
DOD’s policies, processes, and tools for use at the local level, as well as the degree of DOD and contractor collaboration, have not yet been sufficient to effectively resolve the approach’s obstacles. While revised financing is feasible though unproven, its potential effects on resource sharing and on other expected savings under the latest two contracts remain to be seen. Under the new approach, resource sharing may be reduced, but its problems will remain and may become more complex as new MTF and contractor management responsibilities are introduced. DOD’s more broadly proposed MTF budgeting and support contracting changes would greatly affect future and current contracts, including further reducing resource sharing. Clearly, a simple, accountable, incentive-based approach is lacking, yet the potential effectiveness of DOD’s considered changes will largely depend on how well they are designed and implemented. As such changes further reduce resource sharing as a potential savings mechanism and as DOD looks to alternative savings sources, lessons learned from resource sharing will need to be carefully heeded and skillfully incorporated. Carrying such lessons forward may be particularly challenging as DOD changes the top leadership in Health Affairs. DOD officials acknowledged that resource sharing has not produced, and likely will not produce, the projected savings, but contended that TRICARE’s managed care approach has produced offsetting savings in other ways. We question, however, the currency and analytical completeness of these claims and thus believe it is important that DOD proceed with its plans to reestimate TRICARE costs versus projected costs without TRICARE.
We recommend that the Secretary of Defense direct the Assistant Secretary of Defense (Health Affairs) to determine whether any further resource sharing savings remain under the current contracts and, as appropriate, consummate promising agreements while seeking other mutually acceptable alternatives to resource sharing; determine, to the extent the new contracts with revised financing use resource sharing, whether any such agreements are available and, as appropriate, enter promising agreements while seeking effective alternatives to resource sharing; and incorporate, while planning for and implementing the next wave of MTF financing and contract management initiatives, such resource sharing lessons learned as the need for coherent, timely policies; clearly understood procedures; mutually beneficial incentives; and effective collaboration. DOD agreed with our recommendations and said, without elaborating, it had already implemented each of them. Nevertheless, while agreeing with the recommendations, DOD disagreed with the way we presented certain issues. DOD said, for example, that the report does not note the tremendous resource sharing success during the “CHAMPUS Reform Initiative” (CRI) in California and Hawaii (which preceded TRICARE) and does not note the continued success in region 9 (Southern California). Thus, DOD said the reader is led to assume that problems occurred in other regions because resource sharing was implemented on a broad scale without the requisite examination. We did not evaluate CRI resource sharing because our focus was whether resource sharing under TRICARE was producing new savings to help offset added TRICARE costs. Also, as the report notes, DOD’s reference to continued success in region 9 is basically a conversion of CRI resource sharing agreements, which do not reflect new savings under TRICARE. 
Furthermore, DOD said the resource sharing program as currently structured was based on the best available information at the time and that the report should note that the TRICARE support contractors came to the same conclusion as DOD regarding resource sharing’s potential cost-effectiveness, even with their years of experience with managed care. But our report does not question whether DOD’s structure for TRICARE resource sharing was based on the best information available at the time. Instead, the report discusses the complex issues that arose during the implementation of resource sharing. Also, the report notes that the contractors, too, initially concluded that resource sharing would be cost-effective. DOD also said the report treats resource sharing in isolation rather than as one component of a comprehensive system that has proven to be cost-effective. While we focused on resource sharing because it was expected to be a major cost-saving mechanism, we also noted that it was one of several ways in which DOD expected to achieve savings to offset TRICARE’s costs. DOD went on to state that efficiencies not achieved through resource sharing were otherwise achieved by increased MTF capability and efficiency brought about by TRICARE. As the report points out, during our review DOD presented information showing downward MTF cost trends, but these data do not show whether the trends were due to TRICARE managed care efforts or whether the costs would have declined anyway in TRICARE’s absence. DOD said managed care support (MCS) contracts have resulted in savings of $2.3 billion when compared with projected costs without the contracts. It said that we acknowledged this savings estimate but that our placement of it in the report diluted its significance.
While a detailed review of overall TRICARE savings was beyond the scope of our review, as our report states, we question that savings estimate’s currency and analytical completeness, and we support DOD’s plans to undertake more current and complete analysis of TRICARE’s cost-effectiveness. We have revised the report to discuss DOD’s overall savings estimate in a separate section. DOD took issue with the report statement that, while revised financing reduces reliance on resource sharing, it does not eliminate or necessarily alleviate resource sharing problems and may exacerbate such problems under the new contracts. DOD said revised financing, in conjunction with its planned change to enrollment-based capitated budgeting for MTFs, increases incentives for MTFs to engage in resource sharing by expanding MTF funding while reducing support contractor costs. We agree that revised financing, in conjunction with enrollment-based capitation, has the potential to create more incentive for the MTFs to engage in resource sharing and may similarly provide incentive to the support contractors. Still, those approaches add their own complexities and do not automatically eliminate the difficulties experienced with resource sharing. As we said in the report, the approaches are still being defined and are yet to be tested. Nonetheless, we revised the relevant text to better recognize DOD’s views on revised financing’s potential. DOD’s comments in their entirety are included as appendix V. We also obtained comments from the three current TRICARE support contractors. All expressed general agreement with the report’s overall content and completeness of subject coverage. In its comments, one contractor also offered a minor technical comment about lack of clarity in a statement defining limits on resource sharing agreement profits, which is part of the procedural description in appendix IV. 
The contractor pointed out, however, that there is no misunderstanding between it and DOD as to what is intended. We made no change because the appendix was presented to illustrate DOD’s guidance as it was offered. A second contractor expressed concern about its limited progress in resource sharing and about the problems and lack of success in resource sharing elsewhere, as conveyed in our report, and expressed hope that the report would help bring about favorable resolution of the problems. While stating that the report otherwise accurately portrays the resource sharing situation, the third contractor disagreed with the report’s statement that the prospects for additional resource sharing agreements are questionable. The contractor informed us that it had recently made a presentation to DOD on resource sharing shortfalls, but it also asserted that, with the right incentives and education at the MTF commander level, resource sharing is still an extremely viable program with current savings opportunities. On the basis of our analysis of the problems and overall limited resource sharing progress, the prospects for reaching new agreements seem to us to be limited. Still, the report urges DOD to identify and pursue promising resource sharing opportunities while also seeking other mutually acceptable alternatives to resource sharing. We are sending copies of this report to the Secretary of Defense and interested congressional committees, and will make copies available to others upon request. Please contact me at (202) 512-7111 or Dan Brier, Assistant Director, at (202) 512-6803 if you or your staff have any questions concerning this report. Other major contributors are Elkins Cox, Evaluator-in-Charge; Allan Richardson; Beverly Brooks-Hall; and Sylvia Jones. 
To assess the Department of Defense’s (DOD) experiences with resource sharing, we visited 5 (of the 7) regions where TRICARE support contractors had begun delivering health care and 11 military treatment facilities (MTF) within those regions. We also met with the two civilian TRICARE contractors that were providing health care support to the MTFs. A third contractor began providing health care on April 1, 1997, in two other regions (since combined into one region); because of the newness of those operations, we met with this contractor only briefly and did not include it in our detailed assessment of resource sharing progress and problems. Two other contracts, covering the remaining three regions, were still pending at the time of our review. We reviewed DOD and contractor projections of resource sharing costs and savings, TRICARE policies and guidance, and various efforts by DOD to promote the overall resource sharing effort. This included discussions with officials of the Office of the Assistant Secretary of Defense for Health Affairs, DOD’s TRICARE cost consultant, and contractor officials. At the contractors’ offices, we reviewed individual resource sharing project files to analyze the progress being made and to determine the specific reasons why some potential agreements were not being implemented. The project files consisted of both agreements existing before TRICARE, referred to as “partnerships,” and new resource sharing agreements. Many of the partnership agreements were converted to resource sharing agreements as TRICARE became operational. To assess progress in achieving new savings under TRICARE, we identified the expected savings from the new agreements and compared the result to DOD’s overall projected TRICARE savings. We discussed information, training, and other needs with DOD officials at DOD’s Washington, D.C., headquarters and at regional and MTF levels, focusing on the factors that affected progress in resource sharing.
Especially at the MTF level, we discussed officials’ understanding of, and confidence in, the financial aspects of resource sharing agreements, including effects on the MTF workload and bid price adjustment. Through discussions with DOD and contractor officials and examination of records, we reviewed their experiences with planning and establishing resource sharing agreements, including the problems they encountered. We also discussed with DOD and contractor officials alternatives DOD has undertaken for the current contracts as well as policies and plans DOD has devised or is considering that will affect the future of resource sharing. At the completion of our work, we briefly reviewed DOD-provided data suggesting that TRICARE savings other than from resource sharing were occurring that more than offset the resource sharing savings shortfalls we had found. Determining TRICARE’s overall cost-effectiveness was beyond the scope of our review. Nonetheless, upon reviewing the data, we asked follow-up questions of DOD, obtained status information on DOD’s planned and ongoing internal and contracted studies aimed in whole or in part at determining TRICARE’s cost-effectiveness, and reviewed pertinent information from our other work in process and our issued reports. We conducted our review between June 1996 and May 1997 in accordance with generally accepted government auditing standards. To further explain the resource sharing agreement development process, the following information was condensed from selected guidance offered by lead agents. The guidance includes preparation of proposals, a chart showing the flow of agreement development (fig. IV.1), and application of a financial analysis worksheet.
The Resource Sharing Program is a mechanism for providing contracted civilian health care personnel, equipment, and/or supplies to enhance the capabilities of MTFs to provide necessary inpatient and outpatient care to beneficiaries of the Civilian Health and Medical Program of the Uniformed Services (CHAMPUS). Resource sharing is a cooperative activity between the contractor, the lead agent, and the MTF commander. A variety of information sources and databases may be examined in looking for and evaluating resource sharing opportunities that may subsequently be developed into resource sharing proposals and agreements. Analysis of CHAMPUS utilization and cost data may identify diagnoses, procedures, or specialty health care that account for significant numbers of patient encounters or high costs. A variety of reports may be useful in this regard. CHAMPUS Cost and Utilization Reports. These reports are generated by the Office of the Civilian Health and Medical Program of the Uniformed Services from health care service record data to show CHAMPUS costs and utilization, by type of health care service, for each catchment area. Those services showing high costs and/or utilization may be excellent candidates for resource sharing consideration. Non-Availability Statement (NAS) Reports. NASs authorize beneficiaries to seek certain care in civilian facilities when the MTF cannot provide the care. These reports show the numbers and types of NASs generated by each MTF. Those health care services showing large numbers of NASs being issued over time may be excellent candidates for resource sharing consideration. Health Care Finder (HCF) Referral Reports. These reports show the numbers and types of referrals of CHAMPUS-eligible beneficiaries to both MTF and civilian health care providers. High numbers of referrals to civilian providers for specific health care services may indicate resource sharing opportunities. CHAMPUS Ad Hoc Claims Reports.
CHAMPUS historical data may be obtained from claims data files. These data can be tailored to provide greater detail for the types of services being provided under CHAMPUS. Information from CHAMPUS Cost and Utilization Reports, NAS Reports, and HCF Referral Reports may indicate those health care specialties that warrant more detailed examination to identify potential resource sharing opportunities. MTF Capability Reports. These reports are developed by HCFs and indicate MTF capabilities. They are used by the HCFs to guide referrals into and out of MTFs. They may also provide insight into potential resource sharing opportunities. Composite Health Care System Professional Activity Study Reports. These reports may be used to identify gaps in MTF services or high referral patterns from the MTF to outside health care providers. These gaps and referral patterns may indicate additional opportunities for resource sharing. Network Provider Directory. This directory provides the numbers and types of health care providers by location. Gaps and shortages in the civilian provider network may be identified that may indicate resource sharing opportunities. MTF capabilities, staffing, workload, and backlog—both current and projected—should be identified and evaluated to determine potential opportunities for resource sharing. MTF capabilities may be assessed using the following reports: MTF Capability Reports. (See prior description of reports.) MTF Staffing Reports. These reports are developed by MTFs and show the numbers and types of personnel assigned to, and employed by, the MTF. Careful review of staffing reports over time may indicate staffing trends, which may provide insight into both current and future resource sharing opportunities. A baseline report of regionwide staffing, by MTF, was compiled from computer tapes provided by the government to the contractor for fiscal year 1993. MTF Operations Study. 
This report shows the historical number of health care services provided by MTFs for both inpatient and outpatient services. This report is derived from data compiled on a computer tape provided to the contractor by the government for fiscal year 1993. This information can be used to identify both current and future opportunities for resource sharing. Potential Resource Sharing Opportunities List. This list, developed during site visits at each MTF, provides resource sharing opportunities that had been identified by the MTF, after examining the demand for services and identifying shortfalls in meeting those demands. Once a resource sharing opportunity has been identified, the MTF completes a written request for consideration of the potential resource sharing agreement (RSA). The proposal is to show the project title, requesting MTF, point of contact, and desired start date. The expected accomplishment is to be described. For example, “This project is intended to expand Family Practice services within the hospital. This MTF currently averages 200 ambulatory care visits a month, and the implementation of this project should increase the monthly visits by an additional 200 visits. This should decrease the number of NASs issued and the concomitant CHAMPUS visits and costs.” The proposal is to include the estimated resources required, including personnel, equipment, and supplies, along with the following: Direct Workload. Provide the number of outpatient visits and/or inpatient admissions, by type of CHAMPUS beneficiary (active duty dependent [ADD], or nonactive duty dependent [NADD]) that the project is expected to provide per year. Note that the NADD category includes retirees, family members of retirees, survivors of deceased service members, and others. If possible, provide a detailed breakdown of workload numbers by current procedural terminology (CPT) or diagnosis-related group (DRG). If possible, provide the estimated cost to the MTF for each CPT and DRG code. 
Ancillary Workload. Provide the anticipated additional ancillary workload that the project will develop for the MTF, by type of CHAMPUS beneficiary (ADD or NADD), per year. If possible, provide a detailed breakdown of ancillary workload numbers by CPT or DRG. If possible, also provide the estimated cost to the MTF for each CPT and DRG code. MTF Cost/Expense Data. Provide specific Medical Expense and Performance Reporting System (MEPRS) cost elements for the clinical function of the project. If possible, provide a detailed breakdown of MEPRS cost elements by CPT or DRG. CHAMPUS Workload Data. Provide CHAMPUS workload, within the catchment area, currently being accomplished for the clinical function of the project. If possible, provide a detailed breakdown of CHAMPUS workload and cost data by CPT or DRG. Signature and Date. Provide signature of the MTF commander, or his agent, and the date the document was signed. Project Title. Internal Medicine Augmentation and Support. Purpose. The MTF had three internists assigned in fiscal year 1994, two in fiscal year 1995, and will decrease to one by June 1996. MTF workload has shown a concomitant decrease in the average number of outpatient visits, admissions, and occupied bed days. The number of NASs and visits to civilian providers under CHAMPUS has risen to absorb the demand for internal medicine services in the face of decreasing supply within the MTF. This proposed RSA, if approved, would expand the internal medicine services within the MTF and should increase the number of monthly outpatient visits by approximately 900 per month and the number of inpatient admissions by 37 per month. These increases should avoid a shift of approximately 425 outpatient visits per month to CHAMPUS with the loss of a military provider. They should also add an additional 475 outpatient visits per month to the MTF workload. 
Recognizing that approximately 44 percent of our CHAMPUS beneficiaries are ADDs and that 56 percent are NADDs, and using the appropriate volume trade-off factors, it should also reduce the number of visits that had previously been paid for through CHAMPUS by approximately 212 visits per month. Resources Required. To implement the proposed RSA, additional providers and support personnel will be required. Also, a financial offset for increased costs in ancillary services and supply costs will be necessary. Facility space and equipment are adequate to support the additional workload. Personnel. Internist (board certified or eligible), Nurse (Licensed Vocational Nurse), with attached example of position description. Equipment. None. Supplies. No direct supplies, but, based on fiscal year 1995 MEPRS data, reimbursement for the costs of ancillary services and supplies for outpatient visits above that achieved during the data collection period, fiscal year 1995 (10,188 outpatient visits per year). Estimated at up to 5,700 visits. (For outpatient visits, example shows costs per procedure and per visit for pharmacy, laboratory, radiology, medical supplies, and other supplies.) Also, reimbursement for the cost of ancillary services and supplies for inpatient admissions above that achieved during the data collection period, fiscal year 1995 (404 admissions per year). Estimated at up to 226 admissions. (Example shows ancillary service and supply costs—based upon fiscal year 1995 MEPRS data—per procedure and per admission for same categories as for outpatient admissions.) MTF Workload Data. (Example shows internal medicine direct workload, based on fiscal year 1995 MEPRS data, in terms of outpatient visits and inpatient admissions. It shows also the internal medicine ancillary workload, based on fiscal year 1995 MEPRS data, in terms of pharmacy prescriptions, laboratory procedures, and radiology films per year for outpatient visits and inpatient admissions.) 
MTF Cost/Expense Data. (Example refers to attachments for MEPRS data for outpatient and inpatient care, based on fiscal year 1995 MEPRS data.) CHAMPUS Workload Data. (Example refers to attachment for CHAMPUS claims data for this catchment area based on claims data from September 1994 through August 1995.) The standardized Internal Resource Sharing Financial Analysis Worksheet is structured to take into account three different types of proposed agreements: (1) the recapture of new workload, (2) the conversion of a partnership agreement, and (3) the replacement of a lost provider. For all of these different situations, the resource sharing worksheet is designed to help the MTF answer two questions: (1) Is the proposed agreement projected to be cost-effective and (2) is the proposed contractor workload credit appropriate? An agreement is deemed cost-effective from the Military Health Services System (MHSS) perspective if the MHSS cost for the agreement (the sum of the MTF’s marginal expenditures and the contractor’s expenditures for the proposed RSA) is less than the government’s share of projected CHAMPUS savings. Assuming the cost-effectiveness test is satisfied, there are two additional criteria for evaluating whether the contractor’s workload credit is appropriate. First, the contractor credit shall not exceed the full credit (that is, 100 percent credit) that would be counted under the Guidelines for Resource Sharing Workload Reporting. Second, a prospective profit rate limit applies to RSAs for which the savings exceed those assumed in the contractor’s best and final offer. For these agreements, the contractor’s projected profit rate on resource sharing expenditures (as calculated by the worksheet) should not exceed the contractor’s overall proposed health care profit rate (on a prospective basis). 
For example, if a contractor proposed a 5-percent profit rate for health care costs, then the projected contractor profit on resource sharing expenditures exceeding the up-front bid price assumptions should also not exceed 5 percent. A prospective profit limit also applies to an RSA that converts an inpatient partnership agreement that existed in the data collection period (DCP) and for which CHAMPUS admissions were not counted in the DCP data. (In this case, workload credit should be negotiated as necessary to produce a projected contractor net gain approximately equal to zero, since otherwise the contractor would receive an upward price adjustment for additional NASs simply for maintaining the same workload done in the DCP under the partnership agreement.) If either of these two questions cannot be answered “yes” for the proposed RSA, then the MTF should either renegotiate some of the terms of the proposed agreement (for example, the contractor’s workload credit) or consider other alternatives to the proposed agreement (for example, the task order resource support option). In addition to answering both previous questions for resource sharing in isolation, the resource sharing worksheet is designed to project the cost impact of implementing the agreement under task order resource support rather than resource sharing, including a summary comparison of cost-effectiveness under the two options. Similarly, the worksheet shows the relative financial impact on the managed care support (MCS) contractor of resource sharing versus resource support. (Details on resource support analysis are excluded from this condensed version of the guidance.) Under the MCS contracts, resource sharing savings can accrue to the government in three ways, each of which is addressed in the worksheet.
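The cost-effectiveness test and the prospective profit-rate limit described above reduce to two simple comparisons. The sketch below is a hypothetical illustration; the function names and dollar figures are our own, not part of the guidance:

```python
def is_cost_effective(mtf_marginal_cost, contractor_cost, govt_share_of_savings):
    """MHSS-perspective test: the MHSS cost of the agreement (MTF marginal
    expenditures plus contractor expenditures) must be less than the
    government's share of projected CHAMPUS savings."""
    return (mtf_marginal_cost + contractor_cost) < govt_share_of_savings

def within_profit_limit(projected_rs_profit, rs_expenditures, proposed_profit_rate):
    """Prospective limit: the contractor's projected profit rate on resource
    sharing expenditures may not exceed its overall proposed health care
    profit rate (5 percent in the guidance's example)."""
    return (projected_rs_profit / rs_expenditures) <= proposed_profit_rate

# Hypothetical agreement: $300,000 MTF marginal cost and $500,000 contractor
# cost against a $1,000,000 government share of projected CHAMPUS savings.
print(is_cost_effective(300_000, 500_000, 1_000_000))   # True: $800,000 < $1,000,000

# Projected $20,000 profit on $500,000 of resource sharing expenditures,
# against the 5-percent proposed health care profit rate (4% <= 5%).
print(within_profit_limit(20_000, 500_000, 0.05))       # True
```

If either check fails, the guidance directs the MTF to renegotiate terms such as the contractor workload credit or to consider the task order resource support option instead.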
First, for those resource sharing savings investments assumed as part of the contractor’s best and final offer proposal, the contractor’s bid price includes a cost-per-eligible trend factor for resource sharing savings (that is, claims avoidance). Net of the contractor’s expected expenditures on resource sharing, this creates a lower up-front bid price (claims avoidance - resource sharing expenditures = net savings). These net savings are calculated in section I of the worksheet on an average basis (that is, using the contractor’s best and final offer assumption about the average savings to cost ratio for resource sharing). Second, if partial contractor workload credit is negotiated, the government will realize savings in the bid price adjustment for MTF utilization (the “O” factor). This can result in a more favorable bid price adjustment for the government. These savings are calculated in section II of the worksheet. Third, the government will also realize 0, 80, 90, or 100 percent of any residual savings in the risk-sharing corridor, depending on which tier of the risk-sharing corridor applies to the bid price adjustment for the option period. (The contract’s risk-sharing provisions are specified in detail in section G-5 and in appendix C in the Bid Price Adjustment Procedures Manual.) This will result in the government sharing any risk-sharing savings realized by the contractor. These savings are calculated in section IV of the worksheet. MTF commanders or their designated representatives are required to complete the standardized Resource Sharing Financial Analysis Worksheet in negotiating each proposed RSA, in addition to any other analyses prepared by the contractor or the MTF (as specified in section G-5g(2) of the contract). In completing the resource sharing worksheet, users should not be lulled into a false sense of security by focusing on numerical results rather than on underlying assumptions. 
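The first and third of the savings channels just described reduce to simple arithmetic; the second (the “O” factor bid price adjustment) is omitted here because its formula is specified in the Bid Price Adjustment Procedures Manual. The function names and figures below are our own illustrative assumptions:

```python
def bid_price_net_savings(claims_avoidance, rs_expenditures):
    """Channel 1 (worksheet section I): net savings already built into the
    contractor's bid price -- claims avoidance minus resource sharing
    expenditures equals net savings."""
    return claims_avoidance - rs_expenditures

def residual_risk_share(residual_savings, govt_tier):
    """Channel 3 (worksheet section IV): the government realizes 0, 80, 90,
    or 100 percent of residual savings, depending on which tier of the
    risk-sharing corridor applies to the option period."""
    if govt_tier not in (0.0, 0.80, 0.90, 1.00):
        raise ValueError("government tier must be 0, 80, 90, or 100 percent")
    govt_gain = residual_savings * govt_tier
    return govt_gain, residual_savings - govt_gain

# Hypothetical figures: $1,000,000 of claims avoidance against $800,000 of
# resource sharing expenditures yields $200,000 of net up-front savings.
print(bid_price_net_savings(1_000_000, 800_000))

# $100,000 of residual savings in the 90-percent government tier.
print(residual_risk_share(100_000, 0.90))
```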
The accuracy of assumptions such as the number of admissions and/or visits to be recaptured, the MTF’s marginal costs in recapturing these units, and the costs avoided in CHAMPUS are crucial to the accuracy of the spreadsheet’s projections. If estimates are too optimistic, even though the spreadsheet may project net gains for the government, in reality the government may experience net losses. Of course, overly pessimistic estimates can lead the government to miss out on cost-effective opportunities. To use the Financial Analysis Worksheet, the MTF must enter the boxed values on the “MTF Inputs” page. These include (1) the type of RSA, (2) whether the agreement converts an inpatient partnership agreement that previously existed, (3) the option period (year) covered by the proposed agreement, (4) the number of outpatient visits or inpatient admissions enabled by the agreement, (5) the expected government risk-sharing responsibility percentage, (6) the estimated volume trade-off factor used to estimate CHAMPUS avoidance savings, (7) the estimated average government cost per unit for admissions and/or outpatient visits avoided in CHAMPUS for care covered by the agreement, (8) the expected contractor expenditure under the agreement, (9) the projected MTF marginal expenditures, (10) the contractor resource sharing workload credit assumed in the analysis, (11) the sum of the projected resource sharing expenditures for those agreements approved for the lead agent region as a whole, and (12) the expected MTF payment for the contractor’s costs and the MTF’s marginal costs if the resource is acquired under task order resource support rather than resource sharing. As part of the negotiation of the RSA, the MTF commander and the contractor must agree on each estimate or assumption entered on the “MTF Inputs” page before the worksheet is finalized. The remaining sections of the Financial Analysis Worksheet do not require the MTF to enter any data or assumptions. 
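The twelve entries on the “MTF Inputs” page can be sketched as a small data structure. The field names below are our own shorthand for inputs (1) through (12), not the worksheet’s actual labels, and the avoidance calculation is a plausible combination of inputs (4), (6), and (7) rather than the worksheet’s published formula:

```python
from dataclasses import dataclass

@dataclass
class MTFInputs:
    # (1)-(3): agreement characteristics
    rsa_type: str                      # new workload, partnership conversion, or provider replacement
    converts_inpatient_partnership: bool
    option_period: int                 # contract option year covered by the agreement
    # (4)-(7): workload and CHAMPUS avoidance assumptions
    units_enabled: int                 # outpatient visits or inpatient admissions enabled
    govt_risk_share_pct: float         # expected government risk-sharing responsibility
    volume_tradeoff_factor: float      # used to estimate CHAMPUS avoidance savings
    avg_govt_cost_per_unit: float      # per CHAMPUS admission/visit avoided
    # (8)-(12): cost and credit assumptions
    contractor_expenditure: float      # expected contractor spending under the agreement
    mtf_marginal_expenditure: float    # projected MTF marginal spending
    contractor_workload_credit_pct: float
    regional_rs_expenditures: float    # sum of approved agreements for the lead agent region
    task_order_payment: float          # MTF payment if acquired via task order resource support

    def projected_champus_avoidance(self) -> float:
        """Illustrative estimate: units recaptured, discounted by the volume
        trade-off factor, times average government cost per unit avoided."""
        return self.units_enabled * self.volume_tradeoff_factor * self.avg_govt_cost_per_unit
```

As the guidance emphasizes, every one of these entries is a negotiated estimate that the MTF commander and the contractor must agree on before the worksheet is finalized.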
Depending on the results shown on the “summary” page for resource sharing, however, it may be appropriate to revise some of the MTF inputs (for example, the contractor workload credit) on an iterative basis. The “Summary—Resource Sharing” page lists the key results for the proposed agreement under resource sharing. This summary shows (1) whether the proposed contractor workload credit is appropriate, (2) whether government gains exceed government expenditures, (3) the projected contractor net gain under the RSA, (4) the projected government net gain, and (5) whether the proposed agreement reduces the contractor’s actual costs even if the contractor’s net gain is negative due to the average savings assumed up front in the contractor’s best and final offer. (Because the contractor reduced its best and final offer bid price based on an assumption about average savings for each RSA, some actual agreements are expected to produce savings that are smaller than this assumed average, but are still positive. This perspective is particularly relevant for conversion of partnership agreements, since the contractor is not likely to achieve new savings simply for continuing previous partnership agreements under the same terms as RSAs. The net contractor gain after taking account of average up-front savings from the best and final offer is likely to be negative, yet converting a cost-effective partnership agreement allows the contractor to avoid an increase in CHAMPUS claims costs that would otherwise result.) If the “Summary—Resource Sharing” page shows that the contractor workload credit is not appropriate and/or government gains do not exceed government expenditures, then one option for the MTF is to adjust the proposed contractor workload credit on an iterative basis until the proposed agreement satisfies both requirements. 
It may also be appropriate for the MTF to renegotiate other terms of the proposed agreement (for example, the level of resources to be provided by the contractor). If it is not possible to determine a workload credit percentage that results in a “yes” response to both questions, given all of the other input assumptions agreed upon by the MTF commander and the contractor, then the proposed RSA should not be approved (unless the lead agent determines that the proposed agreement still warrants approval due to compelling circumstances). The resource sharing worksheet page has five sections. Section I estimates the net resource sharing savings under this agreement that would already be reflected in the contractor’s proposed bid price, based on the average-savings-to-cost ratio used to develop the resource sharing savings trend factor in the contractor’s best and final offer. Section II estimates the effect of the RSA, including the contractor’s workload credit, on the MTF utilization adjustment in the bid price adjustment formula (that is, the “O” factor adjustment). Section III estimates the actual savings (that is, cost avoidance) in CHAMPUS health care costs as a result of the RSA. Section IV estimates the residual gain in CHAMPUS (that is, the difference between the adjusted bid price for health care costs and the actual health care costs) under the proposed RSA. The section also estimates the government and contractor portions of these gains, since the gains would be subject to risk sharing between the government and contractors. Section V provides the two necessary results of this analysis (for an assessment of resource sharing in isolation). First, is the contractor credit for resource sharing workload assumed in the analysis appropriate? Second, does the analysis indicate that the proposed RSA would be cost-effective for the government from the MHSS perspective?
Pursuant to a congressional request, GAO reviewed the Department of Defense's (DOD) use of support contracts to help deliver health care and to control costs, focusing on: (1) whether resource sharing savings are meeting DOD's projections and thus helping control TRICARE costs; (2) what problems DOD might be encountering in pursuing resource sharing; and (3) actions and alternatives pursued by DOD to overcome those problems. GAO also considered the implications of resource sharing within the broader context of TRICARE's overall cost-effectiveness. GAO noted that: (1) DOD and the contractors have made agreements likely to save about 5 percent of DOD's overall resource sharing savings goals; (2) new agreements are being considered, but neither DOD nor the contractors are confident that pending agreements will be reached or that further cost savings can be attained; (3) because resulting TRICARE contract costs may be greater than anticipated, both parties may experience related financial losses; (4) problems impeding progress on resource sharing agreements and the related savings have included lack of clear program policies and priorities, uncertainty about cost effects on military hospitals, lack of financial rewards for the hospitals entering into such agreements, and changes in military hospital capacities after contractors developed bids; (5) in response, DOD has revised policies, improved training and analytical tools, and taken other steps to promote resource sharing under the contracts, but to date, these efforts have not been sufficient to bring needed results; (6) for the last two contracts, DOD is applying a revised financing approach that includes resource sharing but at a reduced level; (7) the new approach allocates more funds to the military hospitals and less to the contractors, enabling the hospitals to directly acquire and use outside resources rather than use resource sharing with the contractor; (8) how the military hospitals, other sources, and
contractors interact under the new approach is still being defined and has not been tested; resource sharing problems will not be automatically eliminated and may be exacerbated when used in combination with revised financing; (9) for the future, DOD plans even broader changes intended to simplify military hospital budgeting and support contract operations; (10) while the military hospitals and contractors could still use resource sharing, it no longer would be the basis for projecting major savings and lowering bids at the contract's outset; (11) DOD officials acknowledged their resource sharing savings problems but told GAO that lower than expected contract award prices have led to over $2 billion in unexpected, offsetting savings; (12) while TRICARE's overall cost-effectiveness was beyond GAO's review scope, there are reasons to question the currency and analytical completeness of DOD's preliminary savings claims; and (13) GAO supports DOD's current plans to undertake a detailed analysis, based on more up-to-date cost data and estimates, of TRICARE's overall cost-effectiveness.
The friendly fire casualties and equipment losses suffered during Operation Desert Storm reilluminated an old problem, fratricide, and underscored the need for more effective means of identifying friendly and hostile forces, and neutrals and noncombatants on the battlefield (i.e., combat identification). Studies and incidents subsequent to Operation Desert Storm, such as the friendly forces shootdown of two Blackhawk helicopters over Iraq during Operation Provide Comfort, have reiterated the need for improved combat identification. Combat identification has been defined as “the means to positively identify friendly, hostile and neutral platforms in order to reduce fratricide due to mis-identification and to maximize the effective use of weapon systems.” The services are pursuing a number of solutions to provide combat identification. They believe the solution will involve a “system of systems,” one component of which will be cooperative IFF Q&A systems. In March 1992, the Joint Requirements Oversight Council approved a mission need statement for combat identification. That mission need statement requires positive, timely, and reliable identification of hostiles, friendlies, and neutrals; classification of foes by platform, class/type, and nationality; interoperability between services; and interoperability with minimum civil air traffic control system requirements. It states that the primary constraint is affordability. A cooperative IFF Q&A identification is accomplished when a shooter/observer queries a target and the target answers with a reply identifying itself as a friend. 
A Defense Acquisition Board review conducted on August 14, 1992, and subsequent approval from the Under Secretary of Defense for Acquisition, gave (1) the Army the lead in battlefield combat identification (BCI) efforts, including cooperative systems for ground-to-ground and air-to-ground identification, and (2) the Navy the lead for air-to-air and ground-to-air cooperative identification systems. The Navy was charged with coordinating these efforts. Figures 1 and 2 depict the current breakdown of responsibility for cooperative IFF Q&A systems development.

To enhance force warfighting capability and minimize fratricide in the future, the Army has been pursuing a BCI program to improve situational awareness and provide immediate, positive target identification. In 1991, the Army started implementing a five-phased program to develop and field battlefield identification techniques through fiscal year 2000. The Army is currently in the third phase of the program, the near-term phase, whose objective is to integrate a battlefield combat identification system (BCIS) into selected ground vehicles and helicopters. A millimeter wave cooperative IFF Q&A system was selected for BCIS as the near-term technology.

The near-term cooperative IFF Q&A system is currently in engineering and manufacturing development (EMD). The Army is acquiring 45 EMD models and is planning to acquire another 115 in fiscal year 1996 to be demonstrated during the fiscal year 1997 digitized brigade experiment known as Task Force XXI. The Army currently estimates the cost of providing the near-term BCIS to 6,400 selected platforms of Force Package I at between $250 million and $300 million. The ultimate cost of BCIS would be substantially higher if all Army divisions were to be equipped. The Army is just beginning the mid- and long-term phases of its BCI efforts with the development of a COEA to identify affordable and promising alternatives.
The objectives of the mid- and long-term phases are to integrate situational awareness and target identification and to provide automated correlation and display of situational awareness and target identification information. The mid- and long-term cooperative IFF Q&A system may differ from the near-term technology.

As the lead for cooperative aircraft identification development, the Navy has been working on its Cooperative Aircraft Identification (CAI) effort to address deficiencies in the currently used aircraft identification system, Mark XII. The CAI effort is to provide a system to replace or upgrade the Mark XII system for use in air-to-air and ground-to-air identification. Navy officials have estimated that a Mark XII follow-on system could cost more than $3.5 billion. In addition to providing reliable, secure identification of friends, any Mark XII follow-on system will have to ensure future civil aviation air traffic control compatibility. Mode S is a civil aviation air traffic control capability started by the United States and now planned for international use. Eurocontrol, the European aviation authority, has mandated Mode S usage by January 1, 1999. Mark XII transponders, however, do not currently incorporate Mode S. Without this capability, U.S. military aircraft may face delays in the use of civil airspace or may even be excluded from certain regions during peacetime.

In June 1994, the Naval Research Laboratory completed and published a draft COEA for the CAI effort. That COEA was not approved because some Navy officials believed it did not address subsequently proposed alternatives that warranted consideration. Additionally, the Navy provided only about half of the funding required for fiscal years 1996 and 1997 to accomplish a 1997 scheduled decision on whether to move the CAI effort forward to the next phase of the acquisition process, demonstration and validation.
The services’ current management plan and structure for cooperative IFF Q&A systems, which reflect the division of responsibility between the Army and the Navy, lack needed cohesiveness. While the Army and the Navy have worked to coordinate their efforts, the separation of responsibility between the two services may result in the selection of suboptimal solutions, unnecessary program delays, and the acquisition of systems that may not be interoperable across the services. The services defined the management structure for their efforts to combat fratricide in a December 1992 memorandum of agreement on combat identification. In its capacity as lead for the services’ cooperative IFF Q&A systems development, the Navy led the development of the management section of DOD’s September 1993 Joint Master Plan for Cooperative Aircraft and Battlefield Combat Identification. The plan provides a management approach that is intended to coordinate cooperative identification requirements development and management mechanisms to ensure development, procurement, and integration of interoperable surface and air identification systems. As shown in figure 3, the management structure identified in the plan uses the organizations defined in the services’ memorandum of agreement on combat identification. The principal coordinating bodies identified in the plan are the General Officers’ Steering Committee for Combat Identification, the Joint Combat Identification Office, the Service Acquisition Executive Council, and the Senior Advisory Group. The General Officers’ Steering Committee for Combat Identification provides senior level review and coordination of all Army, Navy, Air Force, and Marine Corps combat identification requirements, development and procurement efforts, product improvements, and related technologies. 
The Joint Combat Identification Office provides action officer level coordination and functions as the primary information center for all combat identification issues, programs, requirements, and technologies. The Senior Acquisition Executive Council was established to provide the highest level of service coordination, while the Senior Advisory Group is to provide program manager level coordination.

The separation of responsibility for the development of cooperative IFF Q&A systems between the Army and the Navy is not conducive to looking for and finding common technological solutions. For example, a DOD official informed us that a North Atlantic Treaty Organization ally has demonstrated a laser interrogation and D-band response system for ground-to-ground identification. Since the Mark XII system operates in the D-band, the adoption of a D-band ground-to-ground system, if feasible, could be a cost-effective solution providing interoperability among the services. The Army, however, has not considered that D-band system or one like it for ground-to-ground identification.

Even absent the identification of a common technology, the current management plan and structure have allowed the services to pursue systems without fully considering whether and how those systems can be made interoperable cost-effectively. For example, the Navy’s COEA could not fully consider the equipment that would be needed to unify the Army’s mid- and long-term approach with the Navy’s CAI system because that approach has not been defined. Since a Mark XII follow-on wave form has not been identified for CAI, the Army will have similar difficulties. While the Army and the Navy have worked to coordinate their efforts, the current management structure and plan perpetuate the stovepipe development of cooperative IFF Q&A systems. In commenting on an earlier draft of the management plan, the Under Secretary of Defense for Acquisition stated “. . .
I am concerned that the ‘stovepipe’ management scheme shown . . . will not enable possible equipment interoperability and commonality to be realized between aircraft and battlefield systems.” A Navy official informed us that the plan’s developers added the Senior Acquisition Executive Council and Joint Senior Advisory Group to the plan’s organizational chart to address this criticism. However, these organizations were already defined in the draft plan because they were included in the services’ memorandum of agreement on combat identification. Furthermore, the delays in developing a new air-to-air and ground-to-air cooperative IFF Q&A system combined with the recent Army start of its mid- and long-term efforts provide an opportunity to address the Under Secretary’s concerns through joint management of the Army’s and Navy’s efforts. The current management structure also risks unnecessary delays in the development and fielding of a set of systems planned to help prevent future fratricide by allowing the services to prioritize their efforts differently. For example, while DOD has made development of combat identification systems a high priority, the Navy, through its funding process, did not make CAI a high priority. Given the high priority DOD places on combat identification, the Office of the Assistant Secretary of Defense for Command, Control, Communications, and Intelligence (ASD/C3I) proposed an Office of the Secretary of Defense (OSD) funding line for the CAI effort to (1) pursue the planned course of development or (2) alternatively use an advanced concepts and technology demonstration (ACTD) to accelerate it. OSD decided not to proceed with the Program Objective Memorandum proposed strategy. Instead, OSD adopted the position that the existing Mark XII system satisfies the services’ current air-to-air and ground-to-air cooperative IFF Q&A system requirements. 
Its current strategy is an evolutionary upgrade of Mark XII equipment to provide improved reliability and maintainability and greater upgradeability while, over the next couple of years, defining, under a continued Navy-led effort and in coordination with U.S. allies, what a Mark XII follow-on wave form might look like. The upgradeability of the new Mark XII system would allow for the addition of Mode S capability and implementation of the Mark XII follow-on wave form, should the services later decide that the current wave form no longer satisfies their requirements.

If the Program Objective Memorandum proposal had been adopted, it would have alleviated any risk of delay in Mark XII efforts due to low prioritization. However, it would not have corrected the stovepipe nature of the management structure and plan. Even given OSD funding of the Mark XII effort, the continued division of development under the current management plan and structure would allow the Army’s and the Navy’s efforts to continue unsynchronized. The Navy’s initial time lead in development resulted in the Navy and Army programs being unsynchronized; the current delays in defining a Mark XII follow-on wave form, however, provide an opportunity for a jointly managed effort. The original Navy time lead led to (1) the Navy’s uncertainty about likely mid- and long-term ground identification systems and (2) the Navy’s inability to consider in its COEA the equipment necessary to obtain interoperability with those ground identification systems. Schedule changes in separate efforts could again make it difficult to obtain full consideration of interoperability issues. Separate service efforts risk delays in the development and fielding of cooperative IFF Q&A systems due to the time and resources needed to obtain “after-the-fact” interoperability, if required.
Additionally, a dual management structure means dual funding of dual efforts when a single management structure and funding source could provide efficiencies resulting in not only monetary savings but also faster development and earlier fieldings. “. . . has been completed, and alternatives for improving the system and applying it to armor identification were to be considered at a CAI Milestone I review originally planned for the fall of 1993. However, the Services do not place a high priority on upgrading the Mark XII or on defining and demonstrating an integrated aircraft/armor identification system, and they have yet to schedule the review.” We believe ASD/C3I’s proposed action was a step in the right direction and that a single funding line for both the Navy’s CAI effort and the Army’s BCI program would help ensure coordinated aircraft and ground cooperative IFF Q&A systems development. While ASD/C3I’s proposed action would have alleviated some of the unnecessary risk currently associated with the services’ management structure and plan, it would not have corrected the stovepipe nature of that structure and plan. We believe, therefore, that, in addition to having a single funding line, those efforts should be managed under a structure similar to that recommended by DOD’s Acquisition Reform Process Action Team in its recent report on reengineering the acquisition process. “the creation of a Joint Acquisition Executive permits the DOD to directly address the long-standing problems encountered by joint programs. Issues of agreement on requirements, dictated marriages and shifting priorities are avoided by having the programs placed under a purple-suited decision maker who has fiscal resource management authority. No single Component will be able to optimize system performance at the expense of other users . . . 
.” The team stated that the advantages of such a management structure included reduction of program redundancy, promotion of commonality across the services, and stabilization of funding by removing funds from the vagaries of each service’s priorities. We believe that adoption of the management structure outlined by the team could help ensure the development of cost-effective, integrated combat identification solution(s) while maintaining appropriate OSD oversight.

The Army’s and the Navy’s development of separate COEAs for their respective BCI and CAI efforts risks the selection and development of systems that may not represent the most cost and operationally effective solution(s). The division of responsibility for cooperative IFF Q&A systems development between the Army and the Navy raises interoperability issues. Ground and air platforms that represent threats to each other and that are provided cooperative IFF Q&A systems based on different technologies will either have to field dual systems or systems that have been made interoperable, or they will remain at risk of fratricide from each other. COEAs that do not fully consider the desirability of interoperability, the way to obtain it, and its cost, risk suboptimal solutions. In providing guidance on COEAs, DOD Instruction 5000.2 notes that “individual systems generally cannot be evaluated in isolation.” It goes on to state that “. . . the analysis must consider all relevant systems and the synergisms, such as interoperability, and potential difficulties they collectively represent on the battlefield.” The development of separate COEAs for IFF Q&A systems has not allowed and may not allow proper consideration of the interoperability issue and thus risks the selection of suboptimal solution(s).
A DOD official expressed concern about this risk when commenting on the plan to perform separate COEAs for the BCI and CAI efforts during the first meeting of the Combat Identification COEA Oversight Group, which OSD established to periodically review the two COEA efforts. Additionally, Naval Research Laboratory officials who conducted the CAI COEA stated that because the Army’s selected near-term technology differed so dramatically from their expectations, the BCIS initially envisioned in their CAI COEA was made irrelevant by the Army’s selection. Without an approved Mark XII follow-on wave form identified, the Army will face the same difficulty addressing interoperability in its recently started BCI COEA effort. The performance of a joint COEA now, giving due consideration to the interoperability issue, will help ensure the selection and development of the most cost and operationally effective solution(s).

The recent delays in the Navy’s efforts, combined with the Army’s recent start of its mid- and long-term BCI COEA, provide an opportunity to develop a joint COEA for combat identification. A DOD official stated that an agreement with the allies on a Mark XII follow-on wave form should be accomplished within 2 years. The current Army schedule calls for the mid- and long-term COEA to be completed in fiscal year 1997, which provides time for a joint COEA effort to consider the new wave form being discussed with U.S. allies, expand the Navy’s COEA to consider subsequently proposed solutions, and merge the work with the Army’s COEA efforts. A joint COEA would ensure that DOD and the services have a joint analysis that will help select systems representing the most cost and operationally effective integrated solution.

The Army continues to invest in its near-term millimeter wave cooperative system when there is no discernible indication whether this system can or should be integrated into the mid- and long-term solution(s).
Without a completed COEA for BCI, there is no way to tell whether the near-term system should be or will be a part of the mid- and long-term solution(s). Furthermore, the Army may never choose to make large-scale fieldings of the near-term system due to affordability concerns.

In our prior report on combat identification, we noted that the Army planned to begin procuring the near-term millimeter wave cooperative identification system without an analysis of whether the near-term system could be integrated into the mid- and long-term solution(s). At that time, we recommended that the Army not begin procurement of the systems until it had determined whether the near-term systems could be integrated into the mid- and long-term solution(s). DOD agreed that the integration of the near-term BCIS into the long-term approach is an important consideration in deciding on the production of the near-term system. Nevertheless, the Army now plans to acquire more near-term systems than are necessary to reach a production decision, without the analysis we suggested.

Our current evaluation showed that the Army plans to use $5 million in fiscal year 1995 funds and has requested about $18.4 million of fiscal year 1996 funds to acquire 115 additional near-term systems beyond the 45 in its current EMD contract. The Army intends to use these units, in combination with 25 refurbished EMD units, in the testing of the digitized battlefield concept. However, the Army did not develop a specific analysis to support the need to demonstrate 140 BCISs during the digitized brigade experiment. Rather, Army officials stated that the goal of the near-term BCIS demonstration was to sell individual soldiers on the system and provide higher level Army officials with an understanding of its operational effectiveness. They noted that the more soldiers supporting the acquisition of the system, the better. This formed the basis for their “the more, the better” rationale.
Given funding and time constraints, 115 systems are all “the more” that can be acquired. The Army has already awarded a contract option to obtain 45 of the additional 115 systems and expects to award a second option, at a cost of about $15.2 million, for the remaining systems in July 1995. When questioned about the impact of limiting the demonstration to 70 systems (i.e., those already on hand or on contract), Army program officials stated that they could accomplish their goals for the demonstration with 70 systems.

There are concerns within DOD and the Army over the affordability and cost-effectiveness of the near-term system, and it may never be fielded for these reasons. The selection of a cooperative technology to pursue in the mid- and long-term will be determined in part by the Army’s mid- and long-term COEA, an effort that has just started. Until a mid- and long-term cooperative technology is selected, the continued acquisition of the near-term system risks wasting millions of dollars on a system that may not be able to be integrated into the mid- and long-term solution(s). Furthermore, acquiring more systems to demonstrate during the Task Force XXI exercise than are necessary to accomplish the goals of that demonstration also risks wasting millions of dollars.

We recommend that the Secretary of Defense (1) create a single OSD funding line for the Army’s BCI and the Navy’s CAI efforts, (2) direct the Secretaries of the Army and the Navy to develop and institute a cohesive management structure and plan in line with the Process Action Team’s recommendation, and (3) direct the Secretaries of the Army and the Navy to develop a joint COEA for their BCI and CAI efforts, giving due consideration to the problem and costs of obtaining systems’ interoperability.
We also recommend that the Secretary of Defense direct the Secretary of the Army to (1) use the 70 near-term systems on hand or currently under contract for the Task Force XXI digitized brigade experiment and (2) not acquire more near-term systems than necessary until the Army determines that the near-term technology is affordable and will be fielded and whether, if determined desirable, it can be integrated into the mid- and long-term combat identification and aircraft solution(s).

In commenting on a draft of this report, DOD agreed that the requirements for aircraft and battlefield identification should not be addressed in isolation. It stated that this was one of the reasons it formed a Combat Identification Task Force in October 1994. DOD stated that the task force was created to consider the overall architectural framework for combat identification and, within that architecture, the techniques and programmatic plans for battlefield identification and for the Mark XII identification system. DOD also stated that management actions are being taken that reflect the results of the task force and that address the concerns described in our report. Specifically, DOD stated that a joint COEA on battlefield identification is being organized, and technology demonstrations that will be an important element of the evaluation will be guided and partially funded by OSD.

DOD did not agree that the Army’s plan to acquire 140 near-term systems for the Task Force XXI digitized brigade experiment risked wasting millions of dollars. In discussing Army officials’ comments made to us that they could accomplish their goals for the experiment with 70 near-term systems, DOD stated that the adequacy of 70 systems was judged in the context of a contingency plan, should 140 systems not be available. It also stated that the acquisition of more units would result not only in more operational experience and more data but also in a greater capability left with the forces.
DOD partially agreed with our recommendation that the Army be directed not to acquire more near-term systems prior to a determination that the near-term system is affordable and will be fielded and whether it can be integrated into the mid- and long-term solution(s). DOD noted that although the integration of the near-term system into the long-term solution is an important consideration, it may be prudent to produce the near-term system even if it is not part of the long-term architecture. DOD expressed concern that, without a near-term system, U.S. forces may face a period of 10 years or more with no substantial improvement in their ability to identify combat vehicles.

While DOD, in forming its Combat Identification Task Force, may have been motivated by many of the same concerns expressed in our report, it does not appear that the task force’s final product will address the issues identified in our report. Specifically, based on briefings we have received on the task force’s efforts, the task force’s final product will not (1) address the management changes needed to provide cohesiveness in the services’ cooperative identification development efforts; (2) dictate a joint, single COEA for those efforts; and (3) address the Army’s plan to acquire more near-term systems than are required for the Army to reach a production decision. Furthermore, while the task force has developed an overall architectural framework for combat identification, it does not appear to provide the management structure and plans needed to ensure a cohesive effort to attain the goals of that architecture. The architecture provides direction to the services. However, in the past, DOD has provided direction to the services that was subsequently ignored. For example, as we noted earlier, while DOD has placed a high priority on combat identification efforts, the Navy did not place a high priority on its CAI effort and underfunded it.
Regarding the Army’s plan to acquire more near-term systems than are necessary to accomplish the Army’s goals for the Task Force XXI experiment, DOD’s comment indicates that 70 systems are adequate for conducting the experiment if 140 systems are not available. Since the Army has not yet made a procurement decision for the near-term BCIS, the expenditure of $15.2 million to acquire 70 systems beyond the 70 necessary to accomplish the goals for the demonstration risks wasting millions of dollars on a system that may never be fielded. If the Army can accomplish its goals for the demonstration with 70 systems, as Army officials have repeatedly informed us, then only 70 systems are needed. Furthermore, the Army did not produce and does not have an analysis to support a requirement to demonstrate 140 BCIS units. There is no debating that more units will provide more operational experience and data. This, however, should not be the basis for acquiring more systems than are needed to accomplish the goals of the demonstration.

We do not dispute DOD’s comment that it might be prudent to produce the near-term system even if it is not a part of the long-term architecture, and our recommendation would not prevent the Army from fielding any system for 10 years. We simply believe it would be prudent for the Army to make its production decision for the near-term system taking into consideration its decision on its mid- and long-term solution(s). Such a determination should be possible once the BCI COEA is completed. Since that COEA is currently scheduled to be completed in fiscal year 1997 and the BCIS production decision is currently scheduled to occur in late fiscal year 1997 or early fiscal year 1998, our recommendation would not delay the fielding of the near-term system.

DOD’s comments are reprinted in their entirety in appendix I, along with our evaluation.
During this review, we interviewed officials and reviewed documents in Washington, D.C., at the offices of the ASD/C3I; the DOD Joint Combat Identification Office; the Assistant Secretary of the Navy for Research, Development, and Acquisition; the U.S. Navy, Air Traffic Control and Landing Systems Office; the U.S. Navy, Office of the Director of Navy Space Systems Division; the Naval Research Laboratory; and the Defense Intelligence Agency. We also reviewed documentation issued by the offices of the Under Secretary of Defense for Acquisition and Technology, the Joint Requirements Oversight Council, the Congressional Research Service, and the Office of Technology Assessment. We visited, and received and analyzed information from, the U.S. Army Communications and Electronics Command, Fort Monmouth, New Jersey; the U.S. Army Training and Doctrine Command, Fort Monroe, Virginia; the U.S. Army Armor Center and School, Fort Knox, Kentucky; the U.S. Army Aviation Center, Fort Rucker, Alabama; and the Headquarters of the U.S. Marine Corps’ Combat Development Command, Quantico, Virginia. In addition, we visited and received briefings on the Air Force’s Combat Identification Integration Management Team from Air Force personnel at the Directorate of Special Projects, Electronic Systems Center, Hanscom Air Force Base, Massachusetts. We also visited and received briefings on the OSD-sponsored Joint Air Defense Operation/Joint Engagement Zone exercises from service personnel at Eglin Air Force Base, Florida.

We conducted this review from August 1994 to July 1995 in accordance with generally accepted government auditing standards.

We are sending copies of this report to other appropriate congressional committees; the Director, Office of Management and Budget; and the Secretaries of Defense, the Army, and the Navy. We will also make copies available to other interested parties upon request.
Please contact me at (202) 512-4841 if you or your staff have any questions concerning this report. Major contributors to this report were William L. Wright, Bruce H. Thomas, and Peris Cassorla.

The following are GAO’s comments on the Department of Defense’s (DOD) letter dated July 6, 1995.

1. We appreciate that DOD shares our concern that the requirements for aircraft and battlefield identification should not be addressed in isolation. While this was one of the reasons the Combat Identification Task Force was formed, and we believe the task force’s efforts were a step in the right direction, we do not believe the task force adequately addressed our concerns regarding the cohesiveness of the structures and plans created to manage the services’ aircraft and battlefield cooperative identification efforts.

2. The management actions discussed represent a continuation of the stovepipe management of the ground and air identification efforts discussed in our report. Developing an Army-led cost and operational effectiveness analysis (COEA) for battlefield identification separately from the similar Navy-led analyses planned to help define what a new air identification wave form might look like perpetuates the stovepipe development scheme identified in our report.

3. As explained in our report, we focused our evaluation on the services’ cooperative identification of friend or foe (IFF) question and answer (Q&A) system efforts because the services are approaching major decision points in the acquisition process for those systems. To address DOD’s concern, we have added information to the body of our report indicating that the services’ cooperative IFF Q&A system development efforts are a part of a much broader array of efforts that should help minimize friendly fire incidents.

4. At the time our draft report was written, we were aware of the task force’s efforts.
We determined that while those efforts may have been motivated by many of the same concerns expressed in our report, it did not appear that the task force was going to address the issues related to our findings. Based on more recent briefings on the task force’s outcome, it still does not appear that those issues were addressed. Specifically, based on the information we have received in briefings on the task force’s efforts, the task force’s final product will not (1) address the management changes needed to provide cohesiveness in the services’ cooperative identification development efforts, (2) dictate a joint, single COEA for those efforts, or (3) address the Army’s plan to acquire more near-term systems than are needed to reach a production decision. 5. While the task force has developed an overall architectural framework for combat identification, it does not appear to provide the management structure needed to assure a cohesive effort to attain the architecture’s goals. The architecture provides direction to the services. However, in the past, DOD has provided direction that was subsequently ignored. For example, as noted in our report, while DOD has placed a high priority on combat identification efforts, the Navy did not place a high priority on the Cooperative Aircraft Identification (CAI) effort and underfunded it. 6. The management and funding arrangements being established do not adequately address our concerns. The joint advanced concepts technology demonstration and advanced technology demonstrations planned are to demonstrate candidate battlefield identification systems, that is, ground-to-ground and air-to-ground solutions. The planned demonstrations are to focus on battlefield identification solutions, not on battlefield and aircraft identification solutions and their interoperability.
Furthermore, the planned demonstrations will not address the underlying management structure’s division of responsibility between the Navy and the Army and the risks associated with that division. The continued use of the current management plan and structure with its division of responsibility between the Army and the Navy still risks the selection of suboptimal solutions, unnecessary program delays, and the acquisition of systems that may not be interoperable across the services. 7. The Office of the Secretary of Defense’s (OSD) role as the top-level manager of these demonstrations and the funding of these demonstrations under an OSD line do not adequately address our concerns regarding the cohesiveness of the Army’s and the Navy’s efforts and the need for truly joint management. Under the current DOD plan, that is, the joint demonstrations, the Army and the Navy will continue to manage and fund separate developmental efforts for their respective areas of responsibility. Continued use of separate funding lines for those efforts will continue to pose interoperability risks and risks to the timely accomplishment of the most cost and operationally effective solutions. 8. While the evolutionary nature of the upgrade process and the reliance on commercial technology may or may not make centralized funding desirable, DOD’s adopted strategy includes working with U.S. allies to define what a new wave form might look like. The services’ new wave form definition efforts will be a joint effort under a Navy lead, just as the Navy’s original CAI effort was. We maintain our position that funding the services’ new aircraft wave form definition and ground identification efforts under a single funding line would help ensure coordinated aircraft and ground cooperative IFF Q&A systems’ development. 9.
As we indicated in our report, while the Army and the Navy have worked to coordinate their efforts, the current management structure and plan perpetuate the stovepipe development of cooperative IFF Q&A systems. As noted in our report, in commenting on an earlier draft of the management plan for the cooperative IFF Q&A development efforts, the Under Secretary of Defense for Acquisition stated “. . . I am concerned that the ’stovepipe’ management scheme shown . . . will not enable possible equipment interoperability and commonality to be realized between aircraft and battlefield systems.” The General Officers’ Steering Committee on Combat Identification, the Joint Requirements Oversight Council, and the Joint Combat Identification Office were all defined in the draft and final management plans. Despite these coordinating bodies, we agree with the Under Secretary’s assessment and believe the current management structure continues to perpetuate that stovepipe management scheme. 10. We have added information on the role of the General Officers’ Steering Committee on Combat Identification to our report. 11. A prioritized list of identification initiatives with service funding commitments did not prevent the Navy from placing a lower priority on its CAI effort than DOD placed. As we note in our report, while DOD has made development of combat identification systems a high priority, the Navy, through its funding process, did not make the CAI effort a high priority. Again, a single OSD funding line for both the Navy’s new wave form and the Army’s battlefield combat identification system (BCIS) efforts would help ensure coordinated aircraft and ground cooperative IFF Q&A systems development efforts and appropriate funding given DOD’s prioritization of those efforts. 12.
While DOD’s adopted Mark XII upgrade strategy has superseded the Navy-led COEA, the continued research and development of air and ground systems without performing a joint COEA still risks the selection and development of systems that may not represent the most cost and operationally effective solutions. DOD’s adopted strategy for upgrading the Mark XII includes working with U.S. allies to define what a follow-on Mark XII wave form might look like. In providing oral comments on a draft of this report, agency officials indicated that the new wave form air identification effort would include cost and operational effectiveness type analyses. Those analyses should be done as a part of a joint aircraft and ground identification COEA to ensure that the most cost and operationally effective ground and air solutions are selected, giving due consideration to the interoperability issue. We recognize that commonality between air and ground identification systems may or may not be attainable or desirable from a cost and operational effectiveness standpoint. In fact, a joint COEA may support the use of different technologies for air and ground systems. The performance of a joint COEA, however, will help ensure consideration not only of technological commonality between air and ground solutions but also of the cost and operational effectiveness of solutions to provide interoperability between differing air and ground solutions. Because the Navy-led joint service new wave form air identification effort is to develop cost and operational effectiveness type analyses and the Army-led joint service ground identification effort is developing a formal COEA, it appears that minimal adjustment would be required to combine the two efforts to obtain a joint COEA ensuring due consideration of the interoperability issue.
In addition, the final product of a joint COEA would present a service-wide, unified vision of the air and ground solution(s) to be pursued and the means, if determined attainable and desirable, by which air and ground interoperability will be obtained. 13. DOD’s comment indicates that 70 systems are adequate for conducting the demonstration if 140 systems are not available. Since the Army has yet to determine whether it will procure the near-term BCIS, the expenditure of $15.2 million to acquire 70 systems beyond the 70 systems necessary to accomplish the goals for the demonstration risks millions of dollars on a system that may never be fielded. 14. At issue here is not whether the demonstration of more systems will have value, but rather the value of what is gained against the cost and the risk that the Army may never procure and field the BCIS. The Army did not produce and does not have an analysis to support a requirement to demonstrate 140 BCIS units. If the Army can accomplish its goals for the demonstration with 70 systems, as Army officials have repeatedly informed us, then only 70 systems are needed. There is no debating that more units will provide more operational experience and data. This, however, should not be the basis for acquiring more systems than are needed to accomplish the goals of the demonstration. “Believe it is imperative that during Force XXI we not only evaluate how well BCIS works but the total impact BCIS has on the way we operate. Platforms are currently prioritized to give us the ability to look at this total impact even if we don’t get the entire 140 systems we are currently planning for.” 19. We have clarified our recommendation in view of DOD’s comments. We believe that the acquisition of near-term systems should be limited to the minimum quantity required to complete any testing needed to make a production decision. 
Furthermore, the Army should not be allowed to acquire more near-term systems than that limit until a COEA based determination has been made that the near-term system, if deemed desirable, can be integrated, that is, made interoperable, with the mid- and long-term combat identification and aircraft solutions. 20. We recognize that commonality between air and ground identification systems may or may not be attainable or desirable from a cost and operational effectiveness standpoint, just as interoperability of differing air and ground systems may not be determined attainable or desirable. Nothing in our report dictates commonality. It does, however, argue that a joint COEA should be completed to assess this issue before moving forward. 21. As we pointed out in our response to DOD in our prior report, our recommendation will not prevent the Army’s acquisition of the near-term system and will not require the Army to wait until long-term systems are fielded. As stated in our prior report, we believe it would be prudent for the Army to make its production decision for the near-term system taking into consideration its decision for its mid- and long-term solution(s). Such a determination should be possible once the BCI COEA is completed. Since that COEA is currently scheduled to be completed in fiscal year 1997 and the BCIS production decision is not scheduled to occur until late fiscal year 1997 or early fiscal year 1998, our recommendation would not delay the fielding of the near-term system. Our current recommendation extends the recommendation in our prior report to include a determination on interoperability with the new air identification wave form being defined.
GAO reviewed the Army's and Navy's development of combat identification systems to reduce the occurrence of friendly fire incidents, focusing on the services' management plans and structures for the systems' development and integration. GAO found that: (1) the Army and Navy do not have a cohesive management plan and structure for the development of their cooperative combat identification systems; (2) the lack of cohesiveness reflects the division in the services' responsibilities for developing systems for different combat modes; (3) the two services have based their system development plans on different technologies and have not fully addressed how and at what cost these systems will be integrated; (4) the lack of a cohesive management structure could lead to development and deployment delays by allowing the services to prioritize their efforts differently; (5) the Navy has developed a cost and operational effectiveness analysis (COEA) for its system, but the Army is just beginning to develop its COEA; (6) the development of separate COEAs risks wasting resources because of duplication and delays in system development and deployment and does not address the need for interoperability; (7) the DOD proposal for a single funding line for system development would help ensure better cooperative systems development; and (8) the Army plans to procure more near-term identification systems than it needs for its planned field demonstration, without knowing if the systems are affordable and can be integrated into long-term solutions.
CSP, called for under section 2001 of the 2002 farm bill, is a voluntary conservation program that supports ongoing stewardship of private and tribal agricultural lands by providing payments to producers for maintaining and enhancing natural resources. According to USDA, CSP identifies and rewards those farmers and ranchers who are meeting the highest standards of conservation and environmental management on their operations, while creating powerful incentives for other producers to meet those same standards of conservation performance. In turn, the conservation benefits gained will help these farms and ranches to be more economically and environmentally sustainable while increasing natural resources benefits to society at large. CSP provides financial and technical assistance to agricultural producers who advance the conservation and improvement of soil, water, air, energy, plant and animal life, and other conservation purposes on working lands. Such lands include cropland, grassland, prairie land, improved pasture, and rangeland, as well as forested land and other noncropped areas that are an incidental part of the agricultural operation. The program is available in all 50 states, the District of Columbia, the Commonwealth of Puerto Rico, Guam, the Virgin Islands of the United States, American Samoa, the Commonwealth of the Northern Mariana Islands, and the Trust Territory of the Pacific Islands. Under the farm bill, the program is open to all agricultural producers, regardless of size of operation, crops produced, or geographic location. CSP is administered by NRCS. In implementing CSP, NRCS emphasizes soil and water quality as nationally important resource concerns because of the potential for significant environmental benefits from conservation treatment that improves their condition.
Thus, although the farm bill required producers to treat at least one resource of concern under CSP, NRCS program regulations require producers to treat at least two resources—soil and water—to be eligible for the program. Producers can use CSP payments to fund a variety of soil and water quality conservation practices. Soil quality practices include crop rotation, planting cover crops, tillage practices, prescribed grazing, and providing adequate wind barriers. Water quality practices include conservation tillage, strip cropping, vegetative filter strips, terraces, grassed waterways, managed access to water courses, nutrient and pesticide management, prescribed grazing, and irrigation water management. In addition, under the farm bill and NRCS regulations, to be eligible for CSP, both the producer and the producer’s operation must first meet several basic eligibility criteria, including (1) the land must be private agricultural land, forested land that is an incidental part of an agricultural operation, or tribal land with the majority of the land located within a selected priority watershed; (2) the applicant must be in compliance with highly erodible land and wetlands provisions of the Food Security Act of 1985 and generally must have control of the land for the life of the contract; and (3) the applicant must share in the risk of producing any crop or livestock and be entitled to a share in the crop or livestock available for marketing from the operation. The farm bill establishes three tiers or levels of participation. Each tier has a specified contract period and an annual payment limit and calls for a plan addressing resources of concern (as further delineated in NRCS regulations), as indicated in table 1. In addition to these tiers, NRCS’s program regulations and sign-up announcements establish enrollment categories and subcategories. 
Under NRCS regulations, enrollment categories may be defined by criteria related to resource concerns and levels of historic conservation treatment, including a producer’s willingness to achieve additional environmental performance or conduct conservation enhancement activities. For the fiscal year 2005 sign-up, five enrollment categories (A through E) were used for cropland, pasture, and rangeland. For example, for cropland, the enrollment categories were defined by various levels of soil conditioning index scores and the number of stewardship practices and activities in place on the farm for at least 2 years. All applications that met the sign-up criteria were placed in an enrollment category, regardless of available funding. NRCS then funded all eligible producers enrolled in category A before funding producers in category B and subsequent categories until available funding was exhausted. If an enrollment category could not be fully funded, then the subcategories were used to determine application funding order within a category. For the fiscal year 2005 sign-up, 12 subcategories were used. These subcategories included factors such as whether (1) the applicant is a limited resource producer or a participant in an ongoing environmental monitoring program; (2) the agricultural operation is in a designated water conservation area or aquifer zone, drought area, or nonattainment area for air quality; or (3) the agricultural operation is in a designated area for threatened and endangered species habitat creation and protection. The producer’s CSP contract identifies the type and amount of program payments that a producer will receive. NRCS has established criteria for calculating each of the four components of the program payment. For example, the stewardship component is based on the number of acres enrolled in CSP, the stewardship payment rate established for the watershed, and reduction factors based on the tier of enrollment. 
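The arithmetic of the stewardship component just described can be sketched in a few lines of code. The payment rate, acreage, and tier reduction factors below are hypothetical values chosen for illustration only, not actual NRCS rates or factors.

```python
# Hypothetical sketch of the CSP stewardship payment component:
# acres enrolled x stewardship payment rate for the watershed x a
# reduction factor tied to the tier of enrollment. All numbers below
# are illustrative assumptions, not actual NRCS values.

# Assumed reduction factors by tier of enrollment (Tiers I-III).
TIER_REDUCTION_FACTOR = {1: 0.25, 2: 0.50, 3: 0.75}

def stewardship_payment(acres: float, watershed_rate_per_acre: float,
                        tier: int) -> float:
    """Return the stewardship component of an annual CSP payment."""
    return acres * watershed_rate_per_acre * TIER_REDUCTION_FACTOR[tier]

# Example: 500 enrolled acres at an assumed $20-per-acre watershed
# rate under a Tier II contract.
print(stewardship_payment(500, 20.0, 2))  # 5000.0
```

In an actual contract, this amount would be only one of the four payment components and would be subject to the annual payment limit for the producer's tier shown in table 1.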
At a minimum, all CSP contract payments include amounts for the stewardship and existing practice components. To be eligible to participate in CSP, the producer must develop a conservation security plan (also known as a conservation stewardship plan) that identifies the land and resources to be conserved; describes the tier of conservation security contract and the particular conservation practices to be implemented, maintained, or improved; and contains a schedule for the implementation, maintenance, or improvement of these practices. This plan must be submitted to and approved by NRCS. According to NRCS, about 1.8 million farmers and ranchers nationwide are potentially eligible for CSP. However, the agency has chosen a staged approach to implementing CSP, based on limiting program sign-ups to selected, priority watersheds each year. In part, this reflects CSP’s newness. As with any new program, there have been birthing and growing pains as the agency has grappled with developing program regulations, training its staff, reaching out to producers and stakeholder groups, and adjusting program implementation based on lessons learned from one program sign-up year to the next. NRCS also chose a staged approach in light of limited program funding—Congress authorized caps for total CSP funding in fiscal year 2004 and for salaries and expenses of personnel to carry out CSP in fiscal years 2005 and 2006—and the statutory limitation on the amount of CSP funding that can be used for technical assistance—NRCS cannot incur technical assistance costs in excess of 15 percent of the funds expended in a given fiscal year for CSP. According to NRCS, focusing on priority watersheds reduces the administrative burden on applicants and the costs of processing a large number of applications that cannot be funded.
In addition, the agency notes that everyone in the United States lives in a watershed and, because each year producers in approximately one-eighth of the nation’s 2,119 watersheds will be eligible for the sign-up, all eligible producers will have the opportunity to participate over an 8-year period, subject to available funding. NRCS held the first CSP sign-up in fiscal year 2004. Nearly 2,200 farmers and ranchers participated in the program that year with contracts covering nearly 1.9 million acres in 18 watersheds in 22 states. Producer payments totaled about $34.6 million in fiscal year 2004, and NRCS used about $5.9 million for technical assistance. For fiscal year 2005, NRCS approved over 12,700 CSP contract applications, covering nearly 9 million acres in 220 watersheds in 50 states and Puerto Rico. These 220 watersheds included the 18 watersheds covered by the fiscal year 2004 sign-up. Producer payments totaled about $171.4 million (including payments for contracts approved in 2004) in fiscal year 2005, and NRCS used about $30.2 million for technical assistance. In January 2006, USDA announced that it plans to offer CSP contracts to producers in an additional 60 watersheds during fiscal year 2006, with participants receiving an estimated $220 million (including payments for contracts approved in 2004 and 2005). More detail on the CSP payments made in fiscal years 2004 and 2005 is summarized in appendix II, including information on these payments by tier, payment type, and enhancement type. Figure 1 shows the watersheds included in the fiscal year 2004 and fiscal year 2005 CSP sign-ups. 
In general, NRCS implements CSP by (1) offering periodic sign-ups in specific, priority watersheds across the Nation; (2) requiring producers to complete a self-assessment, including a description of conservation activities on their operations, to determine their eligibility for the program; (3) scheduling interviews with eligible producers in local NRCS field offices to review the producers’ applications; (4) determining which program tier and enrollment category an eligible producer may participate in; (5) selecting the enrollment categories to be funded for CSP contracts; (6) developing conservation security plans and contracts for the producers selected; and (7) making the associated payments. Appendix III provides a flowchart describing the CSP application and enrollment process in more detail. Applicants may submit only one application for each sign-up. Producers who are participants in an existing CSP conservation stewardship contract are not eligible to submit another application. Many stakeholders refer to CSP as an “entitlement” program. However, the farm bill does not refer to the creation of any entitlements under the program. Moreover, the legislation provides the Secretary of Agriculture with discretion to establish additional eligibility requirements, provides that the Secretary must approve a producer’s conservation security plan before entering into a conservation security contract, and only states that payments “may” be received under three tiers of contracts. Thus, CSP is not an entitlement program. Finally, many proponents of CSP maintain that this program will help U.S. producers stay competitive in the world market while providing significant societal environmental benefits. These proponents note that traditional farm commodity programs tend to distort trade and will thus face increasing pressure for reduction or elimination in the next round of World Trade Organization talks. 
However, they note, “green payments” programs such as CSP that are designed to promote conservation and stewardship of natural resources on working lands are more likely to survive in these talks. They also maintain that several European countries are far ahead of the United States in using green payments programs to provide financial assistance to their producers while promoting conservation and environmental stewardship. CSP is generally regarded as the most comprehensive green payments program developed in the United States, primarily because CSP promotes integrated, whole-farm planning for conservation. Information on other USDA conservation programs is presented in appendix IV. Various factors explain why CBO and OMB estimates of CSP costs generally have increased over time. Most important, CBO and OMB officials indicated that little information was available regarding how the program would be implemented at the time of its inception in May 2002. Subsequent estimates have been better informed because USDA had developed and implemented program regulations and had data on the number of participants from program sign-ups. In addition, increases in estimated CSP costs also can be attributed to revising the time frames on which the estimates were based. In general, this involved replacing estimates from earlier years during which the program was not operational, or minimally operational, with later years during which the program is expected to be more fully operational. Over time, CBO and OMB each made several estimates of CSP costs for specified 10-year periods, and these estimates generally increased. CBO and OMB developed these estimates as part of their responsibilities for budget scoring (also known as scorekeeping). These responsibilities are discussed in appendix V.
As reflected in figure 2, CBO and OMB estimates generally increased during the period 2001 through 2006, although at times the estimates dropped because of legislative actions to cap or limit CSP funding. Appendix VI also provides a more detailed time line of legislative actions and CBO and OMB 10-year estimates of CSP costs during the period 2001 through 2006. As shown in the figure, CBO made its first estimate of CSP costs—$3.7 billion for fiscal years 2002 through 2011—in December 2001, about 5 months before the farm bill was enacted (May 13, 2002). At the time, CBO based its estimate on the Senate’s version of the farm bill; the House of Representatives’ version of the farm bill did not include provisions for CSP at that time. In early May 2002, just before the farm bill’s enactment, CBO estimated CSP costs to be $2 billion for the same 10-year period. CBO officials cited changes in the final bill’s provisions as the basis for the reduction in its estimate. They also cited an agreement that they said had been reached by members of the Senate Agriculture Committee that only $2 billion of the new funds to be made available for the farm bill’s conservation title would be used for CSP. The farm bill, as enacted, does not specifically include a $2 billion limit; however, it does include language that CBO officials said would result in reducing program costs to about $2 billion. OMB also made its first estimate of CSP costs—$5.9 billion for fiscal years 2002 through 2011—in May 2002, soon after the farm bill’s enactment. OMB officials said that, although they were aware of an agreement reached in the Senate to limit CSP funding to $2 billion, because this limit was not included in the final legislation, they disregarded it in making their cost estimate. As a result, OMB’s cost estimate was nearly three times larger than CBO’s estimate, although both estimates were made in May 2002, were based on the same farm bill provisions, and covered the same 10-year period.
As indicated by the figure, subsequent CBO and OMB estimates of CSP costs were more similar and generally increased, except in cases where one or both agencies’ estimates reflected legislative actions to cap or limit CSP funding. For example, in January 2003, CBO estimated CSP costs to be $7.8 billion for the 10-year period fiscal years 2004 through 2013. In February of that year, Congress enacted legislation that capped CSP funding at approximately $3.8 billion through fiscal year 2013 in order to, according to OMB, generate savings for drought disaster assistance. The following month, in light of this cap, OMB estimated CSP costs to be $3.8 billion for fiscal years 2004 through 2013. However, in January 2004, Congress repealed the $3.8 billion cap. As a result, subsequent OMB and CBO estimates increased substantially. Congress acted again to cap CSP funding in October 2004, passing legislation to limit the program’s funding to approximately $6 billion for the 10-year period fiscal years 2005 through 2014. This action was taken to offset emergency supplemental appropriations for hurricane disaster assistance. Later that month, because of the cap, OMB estimated CSP costs to be $6 billion for the same period. However, in January 2004, about 9 months earlier, OMB had estimated the costs for this 10-year period to be $9.7 billion. In 2005, both agencies estimated CSP costs to be $6.7 billion for the 10-year period fiscal years 2006 through 2015. In large measure, these estimates reflected the $6 billion legislative cap covering fiscal years 2005 through 2014. However, that cap was scheduled to expire at the end of fiscal year 2014, meaning the estimated costs for fiscal year 2015 were not subject to a cap. In February 2006, Congress repealed the $6 billion cap, replacing it with caps of $1.954 billion for fiscal years 2006 through 2010 and $5.650 billion for fiscal years 2006 through 2015.
The estimate made by OMB in January 2006—$6.2 billion for fiscal years 2007 through 2016—anticipated this change. CBO’s March 2006 estimate for fiscal years 2007 through 2016 was $6.4 billion. According to CBO and OMB officials, the primary reason for increases in their estimates of CSP costs over time is that subsequent estimates have been better informed. Specifically, subsequent estimates have been better informed by USDA’s development and implementation of program regulations and data from the results of program sign-ups. As a result, these estimates more accurately capture program costs, resulting in higher estimates. At CSP’s inception in May 2002, little information was available about how it would be implemented and the expected level of producer participation. CBO and OMB officials noted that the farm bill provided a basic framework for CSP and only a very limited basis for cost estimation, giving USDA wide discretion on how to implement the program. Consequently, these officials had to rely on their professional judgment and past experience with estimating costs when making assumptions about key aspects of CSP, such as the level of participation, number of acres enrolled, land rental rates, and the amount and types of payments made. However, according to CBO and OMB officials, CSP’s uniqueness made this more difficult as these officials had not made cost estimates for a similar program in the past. Later, NRCS’s development of CSP regulations provided key information on how the program would be implemented. In this regard, NRCS issued an advance notice of proposed rulemaking in February 2003; a proposed rule in January 2004; an interim final rule in June 2004; and an amended interim final rule in March 2005. 
For example, the proposed rule indicated that NRCS planned to limit enrollments to specific sign-up periods rather than allow continuous sign-ups; limit CSP enrollment to producers in selected, priority watersheds rather than offer nationwide enrollment for a given sign-up; and prioritize funding by way of enrollment categories to ensure that producers with the highest commitment to conservation are funded first. The amended interim final rule incorporates each of these elements. In addition, CBO and OMB officials had informal conversations with NRCS officials to obtain information on how the agency intended to implement the program. For example, CBO officials said that they learned that NRCS anticipated program participation would be greater than it originally expected and that enhancement payments would be a more important component of total producer payments than originally planned. OMB also reviewed and commented on NRCS’s proposed and interim final rules before their publication in the Federal Register. CBO and OMB officials also indicated that they conferred with one another from time to time to discuss issues related to estimating CSP costs, although the agencies arrived at their estimates independently. Finally, CBO and OMB officials stated that after making their initial CSP cost estimates at the program’s inception, they had more time to develop subsequent estimates, including more time to gather and consider program implementation information. They also said that their future estimates of program costs will be even better informed as more data become available from each annual CSP sign-up, including data on program participation and the mix of payments made by tier and type. CBO and OMB officials also attributed increases in their CSP cost estimates to revisions in the time frames on which the estimates were based.
In making their initial estimates in May 2002, CBO and OMB took into account a time lag assumed for program development and implementation by NRCS, which included the time needed for rulemaking and public comment, training NRCS field staff, and outreach to producers and stakeholder groups. Thus, these initial estimates, covering the 10-year period fiscal years 2002 through 2011, included years in which the program was either not expected to be operational, such as fiscal years 2002 and 2003, or minimally operational, such as fiscal year 2004. For example, in CBO’s May 2002 estimate, the costs associated with these first 3 fiscal years totaled only $22 million. In contrast, CBO’s March 2004 estimate, covering a later 10-year period, fiscal years 2005 through 2014, assumed the program would be fully operational in each of these years. CBO’s cost estimates for the three additional fiscal years—2012, 2013, and 2014—totaled $3.1 billion. Thus, the substitution of fiscal years 2012 through 2014 in the latter estimate for fiscal years 2002 through 2004 in the earlier estimate amounted to an increase of more than $3 billion and helps to explain, in part, why the subsequent estimate was greater. Table 2 provides further information on CBO estimates of CSP costs for various 10-year periods during fiscal years 2002 through 2016. A similar pattern can be seen with OMB’s estimates. OMB’s May 2002 estimate, covering the 10-year period fiscal years 2002 through 2011, included fiscal years 2002, 2003, and 2004, years in which the program was assumed not to be implemented or only minimally implemented. OMB’s estimate for these 3 fiscal years was $98 million. In contrast, OMB’s January 2004 estimate, covering a later 10-year period, fiscal years 2005 through 2014, included three additional years, fiscal years 2012, 2013, and 2014. OMB’s estimate for these years was $4.049 billion. 
Thus, the substitution of fiscal years 2012 through 2014 in the latter estimate for fiscal years 2002 through 2004 in the earlier estimate amounted to an increase of about $3.95 billion and helps to explain, in part, why the subsequent estimate was greater. Table 3 provides further information on OMB estimates of CSP costs for various 10-year periods during fiscal years 2002 through 2016. The farm bill provides USDA general authority to control CSP costs. While USDA’s NRCS has established several cost control measures under this statutory authority, its efforts to restrict program spending could be improved by addressing (1) weaknesses in internal controls used to ensure the accuracy of program payments and (2) inconsistencies in the wildlife resource criteria used by NRCS state offices to determine producer eligibility for Tier III, the highest CSP payment level. Furthermore, because of inconsistencies in wildlife resources criteria, NRCS cannot ensure that CSP is achieving its intended wildlife habitat benefits. The farm bill establishes some eligibility requirements for CSP but gives USDA the authority to establish additional requirements that would enable it to control CSP costs, even absent legislative caps on CSP funding. For example, the farm bill establishes some producer and land eligibility requirements for CSP but also states that a payment under CSP “may” be received under three tiers of conservation contracts and that the Secretary of Agriculture “shall” determine and approve the minimum eligibility requirements for each tier—giving USDA the authority to establish additional eligibility requirements that would enable it to control program participation and, therefore, CSP costs. This provision, for example, gives the Secretary discretion to establish a tier eligibility requirement that a producer be located within a specified watershed. 
The Secretary also must approve a producer’s conservation stewardship plan—as meeting both the statutory eligibility requirements and any tier requirements—for the producer to be eligible to participate in CSP. In addition, the Secretary must ensure that the lowest cost conservation practice alternative is used to fulfill the purposes of the plan. Furthermore, the farm bill sets a payment limit for each tier level ($20,000 for Tier I; $35,000 for Tier II; and $45,000 for Tier III) but, in stating that payments shall be determined by the Secretary and shall not exceed such amounts, provides discretion to the Secretary to further limit the payment amounts. Under the statutory authority provided by the farm bill, NRCS has implemented a number of CSP cost control measures to restrain program spending, primarily by either restricting CSP enrollment or limiting payments to individual producers. For example, NRCS restricts CSP participation by limiting program enrollment each year to producers in specified, priority watersheds. In addition, NRCS limits annual stewardship payments to 25, 50, and 75 percent of the maximum amount that the farm bill allows for Tiers I, II, and III, respectively. Key cost control measures—found either in the farm bill, in CSP regulations, or in the program sign-up notice—in place for the fiscal year 2005 CSP sign-up are described in table 4. Some fiscal year 2004 CSP contract payments exceeded applicable payment limits established in the farm bill. As discussed, the farm bill limited annual contract payments to an individual or entity to $20,000 for Tier I; $35,000 for Tier II; and $45,000 for Tier III. However, we found that 409 (19 percent) of the 2,180 fiscal year 2004 CSP contract payments exceeded these limits. Specifically, 95 (12 percent) of Tier I payments exceeded $20,000; 209 (24 percent) of Tier II payments exceeded $35,000; and 105 (21 percent) of Tier III payments exceeded $45,000. (Tables 12, 13, and 14 in app. 
II show the distribution of fiscal year 2004 contract payments for Tiers I, II, and III, respectively.) According to NRCS officials, these contract payments exceeded the statutory limits because they included an “advance” enhancement payment component. These officials noted that NRCS did not intend for this advance component to be included in the annual contract payment limit because it was a one-time payment. Furthermore, they said that any producer who received an advance enhancement payment would have that payment (generally limited to $10,000) offset through deductions over the remaining years of that producer’s CSP contract. For example, for a producer whose contract had 9 remaining years, NRCS would deduct one- ninth of the advance enhancement payment in each of these years. Thus, over the life of a contract, no producer would receive more than the maximum total possible payment (e.g., $450,000 over 10 years for a Tier III contract). NRCS officials explained that for the fiscal year 2004 CSP sign- up, NRCS, using its borrowing authority, obtained the maximum amount of funding available, or $41.443 million. However, because of lower than anticipated producer participation in CSP that year, NRCS did not need all of this money to make annual contract payments to producers. NRCS decided to use the remaining amounts—about $13.6 million—to make a one-time advance enhancement payment to most (2,070 of 2,180) of the producers enrolled in CSP that year. In addition, according to NRCS officials, in subsequent years, the offsetting deductions made for these fiscal year 2004 advance enhancement payments would result in more funding being available for new CSP contracts. We plan to pursue with USDA’s Office of General Counsel the availability of remaining CSP funds for advance enhancement payments that, when included with annual contract payments, exceed the statutory payment limits. 
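The offset mechanics that NRCS officials described can be sketched with a short calculation. This is an illustrative model only, assuming a Tier III contract paid at the $45,000 annual limit, a $10,000 advance, and equal deductions over the 9 remaining contract years; the function and variable names are ours, not NRCS’s.

```python
# Sketch of the advance enhancement payment offset NRCS officials described:
# a one-time advance (generally limited to $10,000) is recouped through equal
# deductions over the remaining years of the contract, so the lifetime total
# never exceeds the tier maximum. Names and layout are illustrative only.

def remaining_year_payments(annual_payment, advance, remaining_years):
    """Annual payments after deducting an equal share of the advance."""
    deduction = advance / remaining_years
    return [annual_payment - deduction for _ in range(remaining_years)]

# Assumed example: Tier III contract at the $45,000 annual limit, a $10,000
# advance in year one, and 9 contract years remaining after that.
annual = 45_000
advance = 10_000
payments = remaining_year_payments(annual, advance, 9)

# Year one includes the advance; later years carry the offsetting deductions.
lifetime_total = (annual + advance) + sum(payments)
print(round(lifetime_total))  # 450000, the 10-year Tier III maximum
```

Under these assumptions the first-year payment ($55,000) exceeds the $45,000 annual limit, which mirrors how the fiscal year 2004 payments came to exceed the statutory caps, while the lifetime total still respects the maximum.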
In addition to the cost control measures in the farm bill and CSP regulations, USDA and NRCS have established internal controls that help to ensure the accuracy and appropriateness of payments made through agricultural conservation programs, including CSP. These controls, also referred to as management controls, include the organizational policies and procedures used to reasonably ensure that (1) programs achieve their intended results; (2) resources are used consistent with agency and departmental missions; (3) programs and resources are protected from waste, fraud, and mismanagement; (4) laws and regulations are followed; and (5) reliable and timely information is obtained, maintained, reported, and used for decision-making. (More specific information on USDA and NRCS internal controls is presented in app. VII.) However, recent reviews of these internal controls done by NRCS’s Oversight and Evaluation (O&E) Staff and the USDA Inspector General raise concerns regarding the adequacy of some of these controls to preclude improper payments being made under CSP. Although NRCS has established internal control guidance for CSP, implementation of these controls has sometimes been criticized. For example, in reviews it conducted in 2005, NRCS’s O&E staff found problems with several aspects of the agency’s implementation of CSP, including its implementation of some internal controls. (We examined draft reports related to these reviews in January 2006; NRCS considers the information contained in these drafts to be predecisional and subject to change pending management review and the agency’s preparation of management action plans describing its response to the reports’ recommendations.) In assessing internal controls, the O&E staff conducted work at NRCS field offices located in 18 watersheds (in 13 states) that were eligible for either the fiscal year 2004 or fiscal year 2005 CSP sign-up. 
Among other things, the staff found weaknesses in quality assurance and case file documentation. For example, the staff found that 12 of 13 NRCS state-level Quality Assurance Plans reviewed did not include specific CSP components such as those related to conservation planning and application, that NRCS’s Conservation Programs Manual (sec. 518.75 (b)) states must be included. In addition, the staff found that 33 of 55 fiscal year 2004 CSP contracts studied had not had a contract review. The Conservation Programs Manual (sec. 518.101) provides that “the designated conservationist will review the contract annually and document that the provisions of the contract are followed.” According to the O&E staff, the absence of a contract review could result in payments being made for enhancements that are not being done or not yet completed as scheduled in the producer’s conservation security plan. Regarding case file documentation, the O&E staff found that many conservation stewardship plans were missing components. For example, most plans included components such as maps and map attribute information, but information needed to evaluate the effectiveness of a plan in achieving its environmental objectives was either missing or incomplete in up to 60 percent of the plans. The preparation of conservation stewardship plans is required by the farm bill and, according to the Conservation Programs Manual (sec. 518.70), this plan “is the basis for a conservation stewardship contract.” In general, a plan identifies the objectives for the associated contract, the time frames for implementing new practices, enhancements that will impact payment levels over the life of the contract, and additional measures needed to move to a higher tier level. 
In light of these findings, O&E staff offered several tentative recommendations related to revising NRCS’s written guidance documents, developing a checklist for staff to use in compiling conservation stewardship plans, improving management oversight, and providing staff further training. In addition, other aspects of NRCS’s internal controls have been criticized. For example, in January 2005, the USDA Inspector General reported that (1) NRCS had neither identified the internal control measures in place to preclude, or detect in a timely manner, improper payments for the programs it administers, including CSP, nor determined whether such controls were in operation and (2) NRCS had not conducted risk assessments of potential improper payments for these programs. In addition, USDA reported several material weaknesses in its financial and accounting systems and information security program in its fiscal year 2005 Performance and Accountability Report. See appendix VII for further discussion of these matters. In its planning documents, NRCS notes that the nation made a massive financial commitment to conservation in the 2002 farm bill and thus NRCS must manage the taxpayers’ money well, including documenting how these funds have been spent. Among other things, the agency said it would develop processes to better record obligations and improve the accuracy and timeliness of its financial information. However, until actions are completed to correct these internal control problems, NRCS cannot be certain that contract payment information for CSP and other programs is accurate, which increases the potential for improper payments under these programs. NRCS’s efforts to control program spending may be weakened by inconsistencies in NRCS state offices’ determinations of producer eligibility for the three CSP payment tiers.
Several NRCS state officials expressed concerns about such inconsistencies, suggesting that some state offices may have been more lenient than their own state in determining producers’ eligibility for CSP payments. In particular, several NRCS state officials had specific concerns about inconsistencies in the wildlife habitat assessment criteria that NRCS state offices use, in part, to determine applicant eligibility for Tier III, the highest CSP payment level. The farm bill requires a producer to meet minimum standards for all applicable resource concerns on the entire agricultural operation, which would include wildlife habitat, to be eligible for Tier III payments. For the fiscal year 2004 CSP sign-up, NRCS provided limited guidance to its state offices that were responsible for developing the assessment criteria that were used to determine whether a producer met minimum standards for protecting wildlife habitat. However, a post-sign-up debriefing of NRCS headquarters and state officials to identify lessons learned indicated that the state offices developed assessment criteria that were extremely variable, contributing to significant differences in the rate of CSP participation and payments at the Tier III level among the various watersheds included in the sign-up. According to documentation based on this debriefing, this variability in assessment criteria was attributed to (1) differences in the type of assessment criteria used (i.e., some states used targeted species assessment criteria while others used general wildlife assessment criteria) and (2) differences among the states’ general wildlife assessment criteria. Table 5 shows the Tier III participation and payment rates for each of these watersheds. As shown in the table, the percentage of total contracts in Tier III varied from a low of 0 in one watershed to a high of 79 percent in another watershed. Part of this variation may be attributed to differences in land uses among watersheds. 
For example, land that is in an intensive agricultural use, such as cropland, tends to be less suitable as wildlife habitat than land that is not used intensively such as rangeland. However, even among watersheds in which CSP enrollments were over 90 percent cropland—Auglaize, Blue Earth, East Nishnabotna, Kishwaukee, Little, Little River Ditches, Lower Chippewa, Raystown, and St. Joseph—the percentage of total contracts in Tier III varied from 0 to 58 percent, and the percentage of payments going to Tier III contracts ranged from 0 to 75 percent. In response to the variation in wildlife habitat assessment criteria used during the fiscal year 2004 sign-up and related differences in Tier III participation, NRCS’s Wildlife Team, responsible for technical matters concerning wildlife habitat under CSP, developed national guidance that NRCS state offices were to follow in creating their criteria for subsequent sign-ups. The national guidance was provided to state office staff during training sessions held before the fiscal year 2005 CSP sign-up. The Wildlife Team developed the national guidance based on NRCS’s CSP regulations that state that the minimum requirement for wildlife habitat is considered achieved when a producer’s level of treatment and management results in an index value of at least 0.5 based on a general or species- specific habitat assessment guide. A Wildlife Team official said this 0.5 index value corresponds to 50 percent of the potential habitat for a given land area and stated that the national guidance was developed accordingly. He noted that, because habitat needs differ across the nation, it is not possible to develop one set of criteria that would work for the whole country and apply to all situations. Because of these differences, the national guidance instructs each state to define its own minimum criteria for each of the listed wildlife resource components in the national guidance based upon the state’s own unique set of conditions. 
For example, for rangeland, the national guidance identifies vegetative height management during nesting season as a component that must be addressed and instructs state offices to define the minimum foliage height of grasses. Despite this flexibility, the official said that the purpose of this national guidance was to avoid the wide variations in criteria that led to large discrepancies and inconsistencies in the fiscal year 2004 sign-up. According to the national guidance, NRCS state offices’ general wildlife habitat assessment criteria for cropland must address the following six wildlife resource components:
• Amount of noncrop vegetative cover. These areas include woodlots, windbreaks, field corners, hedgerows, grassed areas, wetlands, or riparian areas managed for wildlife. According to the guidance, state offices must define a minimum percentage of noncrop vegetative cover within or adjacent to offered cropland fields. A state office’s criteria for this component must be met for each cropland field.
• Size of noncrop vegetative cover. State offices must define a minimum dimension for these areas. According to a Wildlife Team official, an example is a minimum width.
• Interspersion of noncrop vegetative cover. State offices must define a minimum distance from all parts of cropland fields to noncrop vegetative cover.
• Condition of noncrop vegetative cover. Minimum standards for the composition and structure of the noncrop vegetative cover must be defined. Examples include minimum plant heights and restrictions on mowing.
• Conditions for lakes, ponds, wetlands, and streams. Minimum conditions, such as buffer widths, must be defined.
• Crop residue management. Minimum levels of crop residue must be defined.
According to Wildlife Team officials, the national guidance instructed each NRCS state office to develop wildlife habitat assessment criteria that consisted of questions corresponding to the wildlife resource components in the national guidance.
For each component of the national guidance, these officials said these questions were to include specific criteria established by the state offices and were intended to determine if a CSP applicant was meeting these criteria and thus was addressing the wildlife habitat resource concern. In general, the phrasing and number of questions that state offices included in these assessment criteria, as well as the overall design of the assessment criteria, varied. For example, one state office’s assessment criteria had nine questions and required a “yes” response to each question. Another state office’s assessment criteria included six questions and required a “yes” response to each question. In reviewing the wildlife habitat assessment criteria that NRCS state offices used in the fiscal year 2005 sign-up, we found that some NRCS state offices used criteria that were inconsistent with the national guidance. For example, the design of the assessment criteria used for cropland in three states made it possible for NRCS to determine that a producer was addressing the wildlife habitat resource concern even though that producer may not have met the state criteria for each of the six resource components identified in the national guidance. Although these three state offices’ wildlife habitat assessment criteria included a question or questions that generally related to each of the national guidance’s components, the state offices required “yes” responses to only five of the seven questions listed in the assessment criteria. Thus, in effect, these states did not require producer compliance with all aspects of their state criteria or, by extension, all six components of the national guidance. 
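The eligibility-design issue just described can be illustrated with a small sketch: if a state requires “yes” answers to only five of seven checklist questions, a producer can be found to address the wildlife habitat resource concern while failing a component that the national guidance requires. The checklist content and responses below are invented for illustration and do not reproduce any state’s actual criteria.

```python
# Hypothetical seven-question state checklist. The producer here fails the
# noncrop vegetative cover component but answers "yes" to everything else.
responses = {
    "amount of noncrop cover meets state minimum": False,
    "noncrop cover meets minimum dimension": True,
    "noncrop cover within maximum distance of fields": True,
    "noncrop cover composition and structure adequate": True,
    "water body buffers meet minimum width": True,
    "crop residue meets minimum level": True,
    "additional state-specific criterion": True,
}

yes_count = sum(responses.values())

# Design consistent with the national guidance: every component must pass.
passes_all_components = all(responses.values())

# Design used by the three states described above: five "yes" answers suffice.
passes_five_of_seven = yes_count >= 5

print(passes_all_components, passes_five_of_seven)  # False True
```

Under the stricter design the producer would not meet the wildlife habitat standard, but under the five-of-seven design the same responses would qualify, which is the inconsistency the text describes.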
A Wildlife Team official explained that although NRCS has not undertaken a review to determine whether producers have qualified for Tier III payments under this scenario, based on informal discussions with field office staff, this official concluded that some producers received such payments during the fiscal years 2004 and 2005 sign-ups. In addition, another Wildlife Team official said it was particularly problematic that a producer could receive a Tier III payment in these states without meeting the state criteria related to the amount of noncrop vegetative cover. According to this official, this component of the national guidance is particularly important for cropland because it is intensively farmed and generally unsuitable for wildlife habitat. Thus, the creation or preservation of areas of noncrop vegetative cover associated with cropland is critical to providing adequate wildlife habitat. As a result of these inconsistencies with the national guidance, producers in these states could qualify for Tier III payments even though they might not be providing habitat as intended by the national guidance and might not have qualified for Tier III payments in another state that used criteria that more closely followed the national guidance. In addition, the use of criteria that are inconsistent with the national guidance reduces NRCS’s ability to ensure that CSP is achieving its intended wildlife habitat benefits. If producers are not providing the wildlife habitat benefits intended by the national guidance, the environmental benefit achieved per dollar of CSP spending may be reduced, and CSP cost control measures would be weakened. Furthermore, some NRCS state officials said such variability in state assessment criteria could lead to pressures for more lenient payment eligibility determinations within their own states. 
According to these officials, when producers in a state that is conforming to the national wildlife habitat guidance see that other states are using more lenient criteria, they may pressure their NRCS state office to adopt more lenient criteria as well. NRCS Wildlife Team officials agreed with our assessment that some NRCS state offices used wildlife habitat assessment criteria for the fiscal year 2005 sign-up that were not consistent with the national guidance. In addition, these officials said that NRCS should conduct field tests of states’ criteria to ensure that these criteria are consistent with the national guidance and to determine the extent to which Tier III contracts provide adequate wildlife habitat benefits. However, they cited time constraints as the primary reason that states’ criteria have not been field tested and they indicated, as of February 2006, that NRCS does not have plans to do this testing. Regarding reasons why some state offices have not developed criteria consistent with the national guidance, these officials noted that some state office officials hold the view that CSP is a working lands program and, therefore, should not place too much emphasis on wildlife habitat or force a producer to take land out of production in order to create the habitat needed to qualify for a Tier III payment. Some of the state officials we contacted corroborated this point. In addition, the Wildlife Team officials noted that some state office officials might not have understood what guidance they were supposed to follow during the fiscal year 2005 sign-up because NRCS’s Conservation Programs Manual—the principal source of guidance for NRCS field office staff for implementing conservation programs—had no explicit reference to the national guidance. Accordingly, the Wildlife Team officials said they had recommended to NRCS’s programs office that a reference to the national guidance be included in the manual. 
They opined that inclusion of this reference would emphasize the importance of the national guidance to the agency’s field staff. Finally, some NRCS state officials also expressed concerns about other inconsistencies among state offices in determining producer eligibility for certain CSP payments. In particular, they cited inconsistencies in states’ determinations that producers are sufficiently addressing water quality issues. According to NRCS officials, the agency has been aware of this issue since the fiscal year 2004 sign-up when it relied on state-based standards to determine if CSP applicants were meeting eligibility requirements for water quality concerns. In the 2005 sign-up, to increase consistency, NRCS required its state offices to develop water quality checklists based on national criteria to assess applicant eligibility regarding water quality issues. These checklists were to address all critical water quality concerns, including those related to nutrients, pesticides, and sediment. In the 2006 sign-up, to further increase consistency, NRCS developed a national water quality eligibility “tool” that uses indices and scales to achieve an overall water quality assessment rating for each applicant. Using the tool, NRCS assigns points for an applicant’s current conservation activities and the level of water quality protection those activities provide. The farm bill and CSP regulations include various measures that reduce the potential for duplication between CSP and other USDA conservation programs. For example, as authorized in the statute, CSP can reward producers for conservation actions that they have already taken, whereas other programs generally provide assistance to producers to encourage them to take new actions intended to address conservation problems on working lands or to idle or retire environmentally sensitive land from production. 
In addition, USDA regulations establish higher minimum eligibility standards for CSP than exist for other programs, helping to differentiate the applicant pool for CSP from these programs. However, the possibility remains that producers could receive duplicate payments for the same conservation action from CSP and other programs, and such duplication has occurred. In addition, NRCS does not have a comprehensive process to preclude or identify such duplicate payments. CSP operates under a number of statutory provisions that distinguish it from other USDA conservation programs and make duplicate payments less likely. Specifically, the farm bill
• explicitly prohibits duplicate payments under CSP and other conservation programs for the same practices on the same land;
• provides incentives to producers, through CSP’s Tier III payments, to address all applicable resource concerns on entire agricultural operations (i.e., whole-farm planning);
• provides that CSP may reward producers for maintaining conservation practices that they have already undertaken, whereas other programs generally provide assistance to encourage producers to take new actions to address conservation problems on working lands or to idle or retire environmentally sensitive land from agricultural production; and
• establishes several types of CSP payments—stewardship, existing practice, and enhancement payments—that are unique to CSP and not offered under other programs.
In addition, other farm bill provisions reduce potential duplication by prohibiting certain payments from being made through CSP. For example, CSP payments cannot be made for
• construction or maintenance of animal waste storage or treatment facilities or associated waste transport or transfer devices for animal feeding operations; or
• conservation activities on lands enrolled in the Conservation Reserve Program, the Wetlands Reserve Program, and the Grassland Reserve Program.
Furthermore, if a producer receives payments under another program— such as a commodity price support program—that are contingent on the producer’s compliance with requirements for the protection of highly erodible land and wetlands, the farm bill only authorizes a CSP payment on that land for practices that exceed those requirements. In addition to farm bill provisions that reduce potential duplication, a number of NRCS regulatory measures and procedures further distinguish CSP from other USDA conservation programs. These include the following: NRCS’s CSP regulations and Conservation Programs Manual elaborate on statutory provisions that prevent producers from receiving payments under CSP for the same practice on the same land. For example, the manual states that a CSP participant may not receive CSP cost-share funding for new conservation practices that were applied with financial assistance from other USDA conservation programs. In addition, the manual states that a participant may not receive a CSP payment for enhancement activities if the participant is also earning financial assistance payments through other programs for the same conservation practice or action on the same land during the same year. CSP regulations establish higher minimum eligibility standards for CSP than exist for other programs, helping to differentiate the applicant pool for CSP from the potential applicants for other programs. For example, to be eligible for a Tier I CSP contract, a producer must already have addressed water and soil quality to a minimum level of treatment. NRCS encourages producers who do not meet these higher standards to apply for assistance under other programs. For the 2005 sign-up, NRCS limited CSP cost-share payments for new conservation practices to 50 percent (65 percent for beginning and limited-resource producers) of implementation costs. 
NRCS allows cost-share payments of up to 75 percent under the Environmental Quality Incentives Program (EQIP) and the Wildlife Habitat Incentives Program (WHIP). Thus, producers have a stronger financial incentive to apply for new conservation practice payments through EQIP or WHIP rather than CSP. In addition, NRCS has limited the number of conservation practices that are eligible for funding through CSP. In any given watershed, CSP payments for new conservation practices were only offered for up to about 20 of the approximately 200 conservation practices that can be funded through EQIP. NRCS has encouraged enhancement payments for conservation actions that exceed the minimum treatment standards required for CSP eligibility. According to NRCS officials, emphasizing enhancements helps to differentiate CSP from other programs, such as EQIP and WHIP, which do not offer similar payments. As discussed, EQIP and WHIP payments generally assist producers in achieving a level of treatment that meets the minimum or nondegradation standard for a conservation activity, as defined by NRCS, which generally is less than the minimum treatment standard for CSP enhancements. Most CSP payments made in fiscal years 2004 and 2005 were for enhancements. In fiscal year 2004, enhancement payments and advance enhancement payments accounted for about 81 percent of total CSP payments. In fiscal year 2005, enhancement payments were 81 percent of total CSP payments. CSP regulations and procedures also provide financial incentives for enhancements. Specifically, in order to receive a larger payment up to the full total payment allowed under each enrollment tier, a producer must agree to implement enhancements because of the limits on stewardship, existing practice, and new practice payments. Stewardship payments are capped under the farm bill and CSP regulations at $5,000 for Tier I, $10,500 for Tier II, and $13,500 for Tier III. 
Furthermore, CSP sign-up notices have limited existing practice payments to a flat rate of 25 percent of the stewardship payment for each tier and have limited new practice payments to $10,000 for each tier. As a result of these limits, the maximum total payment a producer could receive (i.e., the total of the stewardship, existing practice, and new practice payments) without an enhancement payment would be $16,250 for Tier I, $23,125 for Tier II, and $26,875 for Tier III. Therefore, in order to receive the full amount of CSP financial assistance available for an enrollment tier (e.g., $20,000 for Tier I; $35,000 for Tier II; and $45,000 for Tier III), the producer must agree to implement enhancements. In addition, to encourage participants to add new enhancements over the life of a contract, NRCS incorporated variable enhancement payments into the fiscal year 2005 CSP contracts that gradually reduce the annual payments for a contract’s base (initial) enhancements over the contract’s term. Thus, to compensate for this diminishing income, a producer would need to add new enhancements over the life of a contract. Despite farm bill and NRCS regulatory measures and procedures that lessen possible duplication between CSP and other programs, the potential for duplication still exists and has occurred with regard to CSP enhancement payments. For example, although some payments made through CSP are unique to that program, payments for new conservation practices or actions such as nutrient management can be made through CSP and other programs, creating the potential for duplicate payments. In addition, CSP payments for enhancement actions have the potential to overlap with payments under other programs for conservation practices. Regarding the latter possibility, we found a number of cases where duplicate payments had been made for CSP enhancements and conservation practices under other programs for the same conservation action on the same land during the same year. 
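The tier payment arithmetic described above can be sketched as a short calculation. The dollar caps and the 25 percent existing practice rate are taken from the report; the function names and the idea of computing an "enhancement gap" are illustrative only, not part of NRCS's actual payment systems.

```python
# Sketch of the CSP payment limits described in the report (fiscal year
# 2004-2005 sign-ups). Caps and rates come from the report text; the
# function names are illustrative, not NRCS's.

STEWARDSHIP_CAP = {"Tier I": 5_000, "Tier II": 10_500, "Tier III": 13_500}
TIER_TOTAL_CAP = {"Tier I": 20_000, "Tier II": 35_000, "Tier III": 45_000}
EXISTING_PRACTICE_RATE = 0.25  # flat 25 percent of the stewardship payment
NEW_PRACTICE_CAP = 10_000      # per-tier limit set in the sign-up notices

def max_without_enhancements(tier: str) -> float:
    """Maximum total of stewardship, existing practice, and new
    practice payments for a tier, absent any enhancement payment."""
    stewardship = STEWARDSHIP_CAP[tier]
    return stewardship + EXISTING_PRACTICE_RATE * stewardship + NEW_PRACTICE_CAP

def enhancement_gap(tier: str) -> float:
    """Enhancement amount a producer must earn to reach the tier cap."""
    return TIER_TOTAL_CAP[tier] - max_without_enhancements(tier)

# For Tier III, for example, a producer can reach at most $26,875
# without enhancements, against a $45,000 tier cap.
```

This makes concrete why enhancements dominate CSP payments: without them, a Tier III participant can reach only about 60 percent of the tier cap.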
In addition, NRCS lacks a comprehensive process to identify potential duplicate payments or duplicate payments already made. Table 6 summarizes the types of conservation payments available through CSP, EQIP, and WHIP. As indicated in the table, the farm bill allows cost-share payments for the adoption of conservation practices that could be implemented through any of these programs, creating the possibility that a producer could receive duplicate payments for the same conservation practice under CSP and another program. In reviewing fiscal year 2004 contracts and payments data for CSP, EQIP, and WHIP, we did not find evidence of duplicate payments related to funding the adoption of the same conservation practice under CSP and another program on the same operation during the same year. However, the opportunity for such duplicate payments to have been made during fiscal year 2004 was very low because only four producers received CSP payments for the adoption of new conservation practices that year. NRCS officials said that, because the fiscal year 2004 contracts were approved in July 2004, the time remaining in the fiscal year was not sufficient for most CSP participants to implement new conservation practices and receive a payment. In addition, these officials said NRCS encourages producers to use programs other than CSP to obtain financial assistance for new conservation practices. As discussed, these other programs generally offer a higher cost share for new practices than offered under CSP. In the future, greater numbers of producers may receive CSP payments for new conservation practices, increasing the potential for duplicate payments. The potential for duplicate payments also exists between CSP enhancement payments and conservation practice payments made under other programs. Each year, NRCS state offices develop lists of conservation actions eligible for CSP enhancement payments in their states. 
NRCS headquarters officials then review and approve the states’ lists. If the reviewing officials find that a proposed enhancement includes conservation actions that do not exceed the minimum standard for the related conservation practice, as defined by NRCS, they work with the NRCS state office to revise the proposed enhancement. However, some overlap may occur because a given conservation action can have a different purpose under another program. For example, several states offer CSP enhancement payments for the use of conservation crop rotation for the purpose of breaking plant pest cycles to reduce the need for pesticide applications. At the same time, these states offer EQIP funding for the use of conservation crop rotation for the purposes of reducing soil erosion, providing wildlife cover and food, and improving soil organic matter. This overlap increases the potential for a producer to receive two payments for the same conservation action on the same land during the same year. The farm bill prohibits payments under CSP and another conservation program for the same practice on the same land. The CSP manual elaborates on this provision, stating that a participant may not receive a CSP payment for enhancement activities if the participant is also earning financial assistance payments through other programs for the same practice or activity on the same land during the same year. Our file reviews and analysis of NRCS payments data for calendar year 2004 showed that duplicate payments have occurred. Specifically, we found cases where a producer received duplicate payments from CSP and EQIP for performing the same conservation action on the same land during the same year. 
For example, in the course of performing limited file reviews at several NRCS field offices, we found that a producer had received a CSP enhancement payment and an EQIP conservation practice payment for the same conservation action—establishing a small grain cover crop—on the same tract of land during 2004. This producer also was scheduled to receive the same duplicate payments during 2005. Furthermore, our analysis of 2004 payments data for CSP, EQIP, and WHIP revealed other cases in which a producer received a CSP enhancement payment and an EQIP payment for performing a similar conservation action during the same year. Our analysis of these data showed that 172 (or 8 percent) of the 2,180 producers who received a CSP payment in 2004 also received an EQIP payment that year. None of these 2,180 producers received a WHIP payment that year. In analyzing the conservation actions funded for the 172 producers who received both CSP and EQIP payments, we initially identified 72 producers who received payments that appeared to be for similar or related conservation actions and may have been duplicates. Specifically, in aggregate, these producers received a total of 121 payments under each program that were potentially duplicates. We then selected 11 of these producers, who in aggregate received a total of 12 payments under each program, for more detailed analysis. We discussed these 12 cases with NRCS field office officials to determine whether any of these payments were made for implementing the same conservation action on the same land. In 6 of the 12 cases, a producer received a CSP enhancement payment and an EQIP payment for conservation actions that appeared to be similar (e.g., CSP and EQIP payments for nutrient management). In the other 6 cases, a producer received a CSP enhancement payment based on an index score that may have increased as a result of a conservation action for which the producer received an EQIP payment.
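The cross-program comparison described above, identifying producers who received payments under both CSP and EQIP in the same calendar year, amounts to a set intersection over per-program payment records. The record fields and sample data below are assumptions for illustration, not ProTracts's actual schema.

```python
# Illustrative sketch of the payments cross-check described above: find
# producers who received payments under both CSP and EQIP in the same
# year. Payment fields and sample records are assumed, not ProTracts's
# actual schema.

from collections import namedtuple

Payment = namedtuple("Payment", ["producer_id", "program", "year"])

def paid_under_both(payments, year, prog_a="CSP", prog_b="EQIP"):
    """Return the set of producer IDs paid under both programs in a year."""
    by_program = {prog_a: set(), prog_b: set()}
    for p in payments:
        if p.year == year and p.program in by_program:
            by_program[p.program].add(p.producer_id)
    return by_program[prog_a] & by_program[prog_b]

sample = [
    Payment("A001", "CSP", 2004),
    Payment("A001", "EQIP", 2004),
    Payment("B002", "CSP", 2004),
    Payment("C003", "EQIP", 2004),
]
# Only producer A001 was paid under both CSP and EQIP in 2004.
```

In practice the resulting producer set is only a pool of candidates; as the report describes, each candidate's payments must still be examined to determine whether they cover the same action on the same land.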
We discussed the first 6 cases—those in which a producer received a CSP enhancement payment for a conservation action and an EQIP payment for a similar conservation action—with NRCS field office officials. Based on these discussions, we determined that duplicate payments were made in 4 of these 6 cases. For example, in one instance, a producer received a CSP pest management enhancement payment of $9,160 for a conservation crop rotation. On the same parcel of land, the producer also received an EQIP payment of $795 for the same conservation action—conservation crop rotation. Regarding these 4 cases, in 2 instances, NRCS field office officials acknowledged that duplicate payments had occurred, i.e., that the producer received a CSP enhancement payment and an EQIP conservation practice payment for the same conservation action on the same land during the same year. In these cases, these officials said the duplicate payments resulted from simple error. In the other 2 cases, NRCS field office officials held the view that even though the payments were for the same conservation action, if they were made for different conservation purposes (e.g., a CSP-funded conservation crop rotation to break pest cycles and an EQIP-funded conservation crop rotation to improve soil quality), then they were not duplicates. However, the farm bill clearly prohibits payments under CSP and another conservation program for the same practice on the same land. In addition, NRCS’s Conservation Programs Manual elaborates on this provision, stating that a participant may not receive a CSP payment for enhancement activities if the participant is also earning financial assistance payments through other programs for the same practice or activity on the same land during the same year. NRCS state office and headquarters officials agreed with our interpretation that in such situations producers should not receive payments under both programs. 
We also discussed the other 6 cases—those in which a producer received a CSP enhancement payment based on an index score that may have increased as a result of a conservation action for which the producer received an EQIP payment in the same year—with NRCS field office officials. In 4 of these cases, a producer received a CSP soil management enhancement payment based on a soil conditioning index score while also receiving an EQIP payment for conservation practices that reduce soil erosion. For each of these cases, these officials stated that the EQIP- funded conservation practice had contributed to increasing the soil conditioning index score and, as a result, had increased the CSP enhancement payment. For example, a producer may implement an EQIP- funded soil conservation practice that is factored into the calculation of a soil conditioning index score, increasing the index score from 0.2 to 0.5. If CSP soil management enhancement payments in that producer’s state increase by $1.16 per acre for each 0.1 increase in the soil conditioning index, the producer’s enhancement payment would increase by $3.48 per acre. The NRCS field office officials we interviewed had mixed views as to whether these payments were duplicates. We believe such payments were, at least in part, duplicates. However, an NRCS headquarters official stated that such payments are not duplicates. According to this official, EQIP payments are intended to compensate producers for “input” costs associated with installing or initiating conservation actions, while CSP enhancement payments are intended to reward producers for conservation “outputs” (i.e., benefits derived from conservation actions). Therefore, the official said, such payments are not duplicates. We do not agree with this rationale. Payments for producer “input” costs under EQIP are made because of their resulting conservation “outputs,” and payments for CSP conservation “outputs” are made to compensate producer “input” costs. 
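The soil conditioning index arithmetic in the example above can be stated compactly. The $1.16-per-acre rate for each 0.1 index increase and the 0.2-to-0.5 score change come from the report; the assumption of linear scaling and the function name are illustrative only.

```python
# Minimal sketch of the soil conditioning index example above. Rate and
# score values come from the report; the linear scaling assumption and
# function name are for illustration only.

def per_acre_increase(old_score: float, new_score: float,
                      rate_per_tenth: float = 1.16) -> float:
    """Dollar-per-acre change in a soil management enhancement payment
    when the soil conditioning index rises from old_score to new_score."""
    tenths = round((new_score - old_score) / 0.1)  # number of 0.1 steps
    return round(tenths * rate_per_tenth, 2)

# An index increase from 0.2 to 0.5 is three 0.1 steps, so the
# enhancement payment rises by 3 x $1.16 = $3.48 per acre.
```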
In other words, the programs are both compensating the same action but are doing so either before or after the fact. To receive payments from both for the same action would thus clearly be duplication. Moreover, we continue to consider such payments to be inconsistent with both the farm bill prohibition and NRCS’s guidance on duplicate payments. In the other 2 cases related to index scores, the producers received CSP enhancement payments based on a wildlife habitat management index score while also receiving an EQIP payment for conservation practices that may improve wildlife habitat. In one of these cases, the EQIP-funded conservation practice was not taken into consideration in determining the index score because the practice did not affect habitat for the species of concern, bobwhite quail. In the other case, an NRCS field office official stated that, to prevent the payment from being a duplicate, he had not included the EQIP-funded conservation practice in calculating the index score. We agreed that duplicate payments had not occurred in these 2 cases. NRCS headquarters officials stated the agency lacks a comprehensive process, such as an automated system, to either preclude duplicate payments or identify them after a contract has been awarded. Instead, these officials said that NRCS relies on the institutional knowledge of its field office staff and the records they keep to prevent duplicate payments. Several NRCS state officials noted that the field staff are familiar with the assistance that producers in their county receive under various programs and suggested that these staff would reject a CSP application for a conservation activity already financed through another program. 
However, reliance on the institutional knowledge of staff can be problematic, especially since NRCS reported in June 2003 that almost 50 percent of its field-level workforce would be eligible to retire in 5 years, representing a serious loss of knowledge, experience, and institutional memory as these employees are replaced with less-experienced, newly hired employees. In addition, because CSP sign-ups operate under a compressed time schedule, additional staff—who do not have knowledge of local producers’ prior and current participation in conservation programs—are often temporarily relocated from other parts of a state to assist in developing CSP contracts.

A number of NRCS officials acknowledged the need for a comprehensive process to prevent duplicate payments and said NRCS is considering a modification of CSP contract information stored in the Program Contracts System (ProTracts), NRCS’s contract management information system, that would allow the agency to identify potential duplicate payments before CSP contracts are approved. For example, these officials said NRCS is considering a modification to ProTracts that would flag a planned CSP enhancement payment that may duplicate a conservation practice payment made under another program, such as EQIP. However, these officials said such a modification could require adding more detailed information on enhancement payments to ProTracts than currently exists within the system. By the same token, these officials also acknowledged a need to develop a process to efficiently identify duplicate payments—such as those we found—already being made under CSP contracts issued in fiscal years 2004 and 2005. At present, NRCS does not know the extent of these duplicate payments or their aggregate dollar value.
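The kind of pre-award check NRCS is considering could work roughly as follows: flag any planned CSP enhancement whose producer, conservation practice, land tract, and year match a payment already made under another program. The record fields and function below are hypothetical, not the actual ProTracts design.

```python
# Hypothetical sketch of a pre-award duplicate check of the kind NRCS
# is considering for ProTracts. Record fields ("producer", "practice",
# "tract", "year") are assumptions, not the actual ProTracts schema.

def flag_potential_duplicates(planned_csp, other_program_payments):
    """Return planned CSP items matching an existing payment for the
    same practice on the same land for the same producer and year."""
    existing = {
        (p["producer"], p["practice"], p["tract"], p["year"])
        for p in other_program_payments
    }
    return [
        item for item in planned_csp
        if (item["producer"], item["practice"], item["tract"], item["year"])
        in existing
    ]
```

Run before contract approval, a check of this kind would surface cases such as the cover-crop example described earlier, where the same producer, action, tract, and year appeared under both CSP and EQIP.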
Although the total dollar amount of duplicate payments may be relatively small at present, in the future, as the program grows to include more participants, the frequency and total dollar value of duplicate payments could become significant. Furthermore, since CSP and EQIP offer producers multiple-year contracts, these duplicate payments, if undetected, would continue in subsequent years. To the extent that duplicate payments are being made, the effectiveness of CSP and the other programs involved is undermined and, because of limited funding, some CSP enrollment categories or subcategories that otherwise would have been funded may not be funded. As a result, some eligible producers may not receive CSP payments that they otherwise qualify for and would have received in the absence of these erroneous payments. Finally, NRCS has authority to recover duplicate payments. CSP contracts incorporate by reference a clause stating that CSP participants cannot receive duplicate payments. Under a CSP contract, as required in the farm bill, a producer agrees that on violation of any term or condition of the contract the producer will refund payments and forfeit all rights to receive payments or to refund or accept adjustments to payments, depending on whether the Secretary of Agriculture determines that termination of the contract is or is not warranted, respectively.

Despite farm bill provisions and NRCS actions to control CSP costs, inconsistencies in the wildlife habitat assessment criteria used by NRCS state offices for determining producer payments in the highest CSP payment category may undermine these cost controls. Specifically, some state offices have used criteria less stringent than those outlined in the NRCS national guidance, potentially resulting in Tier III payments to producers who are not providing the wildlife habitat benefits intended by the national criteria.
Based on NRCS officials’ observations and the weaknesses we found in some state offices’ criteria, we believe it is highly likely that such payments have occurred. Currently, NRCS does not systematically review and field check its state offices’ criteria so that inconsistencies with the national guidance can be detected and the agency can determine whether Tier III contracts are providing the wildlife habitat benefits intended. Furthermore, because there is no reference to the national guidance in NRCS’s Conservation Programs Manual, some NRCS state and field offices may not know what wildlife habitat assessment criteria to follow or may fail to appreciate the importance of the national guidance. In addition, despite farm bill provisions and NRCS regulations and procedures designed to prevent CSP from duplicating payments made by other conservation programs, the potential for duplication still exists, and duplicate payments for the same practice or activity on the same land have occurred. Duplicate payments reduce the effectiveness of the programs involved and, because of limited funding, may result in some producers not receiving program benefits for which they are otherwise eligible. For these reasons, NRCS also should use its authority to recover duplicate payments already made. At present, NRCS lacks a comprehensive process, such as an automated system, to identify payments that are potential duplicates before they are made. The agency also lacks an effective way to identify duplicate payments already made under existing CSP contracts. Without question, NRCS’s challenge in implementing CSP—a new, unique, and complex conservation program—has been formidable. 
However, we believe that factors such as the substantial increase in conservation funding authorized by the 2002 farm bill; the extent of agriculture’s continuing contribution to impaired soil, water, air, and wildlife habitat; and the importance of farmers and ranchers as stewards of the nation’s natural resources underscore the need for NRCS to manage CSP in a way that ensures consistent program implementation nationwide, achieves intended environmental benefits, and prevents duplicate payments.

To improve NRCS’s implementation of the Conservation Security Program, we recommend that the Secretary of Agriculture direct the Chief of NRCS to take the following four actions:

Review and field check each NRCS state office’s wildlife habitat assessment criteria to ensure that states use consistent criteria and achieve the habitat benefits intended by the national guidance;

Include a reference to the national guidance for wildlife habitat assessment criteria in NRCS’s Conservation Programs Manual;

Develop a comprehensive process, such as an automated system, to review CSP contract applications to ensure that CSP payments, if awarded, would not duplicate payments made by other USDA conservation programs; and

Develop a process to efficiently review existing CSP contracts to identify cases where CSP payments duplicate payments made under other programs and take action to recover appropriate amounts and to ensure that these duplicate payments are not repeated in fiscal year 2006 and beyond.

We provided a draft of this report to NRCS for review and comment. We received written comments from NRCS’s Chief, which are reprinted in appendix VIII. Among other things, NRCS stated that our report provides valuable information that will help NRCS to improve implementation of CSP. NRCS also provided us with suggested technical corrections, which we have incorporated into this report, as appropriate.
NRCS generally agreed with our findings and recommendations and discussed the actions that it has taken, is taking, or plans to take to address our recommendations. Regarding our first two recommendations, while acknowledging that problems exist, NRCS indicated that it recently has taken or is considering corrective actions other than those suggested in our recommendations. For example, because some NRCS state offices have not fully adhered to the agency’s national guidance for wildlife habitat assessment criteria, NRCS said that it issued a national bulletin to all of its state offices during the fiscal year 2006 CSP sign-up to reemphasize the guidance that these offices must use in developing their wildlife habitat assessment criteria. However, while the promulgation of this bulletin should be helpful, we still believe that NRCS should review and field check each state office’s assessment criteria to ensure its adherence to the national guidance. In the second case, in lieu of including a reference in its Conservation Programs Manual, NRCS said that it is proposing that NRCS wildlife biologists develop a special technical note that would describe how the national guidance for wildlife habitat assessment criteria should be used by NRCS state offices. Again, while we support this step, we still believe that the inclusion of a reference in the Conservation Programs Manual to the national guidance would help to emphasize its importance to NRCS state and field-level employees. Regarding our third recommendation, NRCS indicated that other automation features will be developed and incorporated into NRCS’s contracting software to avoid duplicate payments. In the meantime, NRCS said that it had implemented other procedures to help eliminate the occurrence of duplicate payments. 
For example, for the fiscal year 2006 sign-up, NRCS is requiring applicants to complete a form certifying whether or not they are receiving payments from another conservation program on any of the land being offered for enrollment in CSP. In addition, NRCS said it plans to revise the CSP contract appendix to include a statement about prohibitions on duplicative payments. Regarding our fourth recommendation, NRCS said that it has improved management oversight to cross-check payments made to CSP participants and participants under other conservation programs to determine if duplicative payments have been made. If duplicative payments have been made, NRCS said it has contracting procedures that can be utilized to recover the payments.

We also provided a draft of this report to CBO and OMB for review and comment. These agencies provided us with suggested technical corrections, which we incorporated into the report, as appropriate.

We are sending copies of this report to interested congressional committees; the Secretary of Agriculture; the Director, CBO; the Director, OMB; and other interested parties. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or robinsonr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IX.

At the request of the Chairman, Senate Committee on Appropriations, we reviewed issues related to the U.S. Department of Agriculture’s (USDA) implementation of the Conservation Security Program (CSP).
Specifically, we determined (1) why Congressional Budget Office (CBO) and Office of Management and Budget (OMB) cost estimates for CSP generally increased over time; (2) what authority USDA has to control CSP costs and what cost control measures are in place; and (3) what legislative and regulatory measures exist to prevent duplication between CSP and other USDA conservation programs, and what duplication, if any, has occurred. To determine why CSP cost estimates have increased, we interviewed CBO and OMB officials and reviewed documentation they provided. At each agency, we spoke with budget analysts about their agency’s estimating practices, including the types of data, assumptions, and models used to prepare cost estimates. We did not attempt to re-estimate or audit the CBO or OMB estimates or data discussed in this report. For comparison purposes, we also interviewed USDA officials, including Natural Resources Conservation Service (NRCS) and Economic Research Service officials, and reviewed documentation they provided related to NRCS’s benefit-cost assessments of CSP. NRCS prepared these assessments in conjunction with its issuance of interim final and amended interim final rules for the program, published in the Federal Register in June 2004 and March 2005, respectively. In addition, we interviewed officials at the Congressional Research Service (CRS) and reviewed documentation they provided, including CRS reports that discuss CSP cost and implementation issues. We also sought the views of other interested stakeholder organizations, such as farm, conservation, and environmental organizations, as to why the estimated costs of CSP have risen substantially. 
These organizations included the American Farm Bureau Federation, the National Association of Conservation Districts, the Soil and Water Conservation Society, the Sustainable Agriculture Coalition, the Theodore Roosevelt Conservation Partnership, the Wildlife Management Institute, Ducks Unlimited, and Environmental Defense. At each organization, we interviewed knowledgeable officials and reviewed documentation they provided. To determine USDA’s authority to control CSP costs and the cost control measures in place, we reviewed relevant authorizing and appropriations legislation and related legislative history. This legislation includes the Farm Security and Rural Investment Act of 2002 (the farm bill); USDA appropriations acts for fiscal years 2004, 2005, and 2006; and other legislation that capped CSP funding for the 11-year period, fiscal years 2003 through 2013, and for the 10-year period, fiscal years 2005 through 2014. In addition, we interviewed USDA officials and reviewed documentation they provided at NRCS, the Economic Research Service, the Office of the General Counsel, and the Office of Budget and Program Analysis. We also reviewed USDA’s budget explanatory notes for fiscal years 2004 through 2007; NRCS’s CSP regulations and associated public comments and benefit- cost assessments; and NRCS’s Conservation Programs Manual and related guidance pertaining to CSP implementation. Furthermore, we interviewed officials and reviewed documentation they provided at farm, conservation, and environmental organizations and at CRS. Concerning cost control measures, we also examined NRCS internal management controls (internal controls) related to ensuring that CSP cost control measures are properly and consistently implemented and that CSP contract payments are accurately determined and tracked. 
In particular, we focused on controls related to the agency’s (1) verification of producer- reported data used to determine program eligibility and payment levels; (2) monitoring of producer implementation of CSP contract provisions; and (3) oversight of program implementation by its field offices, including oversight of the offices’ compliance with legislative and regulatory program provisions. To do this, we interviewed NRCS officials and reviewed documentation they provided at the Operations Management and Oversight Division of the Office of Strategic Planning and Accountability. This documentation included NRCS’s General Manual and Conservation Programs Manual. It also included an internal draft study prepared by the division’s Oversight and Evaluation Staff regarding CSP’s implementation. Among other things, this draft study discusses internal controls related to the program’s application process, payment tier designation criteria, and award of contracts across tiers and designated watersheds. In addition, we reviewed USDA’s Management Control Manual and Management Accountability and Control Regulation. Furthermore, we reviewed, from USDA, relevant Office of Inspector General reports and the fiscal year 2005 performance and accountability report; and, from NRCS, the strategic plan for fiscal years 2003 through 2008; the fiscal year 2003 performance plan; performance reports for fiscal years 2003 and 2004; the fiscal year 2004 accomplishments report; and business plans for fiscal years 2004 and 2005. Finally, concerning cost controls and related internal controls, we conducted structured interviews with the relevant NRCS official(s)— usually the CSP program manager or Assistant State Conservationist in a given NRCS state office—who had primary responsibility for implementing CSP in each of the 18 priority watersheds included in the fiscal year 2004 sign-up. These 18 watersheds also were among the 220 watersheds included in the fiscal year 2005 sign-up. 
For these interviews, we first developed and pretested a data-collection instrument to guide the interviews. In developing the instrument, we met with officials in NRCS headquarters and reviewed documentation they provided to gain a thorough understanding of CSP implementation issues and related internal controls. To pretest the instrument, we contacted NRCS officials in Indiana and Pennsylvania who were involved in the fiscal year 2004 sign-up. After conducting the pretest, we interviewed the respondents to ensure that (1) the questions were clear and unambiguous, (2) the terms we used were precise, (3) the questions asked were independent and unbiased, and (4) answering the questions did not place an undue burden on the agency officials interviewed. On the basis of feedback from the pretests, we modified the questions as appropriate. We then conducted the structured interview by phone with NRCS officials representing each of the 18 watersheds. Table 7 lists the 18 watersheds included in the fiscal year 2004 sign-up, the lead NRCS state office for each watershed, and the number of CSP contracts awarded in each watershed. We did not conduct structured interviews with officials representing the lead offices for all 220 priority watersheds included in the fiscal year 2005 sign-up because (1) time frames for completing this sign-up and awarding contracts fell beyond the time frames for completing this portion of our work and (2) the 18 watersheds covered by our interviews were included in both the fiscal year 2004 and fiscal year 2005 sign-ups and provided wide geographic coverage. 
To determine what legislative and regulatory measures exist to prevent duplication between CSP and other programs and what duplication, if any, has occurred, we reviewed relevant authorizing legislation and program regulations and interviewed USDA officials and reviewed documentation they provided at NRCS, the Economic Research Service, the Office of the General Counsel, and the Office of the Inspector General. We also included questions in our structured interviews regarding potential duplication between CSP and other programs. In addition, we interviewed NRCS officials responsible for developing a plan to coordinate USDA’s land retirement and agricultural working land conservation programs to achieve the goals of (1) eliminating redundancy, (2) streamlining program delivery, and (3) improving services provided to agricultural producers. As required in the farm bill, USDA was to have submitted a report by December 31, 2005, to the Senate Committee on Agriculture, Nutrition, and Forestry and the House Committee on Agriculture that describes this plan and the means by which USDA will achieve these goals. As of March 2006, USDA was still preparing this report (USDA officials indicated that the plan and report will be one and the same document). Furthermore, to identify potential duplication, we visited and conducted file reviews at NRCS field offices in two of the watersheds—Lower Chippewa and St. Joseph—that were included in the fiscal year 2004 and fiscal year 2005 sign-ups. We selected these watersheds based on several factors, including (1) their similarity to most of the other 18 watersheds included in both sign-ups in terms of the predominant type of land use (i.e., cropland), (2) the relatively high number of financial assistance contracts provided to producers in these watersheds under CSP and other USDA conservation programs, and (3) the availability of NRCS field staff to meet with us at the time.
In addition, our selection of watersheds reflected a wide variation in the percentage of total payments made to producers in each watershed under Tier III, the highest CSP payment category—41 percent in Lower Chippewa versus 75 percent in St. Joseph. Finally, the Lower Chippewa watershed lies entirely within the state of Wisconsin; in contrast, the St. Joseph watershed straddles three states—Indiana, Michigan, and Ohio—and thus multiple NRCS state offices were involved in implementing CSP in this watershed (Indiana was the lead office). In each watershed, we visited two NRCS county offices to review the contract files of producers who received a CSP payment in fiscal year 2004 and an Environmental Quality Incentives Program (EQIP) payment or a Wildlife Habitat Incentives Program (WHIP) payment in one or more years during fiscal years 2002 through 2004. We chose the offices visited because they had made relatively large numbers of payments under these programs. We also obtained and analyzed data from NRCS’s Program Contracts System (ProTracts) electronic database regarding calendar year 2004 payments made under CSP and two other USDA conservation programs— EQIP and WHIP. In particular, we compared payment information for CSP and EQIP to identify producers who received payments under both programs that year. We then did further analysis to determine cases in which it appeared a producer had received payments under both programs for the same conservation practice or activity, on the same land, in the same year. We discussed payments received by 11 producers with NRCS officials to determine the actual extent of duplication, if any. We selected these 11 producers from a cross section of states—Nebraska, Oklahoma, Oregon, and South Carolina. In general, these states had the highest number of cases of potential duplication. In each state, we contacted NRCS field office officials in the county with the largest number of cases to discuss whether the payments were duplicates. 
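The cross-program comparison described above amounts to a record-matching pass over two payment datasets. The sketch below illustrates the idea under stated assumptions: the field names (`producer`, `practice`, `tract`, `year`) and the sample records are hypothetical and do not reflect the actual ProTracts schema or any real payment data.

```python
# Illustrative sketch of the duplicate-payment screen: flag cases where a
# producer appears to have been paid under both CSP and EQIP for the same
# practice, on the same land, in the same year. Field names and data are
# hypothetical, not the actual ProTracts schema.

def potential_duplicates(csp_payments, eqip_payments):
    """Return (CSP, EQIP) payment pairs matching on producer, practice, tract, and year."""
    eqip_index = {}
    for p in eqip_payments:
        key = (p["producer"], p["practice"], p["tract"], p["year"])
        eqip_index.setdefault(key, []).append(p)
    matches = []
    for c in csp_payments:
        key = (c["producer"], c["practice"], c["tract"], c["year"])
        for e in eqip_index.get(key, []):
            matches.append((c, e))
    return matches

csp = [
    {"producer": "A", "practice": "nutrient mgmt", "tract": "T1", "year": 2004, "amount": 1200},
    {"producer": "B", "practice": "cover crop", "tract": "T9", "year": 2004, "amount": 800},
]
eqip = [
    {"producer": "A", "practice": "nutrient mgmt", "tract": "T1", "year": 2004, "amount": 950},
    {"producer": "B", "practice": "cover crop", "tract": "T2", "year": 2004, "amount": 500},
]

flagged = potential_duplicates(csp, eqip)
# Producer A matches on all four fields; producer B does not (different tract).
```

As in GAO's review, a match on all four fields only indicates *potential* duplication; confirming an actual duplicate payment still requires reviewing the underlying contract files.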
Our choice of these producers, states, and counties was not intended to be representative for projection purposes. Finally, we interviewed officials and reviewed documentation they provided at farm, conservation, and environmental organizations, CRS, the U.S. Fish and Wildlife Service in the Department of the Interior, and the U.S. Environmental Protection Agency; conducted a literature search to identify relevant studies and articles; and attended a CSP training workshop at USDA headquarters. We conducted our review between February 2005 and February 2006 in accordance with generally accepted government auditing standards. We conducted a data reliability assessment of the fiscal years 2004 and 2005 payments data for CSP, EQIP, and WHIP and determined the data to be sufficiently reliable. For the data obtained from the other sources noted above, we did not independently verify the data, but we discussed with these sources, as appropriate, the measures they take to ensure the accuracy of these data. For the purposes for which the data were used in this report, these measures seemed reasonable. Tables 8 through 14 summarize Conservation Security Program (CSP) payments information for fiscal year 2004. Tables 15 through 18 summarize similar information for fiscal year 2005, including payments for new and existing (2004) contracts. Table 19 summarizes information on the acres enrolled in CSP by land type during these fiscal years. Although the farm bill called for the establishment of CSP in fiscal year 2003, the Natural Resources Conservation Service (NRCS) held the first program sign-up in fiscal year 2004, after developing program regulations, training its field staff, and introducing the program to producers and stakeholder groups. Information on CSP payments for fiscal year 2006 was not available at the time of our review. To develop tables 8 through 18, we used payments information from NRCS’s Program Contracts System (ProTracts). 
Among other things, ProTracts is used to manage and monitor the CSP application, contracting, and payment process. ProTracts is a feeder system into the U.S. Department of Agriculture’s (USDA) Foundation Financial Information System (Foundation System), the department’s official accounting system for making payments for current and prior year programs. The Foundation System records obligations and payments made and is the source of data used in financial statements for all USDA programs. In general, the payments data in the Foundation System are considered official, whereas payments data in ProTracts are considered preliminary until they have been checked, corrected, and migrated to the Foundation System. For this reason, payments data taken from these systems may not be consistent. However, in order to separate CSP payments data by tier, payment type, and enhancement type, it was necessary to use data from ProTracts; this level of detail or disaggregation was not possible using data from the Foundation System. [Figure: CSP eligibility and enrollment process. The flowchart steps producers through the following decisions: Is the majority of the agricultural operation in a watershed announced for the sign-up? (If not, wait for a future sign-up.) Is the land eligible, and does the producer share in the risk of producing crops or livestock and in any entitlement to share, along with other sign-up-specific criteria? Is the producer in compliance with highly erodible land and swampbuster requirements, or willing to meet compliance requirements? Has the producer completed a self-assessment, including a benchmark inventory, or is the producer willing to complete one? Minimum tier requirements are then determined: Tier III if all applicable resource concerns are addressed to the minimum level of treatment on the entire agricultural operation; Tier II if all soil quality and water quality concerns are addressed to the minimum level of treatment on the entire operation; and Tier I if all soil quality and water quality concerns are addressed to the minimum level of treatment on part of the operation, or the producer is willing to address soil and water quality on part of the operation. Qualifying producers are placed in an enrollment category based on the benchmark inventory; NRCS then develops a conservation security plan and contract, determines the program payment, and implements the contract. Producers who do not qualify are referred to other appropriate conservation programs or wait for a future sign-up once resource concerns are addressed.] In addition to the Conservation Security Program (CSP), the U.S. Department of Agriculture (USDA) manages a number of other conservation programs. In general, these other programs (1) help farmers and ranchers address existing environmental problems by paying for a portion of the cost of needed conservation practices or structures; (2) keep land in farming or grazing by purchasing rights to part of the land, such as development rights through easements; or (3) idle or retire environmentally sensitive land, such as highly erodible land or wetlands, from production. In contrast, CSP is focused on operations that already have addressed environmental problems and have achieved a high level of environmental stewardship, while keeping the land in production. Producers cannot receive CSP payments and payments under another USDA conservation program for the same conservation practices or activities on the same land. However, producers can use assistance received under other USDA programs, as well as assistance received under state or private conservation programs, to arrive at the high level of stewardship necessary to participate in CSP. Table 20 describes other key USDA conservation programs. Budget scoring or scorekeeping is the process of estimating the budgetary effects of pending and enacted legislation and comparing them with a baseline, such as a budget resolution, or to any limits that may be set in law. Scorekeeping tracks data such as budget authority, receipts, outlays, the surplus or deficit, and the public debt limit. The process allows Congress to compare the cost of proposed budget policy changes with existing law in order to enforce spending and revenue levels agreed upon in the budget resolution. 
The congressional budget committees and the Congressional Budget Office (CBO) score legislation in relation to levels set by Congress in concurrent budget resolutions. The Office of Management and Budget (OMB) also scores legislation for the purposes of developing the President’s annual budget proposal, executing the budget, and providing the President with estimates of the budgetary impacts of pending legislation awaiting the President’s signature (or veto). Budget scorekeeping guidelines are used by the congressional budget committees, CBO, and OMB (the “scorekeepers”) in measuring compliance with the Congressional Budget and Impoundment Control Act of 1974, as amended, and the Balanced Budget and Emergency Deficit Control Act of 1985, as amended. The purpose of the guidelines is to ensure that the scorekeepers measure the effects of legislation on the deficit consistent with established scorekeeping conventions and with specific legislative requirements regarding discretionary spending, direct spending, and receipts. These guidelines are reviewed annually by the scorekeepers and revised as necessary to adhere to that purpose. The guidelines are contained in Appendix A of OMB Circular No. A-11. In general, CBO prepares cost estimates for all bills other than appropriations bills when they are reported by a full committee of either House of Congress. However, CBO also prepares cost estimates for proposals at other stages of the legislative process at the request of a committee of jurisdiction, a budget committee, or the congressional leadership. For example, CBO may prepare cost estimates for a series of bills to be considered by a subcommittee, including draft bills not yet introduced, or for amendments to be considered during committee markups. Similarly, it may prepare cost estimates for floor amendments and for bills that pass one or both Houses. 
For appropriations bills, CBO provides estimates of outlays that would result from the provision of budget authority. CBO also provides the budget and appropriation committees with frequent tabulations of congressional action on both spending and revenue bills so that Congress can know whether it is acting within the limits set by the annual budget resolution. After CBO cost estimates have been transmitted, they may be revised to correct errors or to incorporate new or updated information. OMB also may revise its estimates for similar reasons. The Director, CBO, transmits by letter all formal budget and mandate cost estimates of legislative proposals and all requested analyses. Scorekeeping data published by CBO include, but are not limited to, status reports on the effects of congressional actions and comparisons of these actions to targets and ceilings set by Congress in the budget resolutions. Weekly status reports are published in the Congressional Record for the Senate during the weeks it is in session, and status reports for the House of Representatives are published at least monthly when the House is in session. CBO is also required to produce periodic scorekeeping reports on at least a monthly basis pursuant to section 308(b) of the Congressional Budget and Impoundment Control Act of 1974, as amended. OMB scorekeeping data generally are not published. [Table: timeline of legislative actions capping and uncapping CSP funding for fiscal years 2002 through 2016, including multiyear funding caps and 1-year caps on salaries and personnel expenses. Sources: Farm Security and Rural Investment Act of 2002, Pub. L. No. 107-171, 116 Stat. 134 (2002); Agricultural Assistance Act of 2003, Pub. L. No. 108-7, tit. II, § 216, Stat. 538, 546 (2003).] Federal agencies have been required for over 20 years to establish and assess internal controls in their programs and financial management activities pursuant to the Federal Managers’ Financial Integrity Act of 1982 and other legislative and administrative initiatives. Furthermore, the Improper Payments Information Act of 2002 requires each agency to annually review all programs and activities the agency administers and to identify those that may be susceptible to significant improper payments. To ensure that programs are managed with integrity and that program operations comply with these requirements, the U.S. Department of Agriculture (USDA) issued a departmental regulation, Management Accountability and Control, and a related departmental manual, Management Control Manual. The departmental regulation establishes departmentwide policy for internal controls. 
The manual discusses specific controls, including separation of duties, reconciliation of records from two sources, reconciliation of records with physical inventories, limiting access (e.g., authorizations on data systems), providing supervision, documentation of processes and procedures, written delegations of authority, analyzing and reporting on risk, and periodic reviews of performance. As a USDA agency, the Natural Resources Conservation Service (NRCS) is to follow the internal control guidance in this regulation and manual. NRCS also has established agency-specific guidance on internal controls, found principally in its General Manual and its Conservation Programs Manual. The General Manual establishes NRCS policy for effectively guarding against waste, loss, and misuse of program resources. Specifically, it outlines the process through which the agency complies with governmentwide requirements for internal management control. The Conservation Programs Manual provides specific policy, guidance, and operating procedures for implementing the Conservation Security Program (CSP) (and other programs). For example, the manual sets procedures for key program controls such as the documentation required from an applicant and the conduct of CSP eligibility determinations and contract compliance reviews. The manual also discusses specific responsibilities for program implementation as they relate to internal controls. For example, within each state, the NRCS State Conservationist is responsible for ensuring compliance with internal controls, including separation of duties related to contract approval and payment certification. In addition, this official is responsible for designating in writing the authorized NRCS representative for obligating program funds, disbursing payments, and acting as Contracting Officer. USDA’s Office of Inspector General (IG) issued a report in January 2005 that examined NRCS’s compliance with the Improper Payments Information Act of 2002. 
Among other things, the IG found that NRCS had not taken sufficient action to comply with the act and related guidance set forth by OMB and USDA’s Office of the Chief Financial Officer. In summary, the IG found that NRCS had not identified the internal control measures in place to preclude, or detect in a timely manner, improper payments, nor did it know whether the controls were in operation. In addition, the IG noted that NRCS had not conducted adequate risk assessments of potential improper payments for the programs it administers, including CSP. According to the IG, NRCS officials stated that risk assessments were not completed because they did not have the time or personnel to perform them. These officials also said that they misinterpreted the guidance regarding what they needed to do to comply with the act. Accordingly, the IG recommended that NRCS conduct more thorough risk assessments of all programs with outlays of $10 million or more (including CSP) and develop an estimated error rate by (1) developing criteria for identifying program vulnerabilities, (2) determining acceptable risk levels, (3) ranking the risk factors, and (4) establishing controls to ensure their timely and accurate completion. NRCS agreed with the IG’s recommendations and indicated that it would take corrective actions by April 30, 2005. In February 2006, IG officials indicated that the IG had not assessed the adequacy of these actions, including NRCS’s preparation of risk assessments. In a January 2004 report, GAO found that significant, pervasive information security control weaknesses existed at USDA, including serious access control weaknesses, as well as other information security weaknesses. Specifically, USDA had not adequately protected network boundaries, sufficiently controlled network access, appropriately limited mainframe access, or fully implemented a comprehensive program to monitor access activity. 
In addition, weaknesses in other information security controls, including physical security, personnel controls, system software, application software, and service continuity, further increase the risk to USDA’s information systems. As a result, sensitive data—including information relating to the privacy of U.S. citizens, payroll and financial transactions, proprietary information, agricultural production and marketing estimates, and mission critical data—are at increased risk of unauthorized disclosure, modification, or loss, possibly without being detected. Accordingly, GAO recommended that USDA establish a comprehensive security management program, including (1) ensuring that security management positions have the authority and cooperation of agency management to effectively implement and manage security programs, (2) completing periodic risk assessments for systems, (3) completing information security plans and establishing policies and procedures on the basis of identified risks, (4) ensuring that employees complete security awareness training, (5) implementing ongoing tests and evaluations of controls, (6) completing system certifications and accreditations, and (7) developing corrective action plans that clearly tie to identified weaknesses. USDA concurred, but as of January 2006, USDA had not yet fully implemented these recommendations. Furthermore, USDA’s fiscal year 2005 performance and accountability report discusses material weaknesses related to USDA’s financial and accounting systems and information security program. Among the material weaknesses identified in the report are NRCS’s application controls for its Program Contracts System (ProTracts). 
To address this weakness, NRCS plans to take a number of actions in fiscal year 2006, including (1) documenting the ProTracts change control process; (2) documenting changes to the ProTracts software; (3) establishing a ProTracts testing process; (4) establishing a formally approved document for the ProTracts payment specifications; and (5) establishing a schedule for the systematic reconciliation of ProTracts appropriations, obligations, and payments with amounts recorded in the department’s Foundation Financial Information System. The following are GAO’s comments on the letter from the U.S. Department of Agriculture dated April 10, 2006.
The Conservation Security Program (CSP)—called for in the 2002 farm bill and administered by the U.S. Department of Agriculture's (USDA) Natural Resources Conservation Service (NRCS)—provides financial assistance to producers to reward past conservation actions and to encourage further conservation stewardship. CSP payments may be made for structural or land management practices, such as strip cropping to reduce erosion. CSP has raised concerns among some stakeholders because its cost estimates generally have increased since the 2002 farm bill's enactment. For example, the Congressional Budget Office's estimate increased from $2 billion in 2002 to $8.9 billion in 2004. GAO determined (1) why CSP cost estimates generally increased; (2) what authority USDA has to control costs and what cost control measures exist; and (3) what measures exist to prevent duplication between CSP and other USDA conservation programs and what duplication, if any, has occurred. Various factors explain why estimates of CSP costs generally increased since the 2002 farm bill's enactment. Most important, little information was available regarding how this program would be implemented at the time of its inception in 2002. As more information became available, cost estimates rose. In addition, the time frames on which the estimates were based changed. While the initial estimates covered years in which the program was expected to be nonoperational or minimally operational, subsequent estimates did not include these years. The farm bill provides USDA general authority to control CSP costs, including authority to establish criteria that enable it to control program participation and payments and, therefore, CSP costs. For example, NRCS restricts participation by limiting program enrollment each year to producers in specified, priority watersheds. NRCS also has established certain CSP payment limits at levels below the maximum allowed by the statute. 
However, efforts to control CSP spending could be improved by addressing weaknesses in internal controls and inconsistencies in the wildlife habitat assessment criteria that NRCS state offices use, in part, to determine producer eligibility for the highest CSP payment level. Inconsistencies in these criteria also may reduce CSP's conservation benefits. The farm bill prohibits duplicate payments for the same practice on the same land made through CSP and another USDA conservation program. Various other farm bill provisions also reduce the potential for duplication. For example, as called for under the farm bill, CSP may reward producers for conservation actions they have already taken, whereas other programs generally provide assistance to encourage new actions or to idle or retire environmentally sensitive land from production. In addition, CSP regulations establish higher minimum eligibility requirements for CSP than for other programs. However, despite these legislative and regulatory provisions, the possibility that producers can receive duplicate payments remains because of similarities in the conservation actions financed through these programs. In addition, NRCS does not have a comprehensive process to preclude or identify such duplicate payments. In reviewing NRCS's payments data, GAO found a number of examples of duplicate payments.
The FHLBank System was established in 1932 and consists of 12 FHLBanks (see fig. 1). Member financial institutions, which typically are commercial banks and thrifts (or savings and loans), cooperatively own each of the 12 FHLBanks. To become a member of its local FHLBank, a financial institution must maintain an investment in the capital stock of the FHLBank in an amount sufficient to satisfy the minimum investment required for that institution in accordance with the FHLBank’s capital plan. In addition to the ability to obtain advances, other benefits of FHLBank membership for financial institutions include earning dividends on their capital investments and access to various products and services, such as letters of credit and payment services. As of December 31, 2009, more than 8,000 financial institutions with approximately $13 trillion in assets were members of the FHLBank System. The FHLBank System’s total outstanding advances stood at more than $631 billion. As established by statute and FHFA regulations, the FHLBanks are required to develop and implement collateral standards and other policies to mitigate the risk that member institutions may default on outstanding advances. To help do so, the FHLBanks generally apply a blanket lien on all or specific categories of a member’s assets to secure the collateral underlying the advance. In general, a blanket lien agreement is intended to fully protect the FHLBank by securing its right to take and possibly sell any or all of a member’s assets in the event it fails or defaults on its outstanding advances. 
In limited circumstances, FHLBanks may permit or require their members to pledge collateral under (1) a listing (specific detail) lien agreement in which the members are to report detailed information, such as the loan amount, payments, maturity date, and interest rate for the loans pledged as collateral; or (2) a delivery lien agreement, in which members are required to deliver the collateral to the FHLBank or an approved safekeeping facility. From a member’s perspective, the benefits of listing collateral in lieu of a blanket lien agreement can include better pricing terms. Some FHLBanks may require members to list or deliver collateral to better protect their financial interests in instances in which a member is in danger of failure or its financial condition begins to deteriorate. FHLBanks also seek to manage risk and mitigate potential losses by applying varying haircuts, or discounts, to collateral pledged to secure advances. To illustrate: suppose that an FHLBank member sought to pledge a single-family residential mortgage loan portfolio with a value of $100 million to secure an advance from its district FHLBank. If the FHLBank applied a haircut of 25 percent to such collateral, the member would generally be able to secure advances of up to $75 million subject to other risk-management policies the FHLBank may have established. In general, the FHLBanks’ haircut levels tend to increase based on the perceived risks associated with the collateral being pledged. As described in this report, single-family mortgages and other forms of traditional collateral generally are perceived as representing less risk than alternative forms of collateral, such as agricultural and small business loans. 
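The haircut arithmetic above can be sketched in a few lines of code. The collateral categories and haircut levels below are illustrative assumptions for the sketch, not any particular FHLBank's actual schedule; the only figure taken from the text is the $100 million portfolio at a 25 percent haircut supporting up to $75 million in advances.

```python
# Sketch of FHLBank haircut arithmetic: the lendable (borrowing) value of
# pledged collateral is its value reduced by a haircut that rises with
# perceived risk. Haircut levels here are illustrative assumptions only.

ILLUSTRATIVE_HAIRCUTS = {
    "single_family_mortgages": 0.25,      # traditional collateral: lower haircut
    "investment_grade_securities": 0.10,
    "agricultural_loans": 0.40,           # alternative collateral: higher haircut
    "small_business_loans": 0.45,
}

def lendable_value(pledged):
    """Total advance capacity for a dict of {collateral_type: pledged_value}."""
    return sum(
        value * (1.0 - ILLUSTRATIVE_HAIRCUTS[ctype])
        for ctype, value in pledged.items()
    )

# The report's example: $100 million of single-family mortgages at a
# 25 percent haircut supports up to $75 million in advances.
capacity = lendable_value({"single_family_mortgages": 100_000_000})
# capacity == 75_000_000.0
```

This also mirrors the blanket-lien practice described in the report: summing haircut-adjusted values across each form of eligible collateral on a member's books yields the member's total borrowing capacity, subject to any other risk-management policies the FHLBank has established.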
Since FHLBanks generally issue advances under blanket lien agreements, they may calculate a member’s total borrowing capacity by applying varying haircuts to each form of eligible collateral on the member’s books and communicating this information to the member on a periodic basis. In the event that a member institution fails, FHLBanks have a “first lien” on its assets. That is, they have priority over all other creditors, including FDIC, to obtain the collateral necessary to protect against losses on their outstanding advances. In a typical bank or thrift failure, FDIC pays off outstanding FHLBank advances in full and takes possession of the collateral on the institution’s books to help offset its losses. According to the FHLBank Office of Finance, in the 78-year history of the FHLBank System, no FHLBank has ever suffered a credit loss on an advance. The FHLBanks’ haircuts and other risk management policies are intended to mitigate potential losses; however, they also may limit some members’ interest in obtaining advances. For example, an FHLBank member may perceive that the level of haircuts applied to the collateral it pledges may unduly restrict the amount of financing it would like to obtain through advances. Administrative and other costs associated with obtaining advances also may factor into an FHLBank member’s decision-making process. For example, FHLBank officials conduct on-site inspections to assess members’ collateral management practices or require members to have such practices independently audited. For some FHLBank members, the costs of such administrative procedures may outweigh the potential benefits of obtaining advances, particularly if they view the haircuts applied to the collateral as unreasonable. While the FHLBank System’s primary mission over the years has been to promote housing finance generally through its advance business, it is also required by statute and regulation to meet other specific mission requirements. 
For example, FIRREA authorizes both the Affordable Housing Program (AHP) and the Community Investment Program (CIP) to assist the FHLBanks’ affordable housing mission. Under AHP, each FHLBank is required to set aside 10 percent of its previous year’s earnings to fund interest rate subsidies on advances to members engaged in lending for long-term, low- and moderate-income, owner-occupied, and affordable rental housing at subsidized interest rates. In using the advances, the FHLBank members are to give priority to qualified projects, such as the purchase of homes for low- or moderate-income families or to purchase or rehabilitate government-owned housing. FIRREA also established CIP, which requires FHLBanks to provide flexible advance terms for members to undertake community-oriented mortgage lending. CIP advances may be made at the FHLBank’s cost of funds (for advances with similar maturities) plus the cost of administrative fees. Moreover, FIRREA requires FHFB (now FHFA) to establish standards of community investment or service for members of FHLBanks to maintain continued access to long-term advances. These standards include the development of a Targeted Community Lending Plan (which is designed to help the FHLBanks assess the credit needs of the communities that they serve) and quantitative lending goals that address identified credit needs and marketing opportunities in each FHLBank’s district. FHFA has safety and soundness and mission oversight for the FHLBank System and Fannie Mae and Freddie Mac. For example, FHFA is responsible for ensuring that the FHLBanks establish appropriate collateral management policies and practices to mitigate the risks associated with their advance business. From a mission standpoint, FHFA also is responsible for ensuring that the FHLBanks are in compliance with statutes and regulations pertaining to the AHP and CIP programs. 
While GLBA does not establish specific requirements for alternative collateral, its legislative history suggests that the FHLBanks and FHFB, and by extension FHFA, should prioritize the FHLBank System’s economic development activities through the use of alternative collateral. To carry out its responsibilities, FHFA may issue regulations, establish capital standards, and conduct on-site safety and soundness or mission-related examinations. FHFA also may take enforcement actions, such as issuing cease and desist orders, and may place an FHLBank, Fannie Mae, or Freddie Mac into conservatorship or receivership if they become undercapitalized or critically undercapitalized. Officials from the 12 FHLBanks cited several factors to help explain the minimal use of alternative collateral to secure advances in the FHLBank System. These factors include a potential lack of interest among many CFI members; the view that many CFIs belong to the FHLBank System primarily to have access to letters of credit and other products or to obtain a backup source of liquidity; and that many CFIs may have sufficient holdings of traditional collateral to secure advances. Moreover, due to the potential risks associated with alternative collateral, the 10 FHLBanks that accept it have established risk-management strategies to mitigate potential losses, which also may limit its use. In particular, the FHLBanks generally have applied higher haircuts to alternative collateral than to any other type of collateral used to secure advances. Officials from many of the 30 CFIs we interviewed said that they valued their relationships with their local FHLBanks and the products and services provided. However, officials from half of these CFIs expressed concerns about the level of the haircuts applied to alternative collateral or other FHLBank risk-management strategies. In some cases, they said the haircuts or policies limited their willingness to pledge such collateral to obtain advances. 
According to representatives from the 12 FHLBanks, they have ongoing member outreach programs that are intended, in part, to address members’ credit and collateral needs and the various products available to them. The FHLBank officials said that outreach activities can include telephone calls or visits to members to discuss the availability of alternative collateral and its potential use by CFI members. Some FHLBanks also have annual meetings, online product tutorials, and electronic bulletins that provide information about alternative collateral. While officials from the 12 FHLBanks said they had outreach programs in place, some officials cited the significant differences in the membership characteristics across the FHLBank System as affecting the use of alternative collateral (see table 1). For example, CFIs represent more than 80 percent of the membership and about 30 percent of the assets of the FHLBanks in Dallas and Topeka, and many of these CFI members focus on agricultural lending due to its prominence in the regional economies. While CFI assets represented a relatively small proportion, or 13 percent, of the total assets of members in the Des Moines FHLBank district, officials said that alternative collateral was of significant interest to their members due to the prominence of agricultural-related businesses in the district. In contrast, CFI assets represented a relatively small portion, or less than 10 percent, of all membership assets in the FHLBank districts of Atlanta and New York, neither of which have submitted new business activity notices to FHFA requesting approval to accept alternative collateral; and the FHLBank of Cincinnati reported no alternative collateral activity at year-end 2008. Officials from these three FHLBanks said that their memberships had not expressed an interest in pledging alternative collateral. 
Similarly, although CFI membership and assets also were relatively significant in the Chicago FHLBank district, officials said that their membership had not expressed much interest in using alternative collateral to secure advances. One FHLBank official noted that, given the cooperative nature of the FHLBank System, membership interest often drove the decision to make certain products and services available. Officials from several FHLBanks also said that CFIs often do not take out advances from their local FHLBank and, therefore, have no reason to use alternative collateral. Several FHLBank officials said that, rather than taking out advances, many CFIs derive other benefits from their membership, particularly letters of credit and other services. The officials added that CFIs also may belong to the FHLBank System to have a backup source of liquidity in case other sources, including customer deposits or the federal funds market, become unavailable or prohibitively expensive. According to some FHLBank officials, many CFIs may have sufficient traditional collateral, such as single-family mortgages and investment-grade securities, to secure advances. Officials at the FHLBanks of Boston, Cincinnati, and Pittsburgh said that they reported no or low levels of alternative collateral securing advances at year-end 2008, in part, because their members had sufficient levels of other eligible collateral. Atlanta and New York FHLBank officials said that they conducted regular analyses to determine whether any banks in their membership were collaterally constrained and, therefore, would need alternative collateral to obtain an advance. Officials at these banks said that, since 2000, their annual analyses have determined that alternative collateral was not needed among their membership. 
Our analysis of FHFA, FDIC, and SBA’s Office of Advocacy data found that while most CFIs may have sufficient traditional sources of collateral to secure advances, a considerable minority of CFIs with significant holdings of alternative collateral on their books may face challenges in doing so. For example, we identified 480 CFIs with $47.3 billion in assets, as of December 31, 2009, that met the FDIC’s definition of an agricultural bank (see table 2). The FHLBanks of Des Moines and Topeka had the most agricultural CFI members, and the CFIs in these two districts had the greatest amount of total assets for such lenders. Using SBA’s Office of Advocacy data, we also found 326 CFIs with $102.3 billion in assets, as of September 30, 2009, that were identified as the largest small business lenders (see table 3). The number and assets of small business CFIs appeared to be more evenly distributed across the FHLBank System than those of agricultural CFIs. We interviewed a limited sample of 30 representatives from these agricultural and small business CFIs and discuss their perspectives on FHLBank alternative collateral policies and practices later in this report. FHFA and some FHLBank officials said that alternative collateral generally has been viewed as representing greater risks than single-family mortgages and investment-grade securities. For example, FHFA officials said that it could be difficult to establish a value for agricultural and small business loans because they generally have not been actively traded in secondary markets. In the absence of secondary markets, alternative collateral may be relatively illiquid, which means an FHLBank might face difficulties in selling such underlying collateral if a CFI failed or defaulted on its advance. As described earlier, FDIC generally pays the FHLBank the principal balance of the outstanding advances of failed members and takes possession of the underlying collateral. 
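As a rough sketch of the kind of screen involved in identifying agricultural lenders, the following assumes the FDIC threshold commonly cited for an agricultural bank (agricultural loans exceeding 25 percent of total loans); the records and field layout are hypothetical, not the actual FDIC or FHFA data used in our analysis.

```python
# Hypothetical sketch of screening CFIs for agricultural concentration.
# Assumes FDIC's commonly cited agricultural-bank threshold: agricultural
# loans exceeding 25 percent of total loans. All records are invented.

def is_agricultural_bank(agricultural_loans, total_loans, threshold=0.25):
    """True if agricultural loans exceed the threshold share of total loans."""
    if total_loans <= 0:
        return False
    return agricultural_loans / total_loans > threshold

# (name, agricultural loans, total loans), in millions of dollars
cfis = [
    ("Bank A", 40.0, 120.0),  # one-third agricultural: flagged
    ("Bank B", 10.0, 150.0),  # under 7 percent agricultural: not flagged
]

flagged = [name for name, ag, total in cfis if is_agricultural_bank(ag, total)]
print(flagged)  # ['Bank A']
```

Applied against call-report data, a screen of this form yields the population of collateral-constrained candidates from which an interview sample could be drawn.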
However, FDIC may not always follow this procedure in future bank failures, and the possibility of a CFI defaulting on an advance or failing would put the FHLBank at risk of losses, as it might be unable to sell the alternative collateral in a timely manner. In contrast, FHLBanks generally can more easily estimate values for traditional collateral because mortgages often are pooled into securities and actively traded on secondary markets. In the event a member failed or defaulted on its outstanding advances, the FHLBanks generally would be able to sell the underlying collateral, that is, the mortgages or securities. In a previous report, we commented on the challenges associated with establishing values for small business loans as compared with single-family mortgage loans. Unlike mortgages, small business loans exhibit greater heterogeneity and complexity. For example, although mortgage lending has become more complicated in recent years, the type of financing that prospective homebuyers seek remains fairly standardized (two general categories—fixed- or variable-rate loans) and the collateral securing mortgages, generally single-family residences, is relatively easy to understand and market. In contrast, the types of financing that small businesses typically seek can range from revolving lines of credit to term loans, and the collateral pledged against such loans also may vary widely in risk and marketability (from relatively secure real estate to less secure inventory). The 10 FHLBanks that accept alternative collateral have adopted risk-management policies intended to mitigate the perceived risks of such collateral, but these policies also may limit its appeal to CFIs. These FHLBanks generally apply higher haircuts to alternative collateral than to any other type of collateral that may be used to secure advances. 
As shown in table 4, the haircuts, or range of haircuts, that the FHLBanks apply to alternative collateral generally have been higher than for traditional forms of collateral, such as single-family mortgages or commercial real estate loans. The maximum haircut applied by an FHLBank to alternative collateral is 80 percent under a blanket lien policy, which generally means that the member could take out an advance of up to 20 percent of the value of such collateral, whereas the maximum haircut applied to commercial real estate collateral is 67 percent. Over the years, commercial real estate has been viewed as a potentially risky type of asset that has resulted in significant bank losses and numerous bank failures. The haircuts that the FHLBanks apply to alternative collateral can vary significantly. For example, the haircut on small business loans ranges from 40 to 80 percent. In contrast, two FHLBanks apply a uniform 50 percent haircut to all three types of alternative collateral (see table 4). We discuss the extent to which the FHLBanks have an analytical basis for the haircuts applied to alternative collateral later in this report. Some FHLBanks also maintain other collateral policies designed to mitigate the perceived risks associated with alternative collateral. For example, the FHLBanks have established borrowing capacity limits to further minimize the risks associated with making advances and generally apply them to all members. However, some FHLBanks have established more stringent borrowing limits for members pledging alternative collateral. For example, in addition to applying haircuts of more than 50 percent, one FHLBank limits the amount of alternative collateral that a member may pledge to 20 percent of the member’s total assets. One FHLBank sets the limit at 10 percent, in addition to its 50 percent haircut. In contrast, most other FHLBanks’ policies set borrowing capacity rates from 30 to 55 percent for members. 
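The haircut and borrowing-capacity arithmetic described above can be sketched as follows. This is an illustration of the general mechanics only, not any FHLBank's actual methodology, and the dollar figures are hypothetical.

```python
# Illustrative only: the mechanics of a collateral haircut combined with a
# borrowing-capacity limit. Figures are hypothetical, not any FHLBank's policy.

def max_advance(collateral_value, haircut_pct, total_assets, borrowing_cap_pct):
    """Maximum advance a member could obtain, in the same units as the inputs.

    A haircut of H percent supports an advance of up to (100 - H) percent of
    the pledged collateral's value; a borrowing-capacity limit caps the
    advance at a share of the member's total assets. The lower figure governs.
    """
    collateral_capacity = collateral_value * (100 - haircut_pct) / 100
    asset_capacity = total_assets * borrowing_cap_pct / 100
    return min(collateral_capacity, asset_capacity)

# An 80 percent haircut: $10 million of pledged collateral supports at most
# a $2 million advance (20 percent of its value).
print(max_advance(10.0, 80, 100.0, 55))  # 2.0

# A 50 percent haircut with a 10-percent-of-assets borrowing limit: for a
# member with $40 million in total assets, the asset limit binds.
print(max_advance(10.0, 50, 40.0, 10))  # 4.0
```

As the second case shows, a stringent borrowing-capacity limit can constrain a member even when the haircut alone would permit a larger advance, which is consistent with the CFI concerns discussed later in this report.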
While many CFIs may have significant traditional collateral resources to pledge to secure advances, we conducted interviews with 30 CFIs that could be constrained in their ability to obtain FHLBank advances due to their significant involvement in agricultural or small business lending. Most of the CFIs in our sample said that they valued the products and services they received from their local FHLBank. Officials from many of the CFIs said that FHLBank membership provided their institutions with access to a stable and relatively low-cost source of liquidity or provided access to a key source of backup liquidity, among other things. Officials from several CFIs that have significant concentrations in agricultural loans said that their ability to pledge such loans as collateral helped them to obtain advances and thereby expand their lending activities, because the advances generally allowed CFIs to provide borrowers with long-term financing on favorable terms. To some degree, the results from our interviews with officials from the 30 CFIs were consistent with rationales offered by FHLBank and FHFA officials about the minimal use of alternative collateral in the FHLBank System. Officials from 15 of the 30 CFIs we interviewed said that they had pledged alternative collateral to help secure FHLBank advances and officials from all but one of these reported having advances outstanding (see table 5). Officials from the other 15 CFIs said they had not used alternative collateral to secure an advance because, for example, they generally had sufficient traditional collateral to secure advances or sufficient levels of other sources of liquidity, such as customer deposits, to finance their lending activities (see table 5). As discussed previously, FHLBank officials have stated that readily available traditional collateral is one reason that many CFIs have not pledged alternative collateral to obtain advances. 
However, officials from 15 of the 30 CFIs we interviewed expressed concern about the haircuts applied to alternative collateral or other FHLBank policies that may limit its appeal and use. Of these 15 CFIs, officials from 11 specifically expressed concern about the level of haircuts. These officials, some of whom had not yet pledged alternative collateral to secure an advance, said that their local FHLBank’s large haircuts were a factor in their banks’ decisions not to pledge alternative collateral. Some of the officials also said that their local FHLBank’s haircuts were not consistent with historical losses on small business, small farm, or small agribusiness loans. Officials from 4 of the 15 CFIs expressed concern about other FHLBank policies unrelated to haircuts, including limitations on the types of alternative collateral accepted by some FHLBanks and limits on borrowing that some FHLBanks apply to alternative collateral. For example, an official from an agricultural CFI with $56 million in total assets said that its local FHLBank has a policy that limits the amount of an advance a CFI member could obtain using alternative collateral to 10 percent of total assets. The official characterized the policy as highly restrictive, particularly for a small agricultural lender. The FHLBank to which this lender belongs permits non-CFI members using traditional collateral to secure an advance to borrow up to 35 percent of their total assets. While FHFB held a conference in 2005 on the use of alternative collateral, which may have focused the FHLBanks’ attention on the issue, regulatory oversight of the FHLBanks’ policies and practices for such collateral, from a mission standpoint, has been limited. For example, FHFA examiners have not been routinely directed to assess the FHLBanks’ analytical basis for the haircuts on alternative collateral, although they are directed to do so for traditional forms of collateral. 
In the absence of regulatory oversight, the FHLBanks have exercised wide discretion in establishing policies and practices pertaining to alternative collateral over the years. Although the FHLBanks may view these policies and practices as necessary to protect their financial soundness, our review also indicates that many FHLBanks have not documented the analytical basis for such policies, including the basis for the haircut levels. Available FHLBank documentation suggests that some haircuts applied to alternative collateral may need to be lowered and others raised. Moreover, a majority of the FHLBanks have not established, in their strategic business plans, quantitative performance goals for products related to agricultural and small business lending, which could include alternative collateral, as required by agency regulations. Additionally, the FHLBanks are not required to identify and address agricultural and small business financing needs in their communities, including potential uses for alternative collateral, through a process of market analysis and consultations with stakeholders, as they are required to do under FHFA regulations for their Targeted Community Lending Plans, which agency officials said largely pertain to the AHP and CIP programs. FHFA officials said they have not focused oversight efforts on alternative collateral policies and practices because its minimal use within the FHLBank System does not represent a safety and soundness concern. But without more proactive oversight by FHFA from a mission standpoint, the appropriateness of FHLBank alternative collateral policies may not be clear. While FHFA examination guidance does not require reviews of FHLBank alternative collateral policies and practices, it does include procedures related to general collateral management policies and practices. 
For example, examiners are expected to assess whether each FHLBank has addressed appropriate levels of collateralization, including valuation and collateral haircuts. According to FHFA officials, at every examination an examiner will review documentation of the FHLBanks’ general collateral valuation and haircut analyses, and any available underlying financial models. Our review of FHFB and FHFA examinations of each of the 12 FHLBanks over the past three examination cycles confirmed that examiners did not address the FHLBanks’ alternative collateral management policies and practices. However, consistent with the examination guidance, the examinations did include analysis of the FHLBanks’ general collateral policies and practices. For example, FHFA examiners noted that one FHLBank did not regularly review its collateral haircuts and that the current haircuts had not been validated by well-documented analyses. Examiners also found that another FHLBank disregarded the results of a collateral valuation model to establish haircuts for certain members without sufficient analysis to support the decision. FHFA officials we contacted said that due to mounting concerns about the FHLBanks’ safety and soundness in 2009, the agency conducted a focused review of the FHLBanks’ collateral management practices, including valuation and haircut methodologies. They also noted that they have been monitoring the FHLBanks’ progress in responding to examiners’ recommendations to improve documentation of their general collateral haircut policies. FHFA guidance also includes procedures for assessing the FHLBanks’ compliance with other mission-related programs. Specifically, the examination guidance includes procedures for assessing the FHLBanks’ implementation of the AHP and the CIP programs. As established by FHFA guidance, examiners should assess the effectiveness of these programs at each FHLBank and whether program operations were consistent with the laws and policies that govern them. 
For example, the examination guidance indicates that examiners should evaluate the reasonableness of fees associated with these programs, whether the FHLBank has met its established community lending goals, and the extent to which the projects funded by the programs benefited eligible targeted businesses or households. Our review found that the examinations generally included sections that assessed the FHLBanks’ implementation of AHP and CIP programs. FHFA officials cited several reasons why the agency’s examination procedures and practices did not specifically address alternative collateral. First, they said that the use of alternative collateral was minimal and did not represent a significant safety and soundness concern. Because single-family mortgages, investment-grade securities, and commercial real estate loans represent the vast majority of member assets that are pledged to secure advances, FHFA officials said that they have focused their examination resources on them. They wanted to ensure that the FHLBanks have established adequate policies and procedures for managing such collateral, including the analytical basis for the haircuts applied to it, and the mitigation of potential losses. Furthermore, FHFA officials said important differences between alternative collateral and other mission-related programs, such as the AHP program, explained the differences in the agency’s oversight approaches. They said that FIRREA establishes specific requirements for how the AHP program is to be funded and implemented. For example, the statute establishes the level of annual contribution from each FHLBank to fund their AHP programs as well as minimum requirements for the FHLBanks’ mandated AHP implementation plans. In contrast, FHFA officials said GLBA only authorizes the FHLBanks to accept alternative collateral to secure advances, and does not establish specific requirements for operating the program that could be assessed through examinations. 
While FHFA may prioritize FHLBank safety and soundness concerns and the structure of the AHP and CIP programs may facilitate their oversight from a mission standpoint, FHFA’s, as well as FHFB’s, minimal oversight of alternative collateral may have limited its appeal within the FHLBank System. By not implementing examination procedures that are consistent with its general approaches to monitoring FHLBank collateral practices, FHFA has provided the FHLBanks with wide discretion in adopting policies and practices for alternative collateral. Although the FHLBanks may have adopted policies that they believe are necessary to protect their financial interests while complying with their missions, our work indicates that the analytical bases for these policies generally have not been fully documented. Although federal internal control standards establish that key decisions need to be documented, only one of the two FHLBanks that have not accepted alternative collateral provided a documented basis for its policy. Further, of the 10 FHLBanks that accept alternative collateral, only 3 provided documentation of the basis for the haircuts that they applied to such collateral. Analysis from 2 FHLBanks suggested that their haircuts for all types of alternative collateral were too high, and 1 FHLBank subsequently revised its haircuts downward by an average of about 11 percentage points. The other FHLBank’s analysis suggested that the haircuts it applied to agricultural collateral might be too high, while the haircuts for small business collateral might be too low. Of the 7 FHLBanks that did not provide any documentation of their alternative collateral haircuts, officials from 3 said they have not documented such analysis and the other 4 did not respond to our requests. 
An official from 1 of the FHLBanks that has not documented the basis for its alternative collateral haircuts said the haircuts had been set at a level intended to be “conservative.” As discussed previously, FHFA guidance directs examiners to assess the analytical basis for the haircuts applied to other forms of collateral. We also analyzed FDIC data on the estimated losses on various loan categories in banks that failed or were on the verge of failure between January 2009 and February 2010; the results raise some questions and reinforce the need for further analysis of the risks associated with alternative collateral. Prior to a bank’s failure, FDIC contractors conduct on-site reviews to assess the value (defined as the estimated market value of the loans as a percentage of the outstanding balances) of the assets held by the bank to calculate how much the failure will cost the Deposit Insurance Fund. According to FDIC officials, estimates are made on the value of the loans of such banks, including single-family residential loans, residential and nonresidential construction loans, consumer loans, business loans, and agricultural loans. According to the FDIC data, the estimated value of agricultural loans was higher than the value of any other type of loan reviewed. Our discussions with several agricultural CFIs and reviews of some regulatory and independent reports also suggest that the U.S. agricultural economy has performed somewhat better than the broader economy in recent years, which may explain why such collateral recently may have outperformed other types of loans, as suggested by FDIC data. Furthermore, according to the FDIC’s asset valuation estimates, while the commercial and industrial loans category (which includes small business loans) had a lower estimated value than the agricultural, consumer, and single-family mortgage loan categories, it had a higher estimated value than the nonresidential and residential construction loan categories. 
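The valuation ratio FDIC contractors estimate, as described above, is simply the estimated market value of a loan category expressed as a percentage of its outstanding balance. The sketch below uses invented figures chosen only to mirror the ordering described in the text; they are not FDIC estimates.

```python
# Sketch of FDIC-style asset valuation ratios. All figures are hypothetical
# and chosen only to mirror the ordering described in the text (agricultural
# loans valued highest, construction loans lowest); they are not FDIC data.

def estimated_value_pct(market_value, outstanding_balance):
    """Estimated market value of a loan category as a percent of book balance."""
    return 100.0 * market_value / outstanding_balance

# (estimated market value, outstanding balance), in millions of dollars
portfolio = {
    "agricultural":  (18.0, 20.0),
    "single-family": (32.0, 40.0),
    "construction":  (12.0, 25.0),
}

for category, (mv, balance) in portfolio.items():
    print(f"{category}: {estimated_value_pct(mv, balance):.0f} percent of balance")
```

A category valued near its book balance implies a small loss to the Deposit Insurance Fund upon failure, which is why relatively high agricultural-loan valuations bear on the appropriate haircut for such collateral.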
However, important limitations apply to any analysis of these FDIC data. First, because the data only cover recent bank failures or near failures, they do not provide a historical basis to assess the relative risk of the various loan types. Many banks have failed recently primarily due to substantial losses on residential mortgages and commercial real estate loans, both of which have experienced significant price declines. Second, the analysis does not control for any other factors that may be related to FDIC’s asset valuation estimates, such as the characteristics of the loans made by the banks. Nevertheless, these data raise questions about the FHLBanks’ analytical basis for the haircuts currently applied to alternative collateral and underscore the need for FHFA to routinely review the basis for these policies from a mission standpoint to help ensure that they are not unduly limiting the use of such collateral to secure advances. In 2000, FHFB issued regulations, which remain in effect today, that require each FHLBank’s board of directors to adopt a strategic business plan that describes how the business activities of the FHLBank will achieve its mission. As part of the strategic business plan, FHLBanks must establish quantitative performance goals for products related to multifamily housing, small business, small farm, and small agribusiness lending. Such products could include advances to CFIs that are secured by alternative collateral. As part of its mission oversight responsibilities, FHFA is responsible for ensuring that the FHLBanks comply with these annual goal requirements in establishing their plans. Our review indicates that the strategic business plans of five FHLBanks do not include such goals. While the remaining seven FHLBanks have established goals for alternative collateral lending, three have set those goals at zero. 
The four FHLBank strategic business plans that include the required goals establish annual benchmarks for the number or dollar amount of advances made to members for the purpose of lending to small businesses, small farms, and small agribusinesses. FHFA officials said that lending goals for alternative collateral are not part of its planned review of FHLBanks’ 2010 strategic business plans. In the absence of vigorous oversight and enforcement by FHFA, many FHLBanks may continue to place a low priority on requirements that they establish quantitative annual goals for products related to agricultural and small business lending, which could include advances secured by alternative collateral. According to FHFA officials, the regulation pertaining to the AHP and CIP programs requires the FHLBanks to develop annual Targeted Community Lending Plans to address identified credit needs and market opportunities for targeted community lending in their districts. To develop these plans, FHLBanks are to consult with members, economic development organizations, and others, and establish quantitative community lending goals. FHLBanks also must conduct market research to ascertain their district’s economic development needs and opportunities. The regulator then uses the Targeted Community Lending Plans to help determine the extent to which FHLBanks have achieved their mission to provide community and economic development opportunities in their districts. Although FHFA’s regulation that requires the establishment of Targeted Community Lending Plans may provide a means for the FHLBanks to identify lending and economic development needs within their communities and respond accordingly, it does not specifically require the FHLBanks to analyze small business and agricultural lending needs or opportunities for the use of alternative collateral. According to FHFA officials, this is because the regulation pertains specifically to the AHP and CIP programs. 
Given that FHFA does not require the FHLBanks to include an assessment of opportunities to use alternative collateral to support small business and agricultural lending in their Targeted Community Lending Plans, such plans generally have not addressed these issues. One FHLBank’s Targeted Community Lending Plan—that of the FHLBank of Indianapolis—did discuss issues pertaining to agricultural and small business lending. Specifically, the plan for 2010 stated that the Bank intends to increase its small business, small farm, or small agribusiness lending by 5 percent in the next year. While FHFA officials told us that the regulation that requires the FHLBanks to develop Targeted Community Lending Plans does not pertain to alternative collateral, we note that the general process involved in creating the plans is potentially beneficial in that it calls on the FHLBanks to review relevant information and consult with stakeholders in their communities to identify and address relevant lending needs. A similar process—established through revisions to FHFA’s regulations pertaining to Targeted Community Lending Plans or strategic business plans, or through other measures as appropriate—that would require the FHLBanks to assess agricultural and small business lending needs, as well as opportunities to use alternative collateral, could better focus their attention on potential opportunities and strategies to enhance such financing. GLBA’s inclusion of new types of collateral for CFIs indicates that these types of available collateral should be taken into account when formulating strategies for the FHLBanks’ economic development efforts. However, the regulators’ limited oversight of FHLBank alternative collateral policies and practices over the years has provided the FHLBanks with wide discretion to establish risk-management policies, which, although viewed as necessary to protect against potential losses, may involve a trade-off. 
That is, they may unduly limit the appeal and use of alternative collateral. We have identified several areas of concern. In many cases, the FHLBanks have not substantiated and documented their reasons for not accepting alternative collateral or for applying relatively high haircuts to it. Available FHLBank documentation suggests that some alternative collateral haircuts may be too high; limited FDIC asset valuation estimates indicate that the risks associated with alternative collateral can vary over time; and 15 of the 30 CFI representatives we interviewed expressed concerns about the haircuts applied to such collateral and other risk-management practices, some of whom said such policies and practices limited their willingness to use alternative collateral. In addition, because FHFA has not leveraged its existing examination procedures to include an assessment of the FHLBanks’ alternative collateral policies, the appropriateness of such policies may not be clear. Furthermore, FHFA has not ensured that all FHLBanks establish quantitative goals for products related to agricultural and small business lending, which could include alternative collateral, in their strategic business plans as required by the agency’s regulations. Finally, FHFA has not taken steps, such as revising its regulations pertaining to Targeted Community Development Plans or strategic business plans, or other measures as may be appropriate, to require the FHLBanks to follow a process whereby they conduct market analysis and consult with a range of stakeholders in their communities to identify and address agricultural and small business financing needs, including the use of alternative collateral. We recognize that FHFA has critical responsibilities to help ensure that the FHLBanks operate in a safe and sound manner, and has not focused on alternative collateral because it was not deemed a risk to safety and soundness. 
Nevertheless, the agency also has an obligation to take reasonable steps to help ensure that the FHLBank System is achieving the missions for which it was established, including economic development through the use of alternative collateral. We recommend that the Acting Director of FHFA take the following actions to help ensure that the FHLBanks’ economic development mission-related activities include the appropriate use of alternative collateral, as provided for in GLBA. Revise FHFA examination guidance to include requirements that its examiners periodically assess the FHLBanks’ alternative collateral policies and practices, similar to the manner in which other forms of collateral, such as single-family mortgages, are assessed. Specifically, FHFA should revise its guidance to ensure that examiners periodically assess the FHLBanks’ analytical basis for either (1) not accepting alternative collateral, or (2) establishing their haircuts and other risk-management policies for such collateral. Enforce regulatory requirements that the FHLBanks’ strategic business plans include quantitative performance goals for products related to agricultural and small business financing, including the use of alternative collateral as appropriate. Consider requiring the FHLBanks, through a process of market analysis and consultations with stakeholders, to periodically identify and address agricultural and small business financing needs in their communities, including the use of alternative collateral. Such requirements could be established through revisions to FHFA’s regulations for Targeted Community Development Plans or strategic business plans or through other measures as deemed appropriate. We provided a draft of this report to FHFA for its review and comment. We received written comments from FHFA’s Acting Director, which are reprinted in appendix II. 
In its comments, FHFA expressed certain reservations about the analysis in the draft as discussed below, but agreed to implement our recommendations. Specifically, FHFA stated that the agency would (1) review each FHLBank’s policies and practices, starting with the 2011 annual supervisory examination cycle, to assure that they can substantiate their collateral practices and are meeting their CFI members’ liquidity needs; (2) issue an Advisory Bulletin to the FHLBanks that provides supervisory guidance on how to include goals for alternative collateral in the preparation of FHLBank strategic business plans beginning in 2011, and review those plans to ensure they include such goals; and (3) direct the FHLBanks to document their outreach and alternative collateral needs assessment efforts in their strategic business plans, and instruct examiners to monitor the FHLBanks’ efforts in these areas as part of the agency’s ongoing supervisory review. FHFA also provided technical comments, which we incorporated as appropriate. In commenting on the draft report, FHFA said that it has no evidence that any CFI member is collaterally constrained and unable to access advances as a result of the FHLBanks’ collateral risk management practices. FHFA also said that it has no evidence that any issues discussed in the draft report have resulted in or contributed to a lack of liquidity for small farm, agriculture, and small business lending. In addition, FHFA noted that in many cases, CFI members obtain sufficient liquidity by pledging real estate-related collateral and, therefore, CFI members’ ability to obtain an advance is not limited by the type of collateral they have. 
While our draft report noted that most CFIs may not be collaterally constrained, we identified nearly 800 CFIs, constituting about 13 percent of all CFIs, that may face challenges in obtaining an advance using traditional collateral because they have substantial amounts of small business and agricultural collateral on their books. Further, we interviewed a nongeneralizable sample of 30 of these CFIs and found that half of them expressed concerns with FHLBank haircuts and other policies related to alternative collateral. Several CFIs said that the haircuts applied to alternative collateral were a factor in their decision not to pledge alternative collateral to secure an advance. In agreeing to implement the recommendations, FHFA will have the information necessary to help assess the extent to which CFIs may face challenges in obtaining financing as well as the appropriateness of FHLBank alternative collateral policies and practices. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to other interested congressional committees and to the Acting Director of the Federal Housing Finance Agency. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-8678 or shearw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. The objectives of this report were to (1) discuss factors that may limit the use of alternative collateral to secure Federal Home Loan Bank (FHLBank) advances; and (2) assess selected aspects of the Federal Housing Finance Agency’s (FHFA) oversight of the FHLBanks’ alternative collateral policies and practices. 
To address the first objective, we reviewed relevant sections of the Gramm-Leach-Bliley Act of 1999, the Housing and Economic Recovery Act of 2008, and FHLBank collateral policies and procedures, particularly those pertaining to alternative collateral. While we were able to review each FHLBank's collateral policies and procedures, the confidentiality of such information limited what we could publicly disclose in our report. Specifically, because the collateral haircut policies of some of the FHLBanks generally are considered to be proprietary, we were unable to specify the policies of individual FHLBanks. Where appropriate, we used an alphabetic system when discussing FHLBank collateral policies and limited discussion of details to ensure the protection of the FHLBanks' identities. We also conducted interviews with representatives from FHFA, the regulator of the FHLBank System; the 12 FHLBanks; the Council of Federal Home Loan Banks; and the Independent Community Bankers of America. In addition, we obtained information from a nongeneralizable, random sample of 30 Community Financial Institutions (CFI). To develop the nongeneralizable, stratified random sample of 30 CFIs, we first identified the population of CFIs that may have limited sources of traditional collateral to secure FHLBank advances. To identify CFIs that may have relatively large volumes of agriculturally related loans on their books, we used the Federal Deposit Insurance Corporation's (FDIC) definition of an agricultural bank; that is, a bank having 25 percent or more of its loans associated with agricultural lending. FHFA provided a list of 6,281 CFI members as of September 30, 2009—of which 470 met FDIC's definition of an agricultural bank, meaning that agricultural loans made up at least 25 percent of their total loans. (We note that the report includes updated data on agricultural banks as of year-end 2009.)
To identify CFIs that may have relatively large volumes of small business loans on their books, we used information from the Small Business Administration's (SBA) Office of Advocacy. Specifically, because there is no similar threshold to define a small business lender, we used the SBA's Office of Advocacy's determination of the top 10 percent of small business lenders in each state to determine the small business sample population. We then matched and merged this list of institutions, by institution name, with FHFA's list of CFI members. The resulting list included 326 small business CFIs and their total assets for each FHLBank district. The final sample population of agricultural and small business CFIs totaled 796. From this final sample population, we identified 10 lenders that met the definition of both an agricultural and small business CFI. We sampled this dual-status CFI population separately because of its potential to provide a unique perspective on alternative collateral in the FHLBank System. Our sample was stratified to ensure that it included the perspective of CFIs located in FHLBank districts that had (1) high, some, or no use of alternative collateral, as of year-end 2008; and (2) banks that are very small, meaning less than $100 million in total assets. We defined an FHLBank as having had a "high" acceptance of alternative collateral if it accepted more than $500 million in alternative collateral in 2008; "some" acceptance if it accepted from $1 to $500 million in alternative collateral in 2008; and "no" acceptance if it accepted no alternative collateral ($0) in 2008. We then oversampled within each stratum to accommodate refusals to participate and randomly selected a nongeneralizable sample of 30 CFIs (see table 6). To obtain information from the CFIs in our sample, we used a Web-based protocol to conduct structured telephone interviews. We obtained 29 of the responses by telephone and 1 by e-mail.
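The stratification and oversampling steps described above can be sketched in code. This is an illustrative sketch only: the record fields, strata counts, and selection routine are assumptions, not GAO's actual sampling procedure. Only the cutoffs (more than $500 million in accepted alternative collateral for a "high" district, less than $100 million in total assets for a very small bank) come from the text.

```python
import random

def acceptance_stratum(district_alt_collateral_2008):
    """Classify an FHLBank district by its 2008 alternative-collateral acceptance,
    using the dollar cutoffs stated in the methodology."""
    if district_alt_collateral_2008 > 500_000_000:
        return "high"
    elif district_alt_collateral_2008 > 0:
        return "some"
    return "none"

def stratified_sample(cfis, target=30, oversample_factor=2, seed=1):
    """Group CFIs by (district acceptance stratum, very-small flag), draw a random
    oversample from each stratum to accommodate refusals to participate, then
    trim the shuffled pool to the target sample size."""
    rng = random.Random(seed)
    strata = {}
    for cfi in cfis:
        key = (acceptance_stratum(cfi["district_alt_collateral_2008"]),
               cfi["total_assets"] < 100_000_000)  # very small: < $100 million
        strata.setdefault(key, []).append(cfi)
    per_stratum = max(1, (target * oversample_factor) // len(strata))
    picks = []
    for members in strata.values():
        picks.extend(rng.sample(members, min(per_stratum, len(members))))
    rng.shuffle(picks)
    return picks[:target]
```

Because the sample is drawn within strata rather than from the pooled population, even a small sample of 30 is guaranteed to include institutions from districts with high, some, and no acceptance of alternative collateral; the results remain nongeneralizable, as the report notes.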
We used data from FHFA on the use of alternative collateral throughout the FHLBank System and information from our interviews with the 12 FHLBank representatives to develop our structured interview. We pretested the structured interview protocol and made revisions as necessary. Questions from the structured interview focused on the background and local economies of the CFIs, their use of products and services from their local FHLBank, and their views of and experience with pledging alternative collateral to obtain an advance from an FHLBank. The views expressed by representatives of the CFIs in our sample cannot be generalized to the entire population of all CFIs. To present details and illustrative examples regarding the information obtained from the CFI interviews, we analyzed the narrative (open-ended) and closed-ended responses and developed summaries. These summaries were then independently reviewed to ensure that original statements were accurately characterized. To assess the FHFA and FDIC data used in our analyses, we interviewed agency officials knowledgeable about the data. In addition, we assessed FHFA, FDIC, and SBA’s Office of Advocacy data for obvious outliers and missing information. To assess the accuracy of the SBA’s Office of Advocacy and FHFA data, we compared a sample of it against public information from the Federal Financial Institutions Examination Council’s Uniform Bank Performance Report, which is an analytical tool created for bank supervisory, examination, and management purposes and can be used to understand a bank’s financial condition. We determined that the data were sufficiently reliable for the purpose of this engagement. For the second objective, we reviewed FHFA’s examination policies and procedures and federal internal control standards, as well as a total of 23 FHFA and Federal Housing Finance Board (the FHFA predecessor) examinations covering each of the 12 FHLBanks over the past three examination cycles. 
We reviewed FHFA’s regulation pertaining to the development of strategic business plans and we reviewed 11 FHLBanks’ plans for 2010; and 1 plan submitted for 2009 because it was the most recently available for that FHLBank. Additionally, we reviewed FHFA’s regulation pertaining to the development of Targeted Community Lending Plans and we reviewed each of the 12 FHLBanks’ plans for 2010. We also discussed FHFA’s oversight program for alternative collateral with senior agency officials. Finally, we conducted limited analysis to gain a perspective on the level of FHLBank haircuts applied to alternative collateral. To do so, we obtained and reviewed documentation of analyses from 3 FHLBanks; the other 9 FHLBanks generally did not provide such documentation. Confidentiality considerations limited the amount of information we could disclose about the analyses from the 3 FHLBanks that provided documentation. We also obtained and analyzed data from FDIC on the estimated losses from banks that failed or were on the verge of failure, by various loan types, for the period January 2009 through February 2010. These data were obtained through asset specialists who were contracted by FDIC to review the asset portfolios of failed institutions and to develop anticipated loss rates, expressed as a percentage of outstanding loan balances, on the various categories of the banks’ asset portfolios. As discussed in this report, this approach has several important limitations, including not providing a historical basis for estimating the risks associated with alternative collateral over time or controlling for any other factors that may be related to the characteristics of the loans made by the banks. To assess the reliability of the FDIC data, we interviewed agency officials knowledgeable about the data. 
In addition, these data are corroborated by information from our CFI interviews and several independent reports that suggest the agricultural sector has performed somewhat better than the broader economy in recent years. We determined that the data were sufficiently reliable for the purpose of this engagement, which was to understand the FHLBanks' collateral haircuts relative to the recent performance of alternative collateral assets in the financial markets. We conducted this performance audit from October 2009 to July 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Wesley M. Phillips, Assistant Director; Benjamin Bolitzer; Tiffani Humble; Ronald Ito; Fred Jimenez; Grant Mallie; Timothy Mooney; Linda Rego; Barbara Roesmann; Jerome Sandau; and Rebecca Shea made key contributions to this report.
The Federal Home Loan Bank System is a government-sponsored enterprise comprising 12 regionally based Federal Home Loan Banks (FHLBank), the primary mission of which is to support housing finance and community and economic development. Each FHLBank makes loans (advances), traditionally secured by single-family mortgages, to member financial institutions in its district, such as banks. In 1999, the Gramm-Leach-Bliley Act (GLBA) authorized FHLBanks to accept alternative forms of collateral, such as agricultural and small business loans, from small members. GAO was asked to assess (1) factors that may limit the use of alternative collateral; and (2) selected aspects of the Federal Housing Finance Agency's (FHFA) related regulatory oversight practices. GAO reviewed FHLBank policies and FHFA documentation and interviewed FHLBank and FHFA officials, as well as a nongeneralizable random sample of 30 small lenders likely to have significant levels of agricultural or small business loans in their portfolios. FHLBank and FHFA officials cited several factors to help explain why alternative collateral represents about 1 percent of all collateral that is used to secure advances. These factors include a potential lack of interest by small lenders in pledging such collateral to secure advances or the view that many such lenders have sufficient levels of single-family mortgage collateral. Officials from two FHLBanks said their institutions do not accept alternative collateral at all, at least in part for these reasons. Further, FHLBank officials said alternative collateral can be more difficult to evaluate than single-family mortgages and, therefore, may present greater financial risks. To mitigate these risks, the 10 FHLBanks that accept alternative collateral generally apply higher discounts, or haircuts, to it than to any other form of collateral, which may limit its use.
For example, an FHLBank with a haircut of 80 percent on alternative collateral generally would allow a member to obtain an advance worth 20 percent of the collateral's value. While GAO's interviews with 30 small lenders likely to have significant alternative collateral on their books found that they generally valued their relationships with their local FHLBanks, officials from half said the large haircuts on alternative collateral or other policies limited the collateral's appeal. FHFA's oversight of FHLBank alternative collateral policies and practices has been limited. For example, FHFA guidance does not direct its examiners to assess the FHLBanks' alternative collateral policies. As a result, the FHLBanks have wide discretion to either not accept alternative collateral or apply relatively large haircuts to it. While the FHLBanks may view these policies as necessary to mitigate potential risks, 9 of the 12 FHLBanks did not provide documentation to GAO to substantiate such policies. Further, the documentation provided by three FHLBanks suggests that, in some cases, haircuts applied to alternative collateral may be too large. Also, the majority of the FHLBanks have not developed quantitative goals for products related to agricultural and small business lending, such as alternative collateral, as required by FHFA regulations. FHFA officials said that alternative collateral has not been a focus of the agency's oversight efforts because it does not represent a significant safety and soundness concern. However, in the absence of more proactive FHFA oversight from a mission standpoint, the appropriateness of FHLBank alternative collateral policies is not clear. FHFA should revise its examination guidelines to include periodic analysis of alternative collateral, and enforce its regulation pertaining to quantitative goals for products related to agricultural and small business lending. FHFA agreed with these recommendations.
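The haircut arithmetic in the example above can be stated explicitly: the maximum advance equals the collateral's value times one minus the haircut. The function below is a minimal sketch of that relationship, not any FHLBank's actual collateral valuation methodology.

```python
def max_advance(collateral_value, haircut_pct):
    """Maximum advance obtainable against pledged collateral, given a
    haircut expressed as a percentage of the collateral's value."""
    if not 0 <= haircut_pct <= 100:
        raise ValueError("haircut must be between 0 and 100 percent")
    return collateral_value * (100 - haircut_pct) / 100

# As in the report's example: an 80 percent haircut on $1 million of
# alternative collateral supports an advance of only $200,000
# (20 percent of the collateral's value).
```

By the same arithmetic, a smaller haircut on the same $1 million of collateral would support a much larger advance, which is one way to see why relatively large haircuts limit alternative collateral's appeal to members.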
DOE is the steward of a nationwide complex of facilities created during World War II to research, produce, and test nuclear weapons. Now that the United States is reducing its nuclear arsenal, DOE has shifted its focus toward cleaning up the enormous quantities of radioactive and hazardous waste resulting from weapons production. This waste totals almost 30 million cubic meters—enough to cover a football field 4 miles deep. DOE expects that environmental restoration will continue until 2070 before all of its problems have been addressed. Remediation activities at DOE's facilities are governed by the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) of 1980, as amended, and the Resource Conservation and Recovery Act (RCRA) of 1976, as amended. These laws lay out requirements for identifying waste sites, studying the extent of their contamination and identifying possible remedies, and involving the public in making decisions about the sites. At each facility we visited, DOE has signed an interagency agreement with the Environmental Protection Agency (EPA) and state regulators laying out the facility's schedule for meeting the requirements of CERCLA and other environmental laws. CERCLA offers three methods for determining how a waste site will be remediated: the full CERCLA process, interim remedial measures, and removal actions. For each of these methods, table 1 shows the key documents and related activities required before remediation can begin. Generally, DOE's guidance recommends that EPA and state regulators be involved at each of these steps. In addition, other documents not requiring regulatory approval frequently may supplement the documents shown. For example, for the full CERCLA process, DOE often issues reports for each phase of the remedial investigation for a group of waste sites, and before embarking on a remedial design, DOE generally prepares a remedial design work plan.
Removal actions are the most abbreviated of the three planning processes. A removal action can be used to plan for remediating a waste site to the point that no further action is needed, or it can serve as a stopgap measure for a waste site that presents an urgent threat to the public or the environment. At some point after the remediation is concluded, a removal action, like an interim remedial measure, requires a record of decision to certify that the site is clean and no further action is required. Because removal actions generally require much less characterization and planning than other approaches except in emergency situations, they are most effective at sites where the contaminants and the probable remedy are relatively well known. Although removal actions in the private sector are limited to projects costing $2 million or less and taking 12 months or less, these limits do not apply at federal facilities. Available data indicate that removal actions save time and money compared with other planning approaches. Furthermore, removal actions have been used across a wide variety of environmental restoration projects, including the same kinds of projects that have been planned using the other approaches. Removal actions may also provide other benefits, such as reducing continued risks to the environment by moving projects more quickly to actual cleanups. Through January 1996, the five facilities we reviewed had a total of 39 removal actions either completed or under way. Three facilities (INEL, Hanford, and Rocky Flats) provided data allowing some comparisons of the relative time and cost involved in removal actions and other types of planning efforts. As figure 1 shows, at all three facilities the average time needed for planning was considerably shorter under removal actions than under the other approaches. 
At INEL, for example, planning for cleanups under removal actions averaged 4.4 months, compared with 15.2 months under interim remedial measures and 25.6 months under the full CERCLA process. Cost comparisons show the same pattern. As figure 2 shows, at INEL and Rocky Flats, under removal actions, the cost for characterization and studies before cleanup averaged $140,000, compared with almost $2 million under either interim remedial measures or the full CERCLA process. More limited data for Hanford support the same conclusion. The last five removal actions cost an average of about $790,000 for cleanup planning. These sites are now clean, or remediation is under way. In contrast, for the 18 areas along the Columbia River where Hanford plans to use interim remedial measures to manage the cleanup, the cost of preparation averaged $4.4 million per area between October 1991 and September 1995. Remediation has not begun at any of these areas. When examined on a project-by-project basis, planning for removal actions also appears to be cheaper and faster than planning at comparable sites for the other environmental restoration processes. Many of DOE’s waste sites fit into one of three categories: burial grounds, contaminated soil, or contaminated water. Burial grounds may contain radioactive and/or hazardous solid and/or liquid waste. Buried in them are such things as barrels of chemicals and other material and equipment from DOE facilities. (See fig. 3.) Soil may have been contaminated by leaks or spills or by using liquid waste disposal facilities, such as trenches and waste ponds, to disperse contaminated liquids. (See fig. 4.) Surface water or groundwater may have been contaminated by radioactive or hazardous materials leaching through the soil from spills and leaks or through normal operations. (See fig. 5.) At Hanford and INEL, where more complete comparative information was available, we analyzed the removal actions that fell into these categories. 
We found four instances in which removal actions had been used at sites where conditions were reasonably comparable to those at sites that had been addressed under interim remedial measures (see table 2). In each case, planning for the remediation was accomplished much more quickly and at substantially less cost using a removal action. While the projects being compared are not identical, their similarities provide a reasonable basis for comparing the relative time and cost required to complete the planning that precedes remediation. The relative speed of removal actions can provide other advantages to DOE. Because removal actions progress to actual cleanup more quickly than other CERCLA processes, removal actions can provide information about waste sites that is useful in focusing other types of remediation. For example, one removal action at Hanford involved cleaning up a liquid disposal area near one of the shutdown reactors. The project manager for the removal action said important information obtained during the removal action on the extent and spread of contamination through the soil will be used to plan and conduct cleanups near other shutdown reactors, saving both time and money. Removal actions may also reduce the cumulative risk to human health and the environment. For example, Hanford’s removal action in a trench near the Columbia River reduced the concentration of uranium in the groundwater from up to 28 times the drinking water standard to below the drinking water standard. Without the removal action, uranium would have continued to leach into the groundwater for at least 3 years before a planned water treatment facility was completed. At Oak Ridge, the EPA region 4 administrator praised a recent removal action that successfully reduced radioactive strontium releases by about 40 percent. 
He noted that the projects were completed in less time, at less cost, and with equal or greater effectiveness than the “typical” decision-making process would have allowed. He also attributed the results to teamwork and cooperation between DOE and the regulators. Finally, removal actions may allow DOE to “pull in its fences” by cleaning up isolated waste sites on the outskirts of a facility and thereby reduce the number of acres requiring DOE’s control. For example, two removal actions addressing waste sites on remote portions of the Hanford reservation allowed DOE to complete the remediation of 27 percent, or 153 square miles, of Hanford’s total land area. In February 1996, a record of decision was issued requiring no further cleanup for these areas. Although DOE’s guidance calls for using removal actions where appropriate, the use of these actions varies widely by facility—from greater use at two locations, to increasing use at one location, to very limited use at the remaining two locations. While many contaminated waste sites are similar in type to those already remediated through removal actions, DOE officials have given several reasons for not using removal actions more often. They have noted, for example, that the interagency agreements and contracts governing DOE’s environmental restoration do not encourage the use of removal actions, and they expressed a preference for using removal actions only in urgent situations. Not all waste sites may best be addressed through removal actions; however, there are still additional opportunities to accelerate the progress of DOE’s environmental restoration through wider use of this approach. In August 1994, DOE and EPA adopted a policy encouraging the use of streamlined approaches to remediate waste sites. 
The policy encourages DOE managers to use removal actions, among other tools, when doing so “will achieve results comparable to a remedial action, but which may be completed in less time.” The policy recommends that managers give strong consideration to using removal actions in nonemergency situations. DOE issued further guidance to its facilities in November 1995, reiterating that removal actions and other accelerated approaches should be based on consensus between DOE and its regulators. At the five facilities we reviewed, the response to DOE’s policy has varied. Three facilities are adjusting their environmental restoration strategies to make greater use of removal actions, while the other two continue to plan only a limited role for the approach. Both Rocky Flats and INEL are planning to use removal actions to address significant portions of their waste sites. A Rocky Flats manager responsible for cleanup estimates that 27 waste sites will require remediation, and she plans to use removal actions for about half of them. She said using removal actions will be important to accomplishing remediation milestones because DOE officials at Rocky Flats proposed a new interagency agreement requiring several waste sites to be remediated each year. These specific remediation goals were also reflected in DOE’s contract with the contractor responsible for the remediation at Rocky Flats. For example, in fiscal year 1996 the contractor is required to clean up three high-priority waste sites at the plant. The contractor’s manager responsible for environmental restoration said that without using removal actions, these goals would be difficult or impossible to achieve. The state regulator for Rocky Flats added that removal actions will permit DOE to do more with fewer resources. DOE and regulatory officials said that the old interagency agreement focused almost exclusively on completing milestones required under the full CERCLA planning process. 
As a result, they said, the old agreement made it difficult to use removal actions. At INEL, DOE officials have the flexibility under their agreement to use removal actions where appropriate. Since 1993, INEL has reallocated funds and has conducted nine removal actions, including remediating contaminated soil at several sites. INEL has three other removal actions planned, including removing almost 300,000 cubic yards of contaminated soil, recovering ammunition and other ordnance scattered over several square miles, and removing 11 underground storage tanks of up to 50,000 gallons each. DOE’s Director for Environmental Restoration at INEL said the facility uses removal actions to maximize the cleanup that can be achieved with available funds. However, she noted that at some point the results of the removal action still need to be evaluated under the CERCLA process to ensure that no further action is required. Managers from Idaho’s Department of Health and Welfare who oversee environmental restoration at INEL said they consider removal actions to be effective and to save both time and money. They said that if DOE asked to use removal actions instead of other more extensive CERCLA planning processes, they would consider removal actions an acceptable alternative. While Oak Ridge has not relied extensively on removal actions in the past, officials at the facility now expect to use removal actions more frequently. Between fiscal years 1991 and 1995, Oak Ridge conducted seven removal actions. However, Oak Ridge has four removal actions planned for fiscal year 1996 and has compiled a list of 10 candidate removal actions to be carried out in the next 2 fiscal years. DOE officials believe that removal actions should be used when they can be done quickly and cost-effectively. Compared to the other three facilities, Hanford and Savannah River plan to rely less on the use of removal actions. 
At Hanford, officials previously pursued removal actions actively, but they are no longer doing so. In 1991, Hanford issued a cleanup strategy (called the Past Practice Strategy) proposing that all waste sites be considered as potential candidates for the removal action approach. Hanford had a contractor group dedicated to selecting, planning, and conducting removal actions. This group identified about 25 projects as candidates for removal actions. Seven actions were initiated before the group was dissolved in 1993 as part of a reorganization of responsibilities. Since then, although the Past Practice Strategy encouraging the use of removal actions has remained in effect, Hanford has initiated only one removal action. DOE, EPA, and state regulators have agreed to pursue interim remedial measures as the primary CERCLA planning process at the installation. Likewise, Savannah River has made only limited use of removal actions. Since fiscal year 1991, Savannah River has performed seven removal actions. None of these actions has been intended to serve as the final remediation for the waste site. Savannah River staff plan three additional removal actions for fiscal year 1996, but these projects, much like the removal actions carried out in the past, are stopgap measures, designed to control vegetation on three waste sites, and are not intended to be final actions. Of the more than 3,000 waste sites located at the five facilities, many are similar to those that have been addressed through removal actions. The 39 removal actions we studied addressed 4 burial grounds, 5 cases of groundwater or surface water contamination, and 21 instances of soil contamination. While many untreated sites may require no cleanup, hundreds will require further action. Many involve liquid waste disposal facilities, burial grounds, contaminated soil, and contaminated groundwater—conditions similar to those at waste sites that DOE has addressed through removal actions. 
For example, of the 498 identified waste sites along the Columbia River at Hanford, 54 are burial grounds and 108 are liquid waste disposal facilities. Our analysis and discussions with DOE and regulatory officials at the facilities we visited suggest that six factors limit the wider use of removal actions. Removal actions are not part of the agreements with regulators or DOE contractors. Generally, interagency agreements have not included removal actions. Instead, these agreements have often incorporated the steps included in lengthier CERCLA planning processes. The extensive planning and evaluation processes characteristic of the full CERCLA and interim remedial approaches, including the preparation of work plans and various reports, were specified in each of the agreements we reviewed. For example, at Savannah River, DOE and its regulators established milestones for fiscal year 1996 calling for the submission of almost 50 documents required under CERCLA, such as remedial investigation reports and proposed plans. Like the interagency agreements, DOE’s contracts emphasize completing steps in the process rather than performing cleanup actions, and they provide few specific incentives for remediation. For example, at Savannah River the incentive goal is tied to meeting the interagency agreement milestones on time and doing the work at less cost. Similarly, at Hanford, over half of the incentive is tied to improving the contractor’s operating processes, and less than 20 percent is tied solely to performing the actual remediation. In contrast, in order to accomplish remediation more quickly, DOE and the regulators at Rocky Flats are revising their agreement to establish remediation-based instead of process-based milestones. In the interim, they have agreed to remediate two trenches in fiscal year 1996. DOE is already implementing this change with its Rocky Flats contractor. 
In fiscal year 1996, the contractor will remediate the two trenches and one other waste site as directed by DOE. The contractor said this results-oriented strategy will force the greater use of removal actions because none of the other planning approaches can be used to complete the work on schedule. At Oak Ridge, officials attribute their more frequent use of removal actions to a change in their interagency agreement. The agreement now requires regulators to be involved in removal actions. Oak Ridge officials believe the change has increased the regulators’ acceptance of removal actions. Perceptions about when removal actions should be used are incorrect. Some DOE and regulatory officials told us that they believe removal actions are intended for emergency situations or for planning relatively small, uncomplicated remediation projects, not for “mainstream” cleanups. For example, at Hanford, DOE conducted a time-critical action to remove buried barrels containing solvents because the barrels were leaking and threatened to contaminate the Columbia River. A deputy director of environmental restoration at Hanford said that he would consider using a removal action in the future if a waste site were continuing to release contamination that posed a significant threat to human health or the environment. However, he does not view removal actions as appropriate for Hanford’s normal cleanup operations at sites where no urgent threat exists. The view that removal actions should be limited to urgent or small, uncomplicated remediation projects is not supported by DOE’s and EPA’s guidance or by experiences at the sites we visited. As discussed above, DOE and EPA jointly issued policy in 1994 encouraging the use of removal actions in nonemergency situations as long as CERCLA’s regulations were followed. Furthermore, DOE has successfully used removal actions when an urgent threat has not existed or when large or complex problems have required attention. 
Preference is given to streamlining full CERCLA and interim remedial planning approaches. As a way to shorten the time before remediation can begin, officials at some sites are concentrating on shortening the steps of lengthier CERCLA planning processes. These officials estimate that the streamlining will reduce the time required in various planning steps. For example, DOE officials at Savannah River estimate that by streamlining the full CERCLA process they will be able to reduce the average time required to plan for a cleanup from 4 years to 3 years. However, planning and evaluation will still take significantly longer under streamlined CERCLA processes than under removal actions. At Oak Ridge, for example, the expedited CERCLA process laid out in the site’s interagency agreement is expected to take 6 years. In some cases, Oak Ridge officials expect to further shorten the full CERCLA process to about 3.5 years. However, under Oak Ridge’s interagency agreement, removal actions are scheduled to take only 14 months. At Savannah River, the streamlined planning process is expected to take 3 years, whereas removal actions are estimated to require only 6 to 12 months. At Hanford, DOE and its regulators have agreed to eliminate certain documents required by the interim remedial process, but they were unable to estimate how much time and money would be saved. Planning has progressed too far to benefit from the simpler removal action process. Several DOE officials at these facilities said that, for many waste sites, the investigative studies for the full CERCLA and interim remedial processes have progressed so far that there would be little benefit from switching to removal actions. For example, officials at Hanford pointed out that they expect most high-priority waste sites in the environmentally sensitive area next to the Columbia River to be ready for cleanup in 1 to 3 years, making removal actions unnecessary. 
We found instances, however, in which the use of removal actions has been effective even after planning for remediation under lengthier processes has been partially completed. Officials at Rocky Flats and INEL used information gathered under lengthier CERCLA processes as the basis for removal actions, thereby accomplishing these actions more quickly than they would otherwise have done. For example, INEL officials used the remedial investigation report from the full CERCLA process as the engineering evaluation for a removal action to remove radioactively contaminated soil from six waste sites. INEL officials estimate that changing to a removal action speeded the actual remediation by several years and saved $2.6 million. At Oak Ridge, the state regulator said that at some sites cleanups now under the full CERCLA process may be converted to removal actions. He said that Oak Ridge’s focus is increasingly on getting into the field. Limited planning may increase the risk that an incorrect remedy will be chosen. Frequently, the extent and nature of contamination at DOE waste sites are not well known. Of the 39 removal actions we reviewed, 1 incurred added or unnecessary costs because the actual conditions at the site were different from the expected ones. At Hanford, DOE conducted a removal action to excavate old drums thought to contain residues of a hazardous chemical. Upon excavation, DOE found no significant contamination in the pit. Fuller characterization before excavating the site might have helped to avoid the expense of excavation. However, a state regulator at Hanford said that full characterization of the burial ground would have cost more than the excavation. A removal action may not be the final solution. A final issue that was raised at several facilities was that, in contrast to the full CERCLA process, a removal action is an interim solution that must be documented through a record of decision after the action has been completed. 
EPA officials said that potential problems with final decisions could be significantly reduced by encouraging public participation and close cooperation between the regulators and DOE. DOE officials at INEL also stressed the importance of securing the regulators’ agreement with the proposed removal actions, particularly at sites where little is known of the contamination and the effectiveness of the planned remedial technology is unclear. DOE officials also expressed concern that when the final decision is proposed to the public and the regulators, additional remediation could be required. Of the 39 removal actions we studied, 26 were intended to be the final solution. None of the 26 is expected to require additional remediation when the record of decision is completed, but only one record of decision covering 4 removal actions at Hanford has been completed. In addition, interim remedial measures, which are widely used by DOE, also require a record of decision after the measures have been implemented. More extensive use of removal actions would provide a means for speeding the planning process and devoting more environmental restoration dollars to actual remediation at sites. We recognize that not every waste site is appropriate for the abbreviated planning that takes place under removal actions; however, the successful use of removal actions at a variety of environmental restoration sites throughout the DOE complex indicates that additional opportunities exist to employ this cost- and time-saving approach. We recommend that the Secretary of Energy direct the managers of DOE’s facilities, working with their regulators, to reevaluate their environmental restoration strategies to ensure the maximum possible use of removal actions. 
Where appropriate, this action may include systematically evaluating each waste site where actual cleanup has not yet begun, including those sites where a lengthier assessment process is under way, to identify the sites where using a removal action would be feasible and cost-effective; seeking agreement to eliminate requirements in existing interagency agreements that favor lengthier review and assessment processes in exchange for a commitment to achieving significant cleanup progress through removal actions; and identifying and implementing incentives for DOE’s contractors that would increase the emphasis on, and the reward for, pursuing removal actions where appropriate. We provided a draft of this report to DOE and EPA for their review and comment. We discussed the report with officials from DOE’s Office of Environmental Restoration, including the Director of the Office of Program Integration, and with officials from EPA’s Federal Facilities Enforcement Office, including the Senior Enforcement Counsel. Overall, the officials agreed that the report was accurate. Both agencies provided some technical comments that we have incorporated in the report. DOE agreed with our conclusion that removal actions can be completed in less time and are less costly than other approaches. However, DOE said that the report implies that DOE has more discretion to initiate removal actions than the Department believes that it has. DOE said that the report did not give enough emphasis to the barriers, such as the requirements in interagency agreements, that the Department faces in using removal actions at more waste sites. DOE also noted that it is supporting revisions to CERCLA to increase its flexibility. We have modified our report to reflect DOE’s concerns; however, we continue to believe that DOE can do more to overcome these barriers. 
EPA said that it generally supports the increased use of removal actions where it and/or state regulators have had the opportunity to coordinate with DOE. EPA suggested that removal actions could be enhanced by closer cooperation between regulators and DOE through the use of teams and early efforts to include the public in decisions about using removals. EPA also suggested that DOE document the savings in time and cost from using removal actions by collecting comparative data to improve the public’s and regulators’ acceptance of removal actions. We agree that these are steps that DOE should consider. We conducted our review at Hanford in Washington State, INEL in Idaho, Oak Ridge Reservation in Tennessee, Savannah River in South Carolina, and Rocky Flats in Colorado. We selected these facilities because DOE estimates that they will account for about 94 percent of the total cost of restoring the DOE complex. To determine whether removal actions have been successful in speeding cleanups, reducing costs, and providing other benefits, we attempted at each facility to gather data on the time spent and the costs incurred to plan waste sites’ remediation using both removal actions and lengthier CERCLA processes. We reviewed projects’ files, toured various sites restored through the removal action process, analyzed official records, and reviewed various reports. At Oak Ridge, Savannah River, and Hanford, cost data were not available on all projects. At those facilities, we obtained the cost data that were readily available. We also discussed the advantages and disadvantages of removal actions with DOE and contractor officials. To identify additional opportunities for DOE to use removal actions, we compared untreated waste sites to waste sites that had been successfully treated through removal actions. We also interviewed officials at each location and reviewed lists of potential removal actions that had been prepared at some sites. 
To identify potential barriers to the greater use of removal actions, at each location we reviewed agreements with regulators, as well as selected contracts and incentives provided to DOE contractors. We also reviewed relevant statutes and regulations, as well as EPA’s and DOE’s guidance, and discussed the Department’s guidance with DOE’s Office of Environmental Guidance. To obtain the Department’s perspective on the role of removal actions, we discussed the approach with DOE’s Office of Environmental Restoration. We also interviewed state and EPA regulators responsible for activities at the five facilities and EPA officials from the Federal Facilities Enforcement Office. We conducted our work from July 1995 to April 1996 in accordance with generally accepted government auditing standards. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 10 days from the date of this letter. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Energy, and other interested parties. We will also make copies available to others on request. Please call me at (202) 512-3841 if you or your staff have any questions. Major contributors to this report are listed in appendix I: Chris Abraham, Robert Lilly, James Noel, Delores Parrett, Angela Sanders, Bernice Steinhardt, Stanley Stenersen, and William Swick.
Pursuant to a congressional request, GAO reviewed the Department of Energy's (DOE) use of removal actions to reduce the cost and accelerate the pace of environmental restoration projects. GAO found that: (1) removal actions save money and time compared with other remediation planning approaches; (2) the use of removal actions may provide information that is useful for other types of remediation, reduce the cumulative risk to human health and the environment, and reduce the size of sites under DOE control; (3) the use of removal actions at DOE facilities varies; (4) the use of removal actions is limited because removal actions are not part of interagency agreements with regulators or DOE contractors; (5) some officials believe that removal actions are intended for emergency situations or for planning small remediation projects; (6) officials at some sites are concentrating on streamlining Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) and interim remedial planning approaches, but planning and evaluation will still take significantly longer under simpler CERCLA processes; and (7) limited planning may increase the risk that an incorrect remedy will be chosen.
The Army’s purchase card program is part of the governmentwide Commercial Purchase Card Program established to simplify federal agency acquisition processes by providing a low-cost, efficient vehicle for obtaining goods and services directly from vendors. DOD has mandated the use of the purchase card for all purchases at or below $2,500 and has authorized the use of the card to pay for specified larger purchases. DOD has had significant growth in the program since its inception and estimates that in fiscal year 2001 about 95 percent of its transactions of $2,500 or less were made by purchase card. The purpose of the program was to simplify the process of making small purchases. It accomplished this goal by allowing cardholders to make micropurchases of $2,500 or less—$25,000 or less for training—without having to execute contracts. The government purchase card can also be used for larger transactions, but they still require contracts. In these cases, the Army often refers to the card as a payment card because it pays for an acquisition made under a legally executed contract. The Army uses a combination of governmentwide, DOD, and Army guidance as the policy and procedural foundation for its purchase card program. The Army purchase card program operates under a governmentwide General Services Administration purchase card contract, as do the purchase card programs of all federal agencies. In addition, government acquisition laws and regulations, such as the Federal Acquisition Regulation, provide overall governmentwide guidance. DOD and the Army have promulgated supplements to these regulations. The Assistant Secretary of Defense for Acquisition, Technology, and Logistics, in cooperation with the Under Secretary of Defense (Comptroller), has overall responsibility for DOD’s purchase card program. 
The DOD Joint Purchase Card Program Management Office, in the office of the Assistant Secretary of the Army for Acquisition Logistics and Technology, is responsible for overseeing DOD’s program. The Army agency program coordinator, within the joint office, has oversight over the Army’s purchase card program. However, primary management responsibility for the purchase card program lies with the contracting offices in the major commands and local installations. Figure 1 depicts the Army purchase card program management hierarchy as it was during our audit work. For the major commands, the figure shows the number of installation program coordinators within the command. For the five installations we audited, the figure shows the number of approving officials and cardholders at each installation. On May 1, 2002, the Army created an Office of the Deputy Assistant Secretary of the Army (Procurement) and the U.S. Army Contracting Agency. The responsibility for the Army purchase card program and the DOD Purchase Card Joint Program Management Office will be moved to the newly created office. This new Deputy Assistant Secretary’s office will be in a “transitional” status until October 1, 2002. At the installation, personnel in three positions—program coordinator, cardholder, and approving official—are collectively responsible for providing reasonable assurance that purchase card transactions are appropriate and meet a valid government need. The installation program coordinator, typically a full-time position under the direction of the director of the contracting office, is responsible for the day-to-day management, administration, and oversight of the program. In our work, we noted that program coordinators develop local standard operating procedures, issue and cancel cards, train cardholders and approving officials, and coordinate with other Army units and the card-issuing bank. 
Cardholders—soldiers and civilian personnel—are to make purchases, maintain supporting documentation, and reconcile their monthly statements. Approving officials, who typically are responsible for more than one cardholder, are to review cardholders’ transactions and the cardholders’ reconciled statements and certify the official consolidated bill for payment. Approving officials receive an official bill that consolidates their cardholders’ purchases. Appendix II provides additional details on the Army purchase card program. Management and employees should establish and maintain an environment throughout the organization that sets a positive and supportive attitude toward internal control and conscientious management. A positive control environment is the foundation for all other standards. It provides discipline and structure as well as the climate which influences the quality of internal control. GAO’s Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1, November 1999) Weaknesses in the internal control environment for the Army purchase card program at the five major commands and five installations we audited contributed to internal control breakdowns and potentially fraudulent, improper, and abusive purchases. The importance of the role of management in establishing a positive internal control environment cannot be overstated. GAO’s Standards for Internal Control discusses management’s key role in demonstrating and maintaining an organization’s integrity and ethical values, especially in setting and maintaining the organization’s ethical tone, providing guidance for proper behavior, and removing temptations for unethical behavior. Army purchase card management has not encouraged a strong internal control environment. It has not focused on ensuring an adequate environment for a greatly expanding program. 
Instead, Army purchase card management focused significant attention on maximizing the use of the purchase card for small purchases and on paying bills quickly to reduce delinquent payments, and it developed performance measures and goals for them. However, purchase card management has not focused equal attention on internal control, and it has not developed performance measures to assess the adequacy of internal control activities or set goals for them. As a result, our audit identified a weak internal control environment characterized by a lack of (1) adequate operating procedures specifying needed program management, oversight, and internal control activities and (2) oversight by all management levels over the program’s implementation at the installation level. These weaknesses are symptomatic of a purchase card infrastructure that is insufficiently robust to build and sustain a strong internal control environment. As discussed in the next section, strong internal control activities are needed to effectively manage the Army’s purchase card program and provide reasonable assurance that the billions of dollars spent under the program adhere to legal and regulatory requirements. Developing performance measures and setting performance goals are fundamental to implementing and maintaining strong internal control activities. Appropriate policies, procedures, techniques, and mechanisms exist with respect to each of the agency’s activities .... Management has identified the actions and control activities needed to address the risks and directed their implementation. GAO’s Internal Control Standards: Internal Control Management and Evaluation Tool (GAO/AIMD-01-1008G, August 2001) The Army operates its purchase card program without a specific servicewide regulation or standard operating procedures to govern purchase card activities throughout the agency. 
Instead, the Army relies on memorandums issued by the DOD and Army purchase card program offices and procedures issued by major commands and installations. Our assessment of the existing Army guidance is that it does not adequately identify and direct the implementation of needed actions and control activities. The memorandums issued by the DOD and Army purchase card program offices do not provide the Army purchase card program with a comprehensive set of policies and operating procedures that identify the actions and control activities needed to manage the program. Instead, they address such topics as cash management of certified purchase card invoices or suggest best practices, including discussions of the importance of internal control activities. Also, the memorandums often only request that Army commanding officers implement a suggested action; they do not direct that specific actions be taken within specific time frames. Such requests might not achieve the desired results. For example, an August 3, 2001, Office of the Assistant Secretary of the Army for Acquisition Logistics and Technology memorandum requested Army units’ assistance and support in implementing the DOD program office’s earlier request to assess the adequacy of purchase card program human capital resources. Because such memorandums are only requests, they do not have to result in action. In this instance, we found no evidence that the major commands or installations had made an assessment of their overall purchase card human capital resource needs. Without agencywide operating procedures, the Army has relied on its major command and local installation program coordinators to establish purchase card policies and procedures to guide approving officials, cardholders, and others involved in the purchase card program as they implement the program. The standard operating procedures for the major commands and installations we audited varied widely, and they were not adequate. 
For example, the Army Materiel Command does not have standard operating procedures, but uses a Web-based tutorial that is part of required training to guide cardholders and approving officials. A training tutorial does not carry the force of a regulation or a standard operating procedure. Consequently, installation program coordinators, such as at the Soldier, Biological and Chemical Command - Natick, developed standard operating procedures that set program implementation standards and requirements at the installation. At the installation level, the contrast between three installations illustrates the differences. As discussed above, the Soldier, Biological and Chemical Command – Natick had a detailed operating procedure that was revised during our work there to add further detailed instructions. Fort Benning, in Columbus, Georgia, did not have installation-level operating procedures. At Fort Hood, in Killeen, Texas, the installation-level procedures were supplemented with detailed procedures developed by the military units, for example, battalions and brigades, located there. Thus, the procedures at these three installations differed significantly, and even within Fort Hood, procedures varied among units. Collectively, the Army policy memorandums and the major command and installation-level operating procedures do not adequately address key control environment issues. Among the more important issues not adequately addressed are responsibilities and duties of installation-level program coordinators, controls over the issuance and assessment of ongoing need for cards, appropriate span of control for approving officials, and appropriate cardholder spending limits. In addition to the above control environment issues, we identified weaknesses in the individual control activities we tested, which we discuss in the next section of this report. 
Army guidance has not addressed the scope of responsibilities and specific duties of installation-level program coordinators, although they are the primary focal point for managing the purchase card program and generally spend all their time on the purchase card program. The importance of these program coordinators to the purchase card program cannot be overstated. During our work we noted that program coordinators develop and enforce operating procedures, establish and cancel cardholder and approving official accounts, train cardholders and approving officials, interact with the bank, and field myriad questions about the program from both cardholders and approving officials. Yet, the Army does not have guidance on how to do these activities, and it does not provide program coordinators with guidance or assistance in developing oversight activities to monitor how well their programs are functioning. Program coordinators told us that they did not get formal training in what their duties are and how they should be done. They said they had to do a lot of on-the-job learning and they called other program coordinators for advice. Little guidance exists to assist program coordinators and unit managers in selecting who should be issued a purchase card. Carefully controlling the issuance of cards and continually reassessing the need and justification for outstanding cards are important issues in controlling the government’s risk in the purchase card program. At the installations we audited, the operating procedures usually specified that unit managers, after deciding who should be a cardholder and who should be an approving official, request the installation program coordinator to process the appointments. Yet, we found little guidance at any level that provided criteria to these officials for determining how many cards a unit should have or who should have them. 
The November 2001 operating procedure at the Soldier, Biological and Chemical Command - Natick requires unit directors to provide written justification for the selection of a cardholder or approving official. However, without guidance from the Army, the command did not establish criteria to guide the directors’ decisions. In no case did we identify guidance that required cardholders to have a continuing need to make procurements for an office or organization, and none of the guidance discussed the need to reassess the ongoing need for outstanding cards. Standard operating procedures at the major commands and installations we audited do not adequately discuss what span of control for approving officials would provide reasonable assurance that they can effectively perform their responsibilities. The training program for Army Materiel Command and the standard procedures at the Soldier, Biological and Chemical Command – Natick state that an approving official should have only as many cardholders as he or she can review all monthly transactions for. Approving officials who have more cardholders than they can effectively supervise are symptomatic of a weak control environment. The Army did not provide criteria for approving officials’ span of control until July 2001, just prior to our testimony on the purchase card program at two Navy installations. The July guidance suggested a span of control of five to seven cardholders. However, this guidance had not been promulgated in major command or installation guidance as of the end of our fieldwork. Policies and procedures that addressed controlling cardholders’ spending limits were inadequate. Unit managers and approving officials coordinate with the program coordinator to set both transaction and monthly spending limits for cardholders. 
However, we found no policy guidance or procedures that provided criteria to guide them in making these decisions, except a recitation of the micropurchase spending limits, until an August 13, 2001, memorandum from the Director of Defense Procurement. This memorandum, which was in response to congressional hearings on our Navy testimony, noted that not every cardholder needs to have the maximum transaction or monthly limit and that reasonable limits based on what the person needs to buy should be set. We found that individual transaction limits were generally set at the micropurchase maximum of $2,500. Installations generally set monthly limits at a generic level, such as $10,000, $25,000, or $100,000, for most of their cardholders. We saw little evidence that limits were set based on an analysis of individual cardholders’ needs or past spending patterns. In some cases, we were told that the monthly limits were based on the anticipated peak spending to avoid possible limit changes. We also saw infrequently used cards that, nevertheless, had spending limits set at the maximum. Limits that are higher than justified by the cardholder’s authorized and expected usage unnecessarily increase the government’s exposure to potentially fraudulent, improper, and abusive purchases. As we were performing our review of the Army purchase card program and in response to our July testimony on Navy purchase card activities, DOD and Army officials have issued a number of memorandums that address some of the weaknesses that we have discussed. For example, a memorandum from the Director of Defense Procurement, issued in August 2001, said that only those personnel with a continuing need to purchase goods or services as part of their jobs should be cardholders. 
In another example, DOD’s Joint Program Office, after we requested data on inactive cards, sent a February 2002 memorandum to agency program coordinators asking that they consider canceling cards with little activity or imposing other controls, such as reducing the monthly limit to 1 dollar. However, at the locations we audited, the guidance in these and other memorandums had not been incorporated into operating procedures as of the end of our fieldwork. DOD and Army purchase card officials told us that they recognized the need for the Army to issue standard operating procedures for the purchase card program. They said that work had been ongoing on developing such procedures, which could be issued in this fiscal year. In addition, on March 19, 2002, the Secretary of Defense directed the Under Secretary of Defense (Comptroller) to establish a Charge Card Task Force to review the operations of both purchase and travel cards and to develop recommendations to improve procedures. Agency internal control monitoring assesses the quality of performance over time. It does this by putting procedures in place to monitor internal control on an ongoing basis as a part of the process of carrying out its regular activities. It includes ensuring that managers and supervisors know their responsibilities for internal control and the need to make internal control monitoring part of their regular operating processes. Ongoing monitoring occurs during normal operations and includes regular management and supervisory activities, comparisons, reconciliations, and other actions people take in performing their duties. GAO's Internal Control Standards: Internal Control Management and Evaluation Tool (GAO-01-1008G, August 2001) Ineffective oversight of the purchase card program also contributes to weaknesses in the overall control environment. 
In general, effective oversight activities would include management reviews and evaluations of how well the purchase card program is operating, including the internal control activities. We identified little monitoring or oversight activity directed at assessing program results, evaluating internal control, or identifying the extent of potentially fraudulent, improper, and abusive or questionable purchases. At no management level (Army headquarters, major command, or local installation) is the infrastructure for such activities provided. At the installation level, where the most responsibility for oversight appears to reside, guidance or training on what oversight activities should be undertaken does not exist, and the needed human capital resources to perform those activities are not in place. At the Army-wide level, the purchase card agency program coordinator—the position involving direct oversight of the Army program—does not conduct internal control oversight activities. The agency program coordinator, who is in the DOD Joint Purchase Card Program Office, has no human capital resources to conduct oversight activities. The coordinator’s activities are mainly directed at answering program operation questions from and transmitting reports to major command and installation-level program coordinators. The major commands have direct authority over the installations that report to them and have responsibility for the purchase card programs of their installations. While the major commands that we audited had procedures to guide the installations’ activities, we found little evidence of oversight activities by the commands to monitor the installations’ implementation of the procedures. The major commands’ purchase card program office personnel do participate in contract management reviews conducted at their installations every 2 years. 
These reviews, which generally are completed in 1 week, are focused on the installation’s contracting operations and have a small purchase card component. The program coordinators at the major commands we audited confirmed that they conduct little oversight of internal control activities at the local installation programs. The only significant oversight activities we identified were at the local installation level where the primary purchase card activities are taking place. However, none of the installations we audited had a comprehensive or effective program of oversight and monitoring. The oversight and monitoring activities consisted primarily of isolated inspections of approving officials’ compliance with monthly statement certification requirements and monitoring resolution of disputed transactions. Audits and inspections of the purchase card program by internal auditors can provide additional oversight of the installation-level purchase card program. For example, at the Soldier, Biological and Chemical Command – Natick, where the command had recognized that the program coordinator did not have the infrastructure to perform oversight reviews, the internal auditor provided assistance. According to the auditor, the audits are designed to ensure continued command attention and to assist the program coordinator with developing policies, procedures, and controls. However, at the installations we visited, audits and inspections were generally limited in both scope and number. For example, at Fort Hood, the internal auditors conducted occasional purchase card reviews as part of the command inspection program. Although these inspections occasionally surfaced control problems, the results were not communicated to the purchase card program coordinator so that systemic problems could be identified and addressed.
The DOD Financial Management Regulation assigns installation program coordinators the responsibility for the implementation and execution of the purchase card program in accordance with established Office of the Secretary of Defense and applicable DOD component regulations, policies, and procedures. Thus, installation program coordinators, who act under the direction of the installation’s director of contracting, are the pivotal officials in managing and overseeing the purchase card program. A comprehensive and robust management and oversight program could include a number of activities. At the installations that we audited, the program coordinators were devoting significant time and attention to some basic activities such as establishing cardholders and approving officials and providing required training to these individuals. In most cases, cardholders and billing officials were being appropriately established and were receiving the required initial training. However, we found that refresher training, required by DOD guidance for cardholders and approving officials every 2 years, was seldom provided at the five installations we audited. Program coordinators at every location except the Texas Army National Guard in Austin, Texas, told us that this training seldom, if ever, occurred because of inadequate time and human capital resources. While program coordinators were devoting time and resources to establishing cardholders and approving officials, other important activities were not receiving attention. For example, the key oversight activity identified in Army regulations is an annual review of the records of approving officials. This key activity was not effectively carried out at any of the five installations. In addition, program coordinators were not monitoring potentially abusive and questionable transactions or taking prompt and appropriate action to cancel accounts for departed and unneeded cardholders. Inspecting approving official activities.
Army guidance, reiterated in an August 2001 memorandum, provides for the installation’s program coordinator to annually inspect the records of approving officials. Our work showed that none of the program coordinators at the five installations had a comprehensive inspection program, although three program coordinators had conducted some inspections. Our work also showed that the few inspections that had been conducted focused on only a limited number of cardholders and did not include remediation plans. Without inspecting cardholders’ and approving officials’ activities and developing remediation plans, program coordinators had no structured way to determine either currently or over the long run how well their approving officials were functioning or to follow up their inspections and determine whether cardholders and approving officials had improved their performance. The following summarizes the ineffective and limited information on inspections of approving officials’ records at the audited installations. At Eisenhower Army Medical Center, the program coordinator performed a few targeted inspections in fiscal year 2001, rather than undertaking a comprehensive audit of approving officials’ activities. These inspections covered eight approving officials and 15 cardholders. At Fort Benning, the program coordinator told us that records of a few approving officials are inspected each year, but there is no specific timetable for the inspections and the results are not documented. An internal audit of the purchase card program at Fort Benning prepared for the commanding general in 2001 concluded that the program coordinator had not placed enough emphasis on oversight responsibilities. At Fort Hood, the program coordinator conducted few inspections of approving officials’ activities due to a heavy workload in establishing cardholders and approving officials and limited human capital resources.
At the Soldier, Biological and Chemical Command - Natick, the program coordinator did not conduct inspections of approving officials’ activities. However, the internal review office performed audits focused on various purchase card areas to assist the program coordinator. An April 2001 internal review audit report of the Texas Army National Guard program stated that there was no evidence that reviews had been conducted to test management controls over the purchase card program. Subsequent to the audit report, the program coordinator and the director of contracting said that they had begun to occasionally conduct a small number of reviews. Monitoring potentially abusive and questionable transactions. Program coordinators at the five installations have not routinely monitored potentially abusive transactions. Their activities in this area were generally confined to answering cardholder questions about potentially questionable aspects of proposed purchases and occasionally scanning bank data for questionable transactions. The program coordinators told us that the Army and major command purchase card offices do not require them to analyze purchase card transactions and have not provided guidance on data to be analyzed or on analysis techniques. Our own data mining efforts, including our analysis of Army-wide data, show the usefulness of these techniques and their potential for identifying transactions that contain indicators of potentially fraudulent, improper, and abusive or questionable activity, as we discuss in a later section of this report. While cardholders and approving officials are the first line of defense in preventing purchase card abuse, program coordinator activities become especially critical if the approving official is not carrying out required duties. For example, after learning that we were requesting additional details on purchases from some questionable vendors, the Fort Benning program coordinator noticed that a cardholder had purchases from such vendors.
Subsequent investigation of the cardholder revealed potentially fraudulent purchases totaling $10,748. The cardholder’s potentially fraudulent activities were not detected promptly because the approving official had not been monitoring the cardholder’s purchases or reviewing the monthly statement. Program coordinators, in addition to analyzing questionable transactions, need to analyze other purchase card data, such as bank status reports on disputed transactions. The Fort Hood program coordinator, who was not effectively monitoring bank status reports on disputed transactions, did not identify that cardholder inaction beyond the expiration date for disputes had resulted in the loss of ability to recover funds on previously disputed charges. At our suggestion, the coordinator followed up on an unresolved expired dispute and obtained credit for over $1,000 in returned unordered merchandise. Such a recovery demonstrates that a data analysis program for installation program coordinators can produce savings for taxpayers. Canceling accounts for departed cardholders. None of the program coordinators at the five installations had focused effective attention on canceling accounts of departed and unneeded cardholders prior to the completion of our fieldwork. Program coordinators can reduce the government’s exposure to fraud, waste, and abuse by monitoring cardholder account activity and determining whether issued cards continue to be required. If cards are inactive or are no longer needed because of a change in duties or other reassignment, timely cancellation is an important control. At all five installations, we identified weaknesses in the processes for canceling accounts for inactive or unneeded cardholders, and each location had significant numbers of cards that were inactive and should have been canceled. The most serious problem was that some accounts had not been canceled even though the cardholder was no longer at the installation or even with the Army.
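The kind of screening described above can be illustrated with a brief sketch. This is a minimal, hypothetical example of flagging cancellation candidates from account data; the record fields, names, and the 180-day inactivity threshold are illustrative assumptions, not features of any Army or bank system.

```python
from datetime import date, timedelta

# Hypothetical cardholder account records (illustrative only).
accounts = [
    {"cardholder": "A", "last_transaction": date(2002, 4, 20), "on_roster": True},
    {"cardholder": "B", "last_transaction": date(2001, 9, 2),  "on_roster": True},
    {"cardholder": "C", "last_transaction": date(2002, 3, 15), "on_roster": False},
]

def cancellation_candidates(accounts, as_of, inactive_days=180):
    """Flag accounts that are long inactive or whose holder has departed."""
    cutoff = as_of - timedelta(days=inactive_days)
    return [a["cardholder"] for a in accounts
            if not a["on_roster"] or a["last_transaction"] < cutoff]

print(cancellation_candidates(accounts, as_of=date(2002, 5, 1)))  # → ['B', 'C']
```

A periodic review of this sort, fed by the bank's transaction data and the installation's personnel roster, is the control the report finds missing at the audited installations.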
Each installation had a policy that the program coordinator be notified when an account should be canceled, but these policies were not effective. Even where processes existed to identify when a cardholder’s account should be canceled, they did not always work. For example, the Soldier, Biological and Chemical Command - Natick had developed a process to terminate the purchase card when the cardholder departed. The process involved a checkout procedure that required each departing cardholder to obtain a release from the program coordinator prior to being allowed to leave the installation. Yet, even with this process in place, the installation had 20 inactive cards that needed to be canceled. Although the checkout process had been developed, data had not been analyzed to evaluate whether the process was effective. This problem of unneeded cards was especially serious at Fort Hood, which also had a checkout process that included the purchase card. Available data showed that 317—26 percent—of 1,242 current cardholders at Fort Hood were no longer assigned to the units that issued their cards. Therefore, the cards should have been terminated. Neither the installation nor its purchase card program office had established processes to ensure that purchase cards of departing or reassigned personnel were canceled. As identified later in this report, failure to terminate cards of reassigned cardholders can result in potentially fraudulent transactions. The problem at Fort Hood was exacerbated by the high turnover of active duty military personnel rotating to and from the installation—Fort Hood military personnel statistics show that about 1,600 soldiers depart the installation monthly. The personnel office managing the transfers of military personnel had established checklist procedures to cancel government travel cards held by departing employees, but did not have similar procedures for canceling purchase cards.
They said that the procedure was not established because the social security numbers of cardholders are not provided for computerized matching to the social security numbers of departing soldiers. Fort Hood’s purchase card program office agreed to review records and cancel cards for reassigned personnel and to identify workaround procedures to ensure that cards of departed personnel were terminated. Following the completion of our fieldwork, Fort Hood program officials notified us in mid-May that they had canceled 258 cardholder accounts and were continuing to identify other accounts for cancellation. The command said it was also attempting to improve its checkout procedures. The above conditions illustrate that the Army as a whole may also need to reduce its active cards. Since our testimonies and report on the purchase card programs at two Navy locations, there has been concern over whether DOD has too many purchase cards and cardholders. Since that time, the Navy reports that it has reduced its total active cards from about 58,000 to about 26,000. Because the Army has not provided guidance to installations on controlling the issuance of cards or on reassessing the need for outstanding cards, the Army should also have opportunities to reduce its reported 109,000 active cards. Army officials reported that as of April 30, 2002, they had reduced the number of active accounts to about 100,000 and would continue to assess the need for cards. DOD, Army, and the major commands we audited have not provided installation-level program coordinators the infrastructure needed for program monitoring and oversight. The coordinators do not have guidance or training on what they should be doing to monitor and oversee the implementation of internal control activities. They do not have the human capital resources to perform significant monitoring and oversight activities.
And finally, they do not have grade-level positions that are commensurate with their responsibilities and that would provide some additional authority to achieve better purchase card internal control. No program guidance or training. Although installation-level program coordinators are tasked with major program management responsibilities, applicable DOD, Army, and major command guidance does not provide a statement of duties, position description, or other information on the scope, duties, or specific responsibilities for the position. The guidance also does not establish program coordinators’ oversight responsibilities. The Army and major command guidance to installation-level program coordinators is generally limited to a requirement that program coordinators review each approving official’s records and activities annually. The Army and major commands also have not developed data analysis techniques and tools for installation-level program coordinators to use in analyzing bank electronic data as a part of their oversight activities. Also, the Army and major commands have not developed training courses for program coordinators. At the five audited installations, the coordinators told us they had not received any specific program coordinator training. They said the available training was limited to cardholder training sessions either on-line or conducted by other coordinators and the General Services Administration’s annual governmentwide purchase card program conference. Thus, program coordinators essentially have had to develop program management and oversight activities and to decide how to conduct them. Inadequate human capital resources. The Army has not provided sufficient human capital resources at the installation level to enable monitoring of purchases and the development of a robust oversight program. The two key positions for monitoring purchases and overseeing the program are the program coordinator and the approving official.
While the program coordinator position is a specifically designated responsibility, we found that the coordinator has very limited assistance in administering, managing, and overseeing the program. At the five installations that we audited, the assistance available to the program coordinator ranged from no staff at two locations to one full-time assistant at two locations. Considering that the coordinators are responsible for procurement programs involving thousands of transactions and millions of dollars, as shown in table 1, the inadequacy of human capital resources is apparent. The Army does not have guidance on the appropriate human capital resources for the program coordinator’s office. However, the program coordinators told us, and our observations confirmed, that with current resources, time was not available to conduct systematic reviews of approving officials’ activities, much less undertake other management analyses and oversight activities. They each said that their time was generally consumed with administrative duties such as training new cardholders, issuing appointment letters, setting up accounts for new cardholders, monitoring delinquencies, interacting with the bank to resolve problems, and interacting with cardholders to answer questions about the purchase card program. These administrative activities are necessary to operate the purchase card program, but they do not provide routine oversight. As previously discussed, the Director, Purchase Card Joint Program Management Office, recognized in his July 5, 2001, memorandum the need to assess the adequacy of resources and asked that the services conduct an assessment of the policies and guidelines that are in effect to assist commanders and directors in the proper allocation of resources to the purchase card program. He asked that the assessment be conducted in the coming weeks (emphasis added). However, the Army program office could not identify any such assessments.
By the end of our fieldwork, the five installations and five major commands included in our work had not conducted any studies or assessments to address the question of appropriate resources. As opposed to the specifically designated role of the program coordinator, approving official responsibilities generally fall into the category of “other duties as assigned,” without any specific time allocated for their performance. We found that approving officials generally had many other duties of a higher priority than monitoring purchases and reviewing their cardholders’ purchase card statements. Also, many approving officials are responsible for a large number of cardholders. A large workload, especially one in an “other duties as assigned” category, can lead to less attention than expected or desired. We found that a number of approving officials at the installations we visited had numerous cardholders reporting to them. For example, at Fort Hood, 29 billing officials had 10 or more cardholders. Two of the 29 had over 20 cardholders. At Eisenhower Army Medical Center, one approving official had 18 cardholders, one of whom was spending about $100,000 per month for surgical supplies and equipment. The approving official said he simply did not have time to review each cardholder’s monthly bills and transactions each month. At the Texas Army National Guard, 16 of the 26 approving officials had 10 or more cardholders, and 8 of them had 25 or more. The number of cardholders that these approving officials were responsible for far exceeded the Army’s suggested maximum of 7 cardholders per approving official, as discussed earlier. The DOD Inspector General also reported a problem with approving officials having too many cardholders.
In a March 2002 report on the purchase card program, the Inspector General reported that 1,816 approving officials, or 8.8 percent of the Army’s 20,709 officials, were assigned more than 7 cardholders and that 21 of them were assigned more than 100 cardholders. A large span of control for approving officials is not conducive to thorough review of each cardholder’s monthly statement. The August 3, 2001, memorandum from the Acting Deputy Assistant Secretary of the Army (Procurement) to the contracting community cited earlier stated that approving “officials are the first line of defense against fraud, waste, and abuse, as they are required to review each of their cardholder statements. If they have too many cardholders under their purview there is no way these officials can perform the required reviews and attendant certifications of cardholder purchases.” In the February 1, 2002, memorandum cited earlier from the Director of the Purchase Card Joint Program Management Office to agency program coordinators, the director requested program coordinators’ help in ensuring that approving officials’ span of control is commensurate with their ability to adequately perform their responsibilities. The memorandum said that approving officials should have a reasonable span of control over the cardholders they supervise and approving officials must be given adequate time for a complete monthly review to determine that each charge is legal and proper. Following completion of our fieldwork, the installations we audited reported that they had begun to bring their approving officials’ span of control into line with the criteria. Insufficient authority to encourage compliance. The program coordinators at the five installations we audited generally did not have the grade level or organizational authority—“clout”—to enforce compliance with purchase card procedures.
At the five installations we audited, the program coordinators were part of the installation’s contracting operation and reported to the director of contracting, from whom they derived their authority. However, we believe that the program coordinators’ grade levels were not commensurate with their responsibilities or sufficient to provide the authority needed to enforce purchase card program rules. Only one of the five was a GS-12, two were GS-9s, and two were GS-7s. Program coordinators have the primary responsibility for purchase card program management and significant control over procurement activities carried out by a large number of individuals. For example, table 1 shows that the Fort Hood program coordinator has responsibility for overseeing a program of over 110,000 purchase card transactions totaling about $58 million and carried out by 321 approving officials and 1,242 cardholders. In addition to the relatively low grades, the Army has not made the program coordinator position career enhancing by making it part of a contracting career path. At three of the five installations, program coordinator position descriptions were for traditional contracting positions, although their coordinator duties are unique. Two coordinators had locally developed position descriptions that included their coordinator responsibilities, but these descriptions still carried traditional contracting titles. At the Soldier, Biological and Chemical Command – Natick, the program coordinator’s position description, which was written to justify a GS-12 grade, was for a procurement analyst and specifically included program coordinator duties. However, the director of contracting said that obtaining approval for the position took much discussion and persuasion because of its uniqueness. At Fort Benning, the program coordinator’s position description was developed specifically for the local position. Internal control activities help ensure that management’s directives are carried out. 
The control activities should be effective and efficient in accomplishing the agency’s control objectives. GAO's Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1, November 1999) Our work shows that critical internal control activities and techniques over the purchase card program were ineffective at the five installations we audited. Based on our tests of statistical samples of transactions, we determined that the transaction-level control activities and techniques we tested were not effective, rendering purchase card transactions at the five installations vulnerable to potentially fraudulent and abusive purchases and theft and misuse of government property. Control activities occur at all levels and functions of an agency. They include a wide range of diverse activities such as approvals, authorizations, verifications, reconciliations, performance reviews, and the production of records and documentation. For the Army purchase card program, we opted to test those control activities that we considered to be key in creating a system to provide reasonable assurance that transactions are correct and proper throughout the procurement process. The key control activities and techniques we tested include advance approval of purchases, independent receiving (receiving and acceptance of goods and services by someone other than the cardholder), independent review by an approving official of the cardholder’s monthly statements and supporting documentation, and cardholders obtaining and providing invoices that support their purchases and provide the basis for reconciling cardholder statements. Table 2 summarizes the results of our statistical testing. Our work showed internal control activity failures in both the purchase and payment samples, although the percent of failure—the failure rate—was generally higher for purchase card transactions.
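The failure rates summarized in table 2 are estimates projected from statistical samples of transactions. As a minimal sketch of that arithmetic (the failure count and sample size below are hypothetical figures chosen for illustration, not results from this report), a point estimate and a normal-approximation 95 percent confidence interval can be computed as:

```python
import math

def failure_rate_estimate(failures, sample_size, z=1.96):
    """Point estimate and 95% normal-approximation confidence
    interval for a control-failure rate from a simple random sample."""
    p = failures / sample_size
    half_width = z * math.sqrt(p * (1 - p) / sample_size)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

# Hypothetical sample: 45 control failures in 150 tested transactions.
p, lo, hi = failure_rate_estimate(45, 150)
print(f"{p:.0%} (95% CI {lo:.0%} to {hi:.0%})")  # → 30% (95% CI 23% to 37%)
```

GAO's actual projections would reflect its sampling design; this sketch simply shows how a sample failure count becomes an estimated rate for an installation's full population of transactions.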
In addition to the internal control activities we tested statistically, we noted two other internal control-related problems during our work. First, the purchase card exacerbates the long-standing difficulties of maintaining property records over accountable property. Second, cardholders did not always maintain purchase card transaction records as required by regulations. Key duties and responsibilities need to be divided or segregated among different people to reduce the risk of error or fraud. This should include separating the responsibilities for authorizing transactions, processing and recording them, reviewing the transactions, and handling any related assets. Simply put, no one individual should control all the key aspects of a transaction or event. GAO's Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1, November 1999) Without Army-wide operating procedures, requirements for advance approval are not consistent but do exist, to some extent, at each of the five audited installations. Two major commands and three installations specifically require advance approval. Others required written descriptions of purchases and appropriate coordination and review prior to the purchases. Advance approval requirements also varied within individual units at the installations and by individual approving officials. The requirements were generally for informal approval directed toward ensuring budget and funds control as well as establishing a valid need for a purchase so that cardholders are not acting totally independently. The approvals that we saw included e-mails from a cardholder’s supervisor as well as a request for a purchase initiated by someone other than the cardholder. 
For example, the Soldier, Biological and Chemical Command – Natick used an electronic system to manage its purchase card activity, making it easy for Natick employees to request a cardholder to make a purchase and for supervisors, unit heads, resource managers, and logistics personnel to have knowledge of and approve the request with a few computer keystrokes. Approval of a purchase can range from a blanket approval for routine small dollar purchases of items such as office supplies to a one-time written approval for specific large dollar items. For example, at Fort Hood, some units have a blanket approval for routine, small dollar purchases, such as office supplies under $300. For our testing of advance approval, we accepted reasonable documented evidence that a cardholder’s supervisor or other responsible person had requested and/or approved the purchase. This included a request for purchase from a responsible official, and it also included specific blanket approval for routine purchases within set dollar limits. As table 3 shows, we estimated that the failure rate at the five installations ranged from 25 percent at the Soldier, Biological and Chemical Command – Natick to 69 percent at the Texas Army National Guard. Although the failure rate was unacceptably high overall, it was particularly high for micropurchases, even though some of those purchases were for computers, electronic devices, and other items for which advance approval would appear warranted because the procurement was not routine. We believe that leaving cardholders solely responsible for a procurement without some type of documented approval puts the cardholders at risk and makes the government inappropriately vulnerable. A segregation of duties so that someone other than the cardholder is involved in the purchase improves the likelihood that both the cardholders and the government are protected from fraud, waste, and abuse.
We believe that advance approval is an appropriate internal control activity, especially considering that many cardholders in our audit were administrative personnel and not supervisors or managers. In testing advance approval as a control activity, we are not advocating a return to the formal advance approval that DOD has de-emphasized in the purchase card program. A February 1997 study of the purchase card program identified DOD’s requirement for formal prepurchase approval documentation through the administrative chain of command for each purchase card transaction as an impediment to expanded use of the purchase card. The formal prepurchase documentation that previously existed could impede purchases and increase costs. The more informal practices that exist at the installations we audited eliminate much of the previous formal documentation, but still can serve to protect the cardholders and the government. For example, blanket approval for routine purchases within set dollar limits involves minimal cost, but reasonable control. For nonroutine purchases involving significant expenditures, advance approval, even through informal processes, appears to be an important control activity. Key duties and responsibilities need to be divided or segregated among different people to reduce the risk of error or fraud. This should include separating the responsibilities for ... handling any related assets. Simply put, no one individual should control all the key aspects of a transaction or event. GAO's Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1, November 1999) Independent receiving—receiving of goods and services by someone other than the cardholder—provides additional assurance that purchased items are not acquired for personal use and that the purchased items come into the possession of the government.
The requirement for documentation of independent receiving by someone other than the cardholder was not generally addressed in the procedures of the commands and installations we audited. However, installations and units within the installations often required some documentation of independent receiving of at least some portion of their purchases. At Fort Benning, instructions in various units required documentation of independent receiving. At Fort Hood, the Department of Public Works had established the same type of requirement. The Fort Hood official in the department told us he established the requirement for independent receipt because, while in a prior job at another installation, he had observed potentially fraudulent purchases that would have been prevented if the independent receipt requirement had existed. Because Army guidance does not address the issue of evidence of independent receiving, and the requirements varied at the five installations, we accepted, as evidence of independent receiving for this test, any signature or initials of someone other than the cardholder on the sales invoice, packing slip, bill of lading, or other shipping or receiving document. Table 4 shows the results of our testing. As shown above, the five installations we audited generally did not have independent, documented evidence that the items ordered and paid for with the purchase card had been received. This lack of documented, independent receiving extended to all types of purchases, including computers and other expensive or highly pilferable items. We believe that documented independent receiving is a basic internal control activity that provides additional assurance to the government that purchased items come into the possession of the government. Transactions and other significant events should be authorized and executed only by persons acting within the scope of their authority.
This is the principal means of assuring that only valid transactions to exchange, transfer, use, or commit resources and other events are initiated or entered into. GAO's Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1, November 1999) Control activities ensure that only valid transactions … are initiated or entered into …. Control activities are established to ensure that all transactions … that are entered into are authorized and executed only by employees acting within the scope of their authority. GAO’s Internal Control Standards: Internal Control Management and Evaluation Tool (GAO-01-1008G, August 2001) Approving official review is a recognized control activity at all levels of the purchase card program. DOD’s purchase card joint program office, major command procedures, and the installations’ operating procedures recognize that the approving official review is central to ensuring that purchase card transactions are appropriate. Army guidance requires approving officials to review and certify each cardholder’s monthly transactions. The August 3, 2001, memorandum discussed earlier described the approving official review process as the “first line of defense” against misuse of the card. The responsibilities of the approving official involve two overlapping functions: (1) reviewing the cardholder’s transactions to provide reasonable assurance that, among other things, the transactions are legal, proper, and correct in that appropriate procurement procedures were followed and that supporting documentation and records, including supporting invoices, are adequate; and (2) certifying the cardholder’s transactions for payment. An appropriate approving official review, at a minimum, would facilitate certification; however, certification by itself does not ensure that the desired review occurred. Certification is likely to occur even if the required reviews are not made because certification is necessary for payment.
Section 2784 of title 10, United States Code, requires the Secretary of Defense to issue regulations controlling the use of government credit cards within the department. The statute requires that these regulations be consistent with “regulations that apply government-wide regarding use of credit cards by government personnel for official purposes.” The regulations that apply governmentwide are in the Treasury Financial Manual. Section 4535 of Volume I of the manual provides that the cardholder and approving official will review the cardholder statement of account received at the end of each monthly billing cycle. The cardholder statement must be submitted to the billing office early enough to permit the billing office to process and pay the consolidated monthly invoice within the Prompt Payment Act deadline. The provision directs the billing office to pay the consolidated invoice on time, “even if all cardholder statements are not received….” As part of our work, we asked the DOD Under Secretary of Defense (Comptroller) for his views on DOD’s compliance with these statutory requirements. In a letter dated April 30, 2002, the Principal Deputy and Deputy Under Secretary of Defense for Management Reform stated that DOD's Financial Management Regulation, various purchase card reengineering memorandums, and other pronouncements together complied with section 2784. DOD’s regulations are consistent with the governmentwide regulations regarding the responsibilities of cardholders and approving officials. Therefore, if cardholders and approving officials are not reviewing and reconciling their statements of account in time for disbursing offices to process payments on time, they are not complying with Treasury and DOD requirements. 
We noted numerous cases during our audit where the approving official certified the billing statement for payment but had not examined the transactions or the documentation supporting them to determine whether the transactions were correct and for a valid government purpose. In one case, a note on an approving official’s certified billing statement said that the approving official had not reviewed the transactions. Accordingly, certification for payment was made without the required reconciliation. In that instance, certification was clearly nothing more than a “rubber stamp.” Consequently, we tested for other evidence that the billing official had reviewed the cardholders’ transactions. Without such evidence, neither we, nor internal auditors, nor program coordinators who are required to annually review approving officials’ records, can determine whether approving officials are complying with review requirements or simply certifying the statement without the required review. For this test, we accepted virtually any markings, notes, or dates, other than the certification signature, on the transactions listed on the cardholder’s or approving official’s bill as documentation that a review had occurred. In instances of appropriately documented reviews, we found evidence of the approving official checking off each transaction in the cardholder’s statement and the supporting documentation for each, and signing the cardholder’s statement as having reviewed it. Instances in which the documentation was not available included missing statements, missing invoices, and statements without any marks by either the cardholder or the approving official to indicate that a reconciled statement had been prepared or submitted to the approving official. Our testing revealed that documented evidence of approving officials’ review of cardholders’ transactions and their reconciled statements did not exist for most of our sample transactions.
The failure rate at each of the five installations we audited was high, as table 5 shows. The high failure rate is of particular concern for this control activity because it is perhaps the most important in providing reasonable assurance that purchases are appropriate and for a legitimate government need. Although of concern, the high failure rates are not unexpected because major command and local standard operating procedures, while recognizing the importance of approving official review, do not specify the required extent, content, or documentation of approving officials’ reviews. In addition, the high failure rate may be attributable to approving official responsibilities falling into the category of “other duties as assigned” and to approving officials being responsible for a large number of cardholders, as previously discussed. A large workload, especially one in an “other duties as assigned” category, almost inevitably leads to less attention than expected or desired. For example, the previously mentioned cardholder at Eisenhower Army Medical Center, who was the approving official for 18 cardholders, one of whom spends about $100,000 monthly for surgical supplies and medical equipment, told us that he had not reviewed the cardholders’ records because he did not have time. We examined that cardholder’s records as part of our control activity testing and found that the records were in disarray. Numerous transactions did not have invoices. Other transactions had invoices with prices that differed from the cardholder’s log but were not reconciled. Subsequent to our audit, the program coordinator worked with the approving official’s manager to reduce the workload by appointing additional approving officials. Our discussions with approving officials indicated that some reviews had been made, but we could not determine the frequency or extent of the reviews because they were not documented. Without documentation, the lack of a review can go unnoticed.
For example, at the Texas Army National Guard, approving officials’ subordinates frequently performed cardholder statement reviews because of the large number of cardholders for whom each approving official was responsible. When one of these subordinates was absent due to an extended illness, no one performed reviews of the transactions, and the approving official did not notice because reviewers were not required to document their work. According to Guard officials, this problem was addressed after our inquiries by appointing more approving officials and directing approving officials to personally review their cardholder transactions. We identified numerous instances of purchases that clearly had not been adequately reviewed and reconciled to the statement, but the statements were, nonetheless, certified for payment. Such failures allow potentially fraudulent, improper, abusive, and questionable purchases, which are discussed in more detail in the following section of this report, to go undetected. The following are two examples of such unauthorized charges that we identified. A Fort Hood cardholder purchased 15 wire storage containers in April 2001. The vendor incorrectly included $808 of shipping and handling charges in the $2,748 bill. The approving official certified the statement for payment including the erroneous shipping and handling charges. Apparently, neither the cardholder nor the approving official reviewed the transaction in sufficient detail. After we detected the erroneous charges in November 2001, about 7 months after the original charge, a refund was obtained. Another approving official at Fort Hood certified for payment a $539 charge on a May 2001 statement for a purchase from a catering company. After our inquiry about an invoice for the purchase, the approving official determined that the charge was inappropriate and a refund was made. Approving official review of a reconciled statement should have detected this inappropriate charge.
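The reconciliation an approving official is expected to perform amounts to a simple matching check: pair each billed transaction with its supporting invoice and flag anything missing or mismatched before certifying the statement for payment. The sketch below illustrates the idea; the data structures, transaction identifiers, and amounts are illustrative assumptions, not drawn from any actual Army billing system.

```python
# Hypothetical reconciliation check: match each billed transaction to a
# supporting invoice and flag exceptions for the approving official.
# Transaction IDs and amounts are illustrative only.

def reconcile(billed, invoices):
    """Return a list of exceptions to resolve before certifying the
    statement: transactions with no invoice, or with amounts that do
    not match the invoice."""
    exceptions = []
    for txn_id, billed_amount in billed.items():
        if txn_id not in invoices:
            exceptions.append((txn_id, "missing invoice"))
        elif invoices[txn_id] != billed_amount:
            exceptions.append(
                (txn_id,
                 f"billed ${billed_amount:.2f} but invoiced "
                 f"${invoices[txn_id]:.2f}")
            )
    return exceptions

# Example resembling the Fort Hood case: $808 of erroneous shipping and
# handling billed on top of a $1,940 order, plus a charge with no invoice.
billed = {"T1001": 2748.00, "T1002": 539.00}
invoices = {"T1001": 1940.00}  # T1002 has no invoice at all

for txn_id, problem in reconcile(billed, invoices):
    print(txn_id, problem)
```

A check of this kind would have surfaced both example charges in the report: the inflated wire-container bill as a price mismatch and the catering charge as a transaction lacking documentation.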
We believe that the approving official’s review of the cardholders’ purchases is a vital internal control activity. Without documentation of such review, neither we, internal auditors, nor program coordinators can determine the extent to which the approving official is carrying out review responsibilities. Internal control and all transactions and other significant events need to be clearly documented, and the documentation should be readily available for examination. All documentation and records should be properly managed and maintained. GAO's Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1, November 1999) Essentially, the Army requires that an invoice support purchase card transactions. Thus, the invoice is a key document in purchase card internal control activities. Throughout the major commands’ and installations’ procedures, the need for obtaining and retaining an invoice is recognized. Without an invoice, independent evidence of the description and quantity of what was purchased and the price paid is not available. In addition, the invoice is the basic document that is required to be attached to the cardholder’s monthly statement during a cardholder’s reconciliation and prior to approving official review. In testing for evidence of an invoice, we accepted either the original or a copy of the invoice, sales slip, or other store receipt. Table 6 shows the results of our testing. The following missing invoice example illustrates the questions that can arise when an invoice is not available. As part of our Army-wide data mining, we identified several types of vendors that cardholders are generally prohibited from using. We identified four transactions for which the monthly billing indicated charges at a jewelry store in Kuwait—one category of prohibited vendors: three purchases totaling $4,365 and a credit of $1,353 for returned merchandise.
Upon inquiry into this transaction, Army officials said that the purchase was for mattresses for a vessel prepositioned in the area. However, they also said that the transaction file did not contain a detailed invoice to allow us—or the approving official who was located in the United States—to confirm that mattresses were, indeed, the merchandise purchased, and if so, how many and at what unit price. Without such an invoice, a thorough investigation is needed to determine whether this transaction was proper, potentially fraudulent, improper, or abusive. The failure rates for evidence of invoice were lower than those for the other internal control activities we tested. However, we believe that even these failure rates are unacceptable for such a key document. A valid invoice showing what was purchased and the price paid is a basic document for any transaction, and a missing invoice is an indicator of potential fraud. Without an invoice, two key control activities—independent receiving and approving official review—become ineffective. Independent receiving cannot confirm that the purchased items were received, and the approving official cannot review a cardholder statement reconciled with the supporting invoice. A near zero failure rate is a reasonable goal considering that invoices are easily obtained or replaced when inadvertently lost. An agency must establish physical control to secure and safeguard vulnerable assets. Examples include security for and limited access to assets such as cash, securities, inventories, and equipment which might be vulnerable to risk of loss or unauthorized use. Such assets should be periodically counted and compared to control records.
GAO's Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1, November 1999) Consistent with GAO’s internal control standards, DOD’s Property, Plant and Equipment Accountability Directive and Manual, which was issued in draft for implementation on January 19, 2000, requires accountable property to be recorded in property records as it is acquired. In addition to high-cost property items, accountable property also includes easily pilferable or sensitive items, such as computers and related equipment, cameras, cell phones, and power tools. Recording these items in the property records is an important step to ensure accountability and financial control over these assets and, along with periodic inventory, to prevent theft or improper use of government property. At each of the five installations we visited, we found that accountable items acquired by purchase cards were not recorded in property records. In addition, officials at four of the five installations could not readily locate property items. While some of the items were located after considerable searching, others such as computers and printers were not. Some or all of the items might, in fact, be at the installation; however, without positive assurance, there is substantial risk that items were converted to personal use or sold. Property items not recorded in the property books and not found demonstrate a weak control environment and problems with the property management system. Table 7 shows the results of our work. Effectively managing accountable property has long been a problem area, and the use of the purchase card has added further difficulties. With over 100,000 Army cardholders, the number of people buying accountable property has greatly expanded. Cardholders are responsible for reporting on the accountable property they buy so that it is recorded in the installation’s accountable property records, but they often do not.
For example, property book officers at Fort Hood and the Texas Army National Guard told us that a major problem with property bought in a purchase card transaction is that cardholders do not properly notify property book officers and/or provide documentation supporting the purchases. At Fort Hood, cardholders are required by the installation’s purchase card procedures to obtain transaction document numbers for purchases of equipment items prior to making the purchases, but the requirement is frequently ignored. Further, we noted that the installations we audited generally did not record memorabilia, such as pictures of famous people and framed jerseys of sports stars. Some of these items cost hundreds of dollars and are desirable, easily pilfered items. Because of its long-standing problems, property management has been the subject of internal audits at the installations we audited. At the Soldier, Biological and Chemical Command - Natick, as a result of an internal audit of property accountability, the logistics office had worked for over a year to improve its management of accountable property. We believe that the attention focused on accountable property management was the reason that the installation had the best result in our audit. Others had not done as well in correcting their problems. A Fort Benning internal audit completed in April 2001 found that 84 percent of the accountable items purchased with a purchase card had not been recorded on the property book. An ongoing internal audit by the Texas Army National Guard was finding similar property accountability problems. At Eisenhower Army Medical Center, an evaluation of the center’s logistics operations estimated that $2 million to $5 million of accountable property acquired with the purchase card was not on the center’s property books. Cardholders have little incentive to undertake the required coordination and reporting on property items because of the additional work involved.
Our work showed that items received centrally by logistics officials are more likely to be recorded on the property books. Thus, central receiving appears to mitigate the risk that cardholders fail to ensure that accountable property is recorded, and it may be worth pursuing across the board or for certain asset types. In addition, we believe that robust monitoring and oversight activities of the purchase card program that include examining how well cardholders are fulfilling their property management responsibilities could help improve property management related to the purchase card program. Internal control and all transactions and other significant events need to be clearly documented, and the documentation should be readily available for examination ... All documentation and records should be properly managed and maintained. GAO's Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1, November 1999) During our work we noted several instances in which cardholders and approving officials had not maintained purchase card transaction files for 3 years as required by the Federal Acquisition Regulation, Part 4, Section 4.805. In our testing, the records most often missing were the ones for cardholders who had left the installation. Either the cardholders destroyed the records prior to leaving or the replacement cardholder destroyed them because they were not the new cardholder’s records. At Eisenhower Army Medical Center, a replacement cardholder destroyed the departed cardholder’s files because the office had little room to store old files and the new cardholder did not see the need to store someone else’s files. In some cases, we were told that the departed cardholders took the records with them to their new installations. In other cases, the records were lost when units were deployed. Regardless of the causes, the records were not available for our inspection and records retention requirements were not complied with.
In those instances, we could not document that internal control activities had been carried out. Although we found no concrete indications of fraud in these situations, the lack of records raises concerns about whether the files were destroyed so that potentially fraudulent, improper, or abusive transactions would not be documented. For example, in one case in which the cardholder had left the Army, we found charges during the last month of the cardholder’s military service from the installation’s liquor store and vendors such as Wal-Mart stores that sell a multitude of potentially personal items. The replacement cardholder told us that the purchases were probably for a unit party, but the timing of the purchases along with missing documentation does not allow ruling out the possibility that items may have been bought for personal use. Moreover, the unit’s purchase card records for this period were in disarray, and invoices and other documentation that could verify items purchased or aid further assessments of the propriety of the purchases were not available. Buying items with purchase cards without the requisite control environment creates unnecessary risk of improper outlays, ranging from outright fraudulent purchases to purchases of questionable need for the unit’s mission or at unnecessary expense. We identified purchases at the installations we audited that were potentially fraudulent, improper, and abusive or questionable, which can result from a weak control environment and weak internal control activities. As discussed in appendix I, our work was not designed to identify, and we cannot determine, the extent of potentially fraudulent, improper, and abusive or otherwise questionable transactions. However, considering the control weaknesses identified at each installation, such transactions are likely occurring and have not been detected.
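The Army-wide data mining described earlier, which screened billing data for transactions at vendor types that cardholders are generally prohibited from using (jewelry stores, for example), can be thought of as a merchant-category filter over the bank's transaction feed. In the sketch below, the merchant category codes (MCCs) are standard card-network codes, but the prohibited list and the sample transactions are illustrative assumptions, not the Army's actual screening criteria.

```python
# Hypothetical screen for transactions at generally prohibited vendor
# categories. The MCC values shown are standard card-network codes, but
# the prohibited list and sample data are illustrative only.

PROHIBITED_MCCS = {
    "5944": "jewelry stores",
    "7273": "dating and escort services",
    "5813": "drinking places (bars)",
}

def flag_prohibited(transactions):
    """Return (cardholder, amount, category) for each transaction whose
    merchant category is on the prohibited list, for referral to review."""
    return [
        (t["cardholder"], t["amount"], PROHIBITED_MCCS[t["mcc"]])
        for t in transactions
        if t["mcc"] in PROHIBITED_MCCS
    ]

sample = [
    {"cardholder": "CH-01", "mcc": "5944", "amount": 4365.00},  # jewelry
    {"cardholder": "CH-02", "mcc": "5111", "amount": 182.50},   # office supply
    {"cardholder": "CH-03", "mcc": "7273", "amount": 630.00},   # escort service
]

for cardholder, amount, category in flag_prohibited(sample):
    print(cardholder, amount, category)
```

As the report's jewelry-store and escort-service cases suggest, a flagged category code is only a lead: the transaction may still be legitimate (the Kuwait "jewelry store" charges were reportedly for mattresses), so each hit needs invoice-level follow-up.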
In addition to the purchases identified at the audited installations, our Army-wide data mining of selected transactions identified additional cases of potentially fraudulent, improper, and abusive or questionable transactions. The Army has no information as to the extent of potentially fraudulent purchases that have been identified or are being investigated within the purchase card program. We identified instances of potentially fraudulent transactions at three of the five installations we audited and in our Army-wide data mining, as table 8 shows. Some of the potentially fraudulent transactions were identified in response to our inquiries. Others were identified or were being investigated independently of our audit. We considered potentially fraudulent purchases to include those made by cardholders that were unauthorized and intended for personal use. Potentially fraudulent purchases can also result from compromised accounts in which a purchase card or account number is stolen and used by someone other than the cardholder to make a potentially fraudulent purchase. Potentially fraudulent transactions can also involve vendors charging purchase cards for items that cardholders did not buy. The installations we audited had policies and procedures that were designed to prevent and/or detect potentially fraudulent purchases, such as the requirement that approving officials review the supporting documentation for each transaction for legality and proper government use of funds. However, as discussed earlier, our testing showed that these control activities had not been implemented as intended. Although collusion can circumvent what otherwise might be effective internal control activities, a robust system of guidance, internal control activities, and oversight can create a control environment that provides reasonable assurance of preventing or quickly detecting fraud, including collusion.
However, in auditing the Army’s internal control at five installations during fiscal year 2001, we did not find the processes and activities that provide such assurance. The following examples of fraud illustrate the cases in table 8. At Eisenhower Army Medical Center, an Army investigation initiated near the end of our work has revealed an estimated $100,000 of potentially fraudulent purchases. The investigation began when an alternate cardholder received an electronic game station that had been ordered by another cardholder who was away on temporary duty. The alternate cardholder, noting that the purchase did not appear to be for government use, notified the program coordinator, who notified the local Army criminal investigations division. The ensuing investigation revealed that the military cardholder, approving official, and several other soldiers and civilians colluded to purchase numerous items including computers, digital cameras, an audio surround system, a 32-inch television, a stereo system, and other items for personal use. A Fort Benning military cardholder charged $30,000 for personal goods and cash advances before and after retirement. Because these 178 transactions went undetected, it appears that the approving official’s certification was only a “rubber stamp” and was not based on a review of the cardholder’s bill, reconciliations, and supporting documentation. The approving official not only failed to detect these potentially fraudulent transactions while the cardholder was on active military duty, but also failed to notice that charges were continuing to be made after the cardholder retired. At Eisenhower Army Medical Center, a military cardholder defrauded the government of $30,000 from April 25 to June 20, 2001. The cardholder took advantage of a situation when the cardholder’s approving official was on temporary duty for several months.
The cardholder believed that the alternate approving official would certify the statement for payment without reviewing the transactions or their documentation. With this belief, the cardholder purchased a computer, purses, rings, and clothing. These fraudulent transactions were not discovered until the resource manager who monitored the unit’s budget noticed a large increase in spending by the cardholder. The cardholder had destroyed all documentation for the 3-month period during which these transactions took place. However, investigators found merchandise and invoices that showed the cardholder had used the government credit card. The cardholder was court-martialed in April 2002 and sentenced to 18 months’ incarceration. These fraudulent transactions might not have occurred if the cardholder had known that the approving official would review the transactions. At a minimum, prompt approving official review would have detected the fraudulent transactions. Over a 6-month period in 2001, a civilian cardholder made 62 unauthorized transactions totaling $12,832 to pay for repairs to a car and buy groceries, clothing, and various other items for personal use. We were told that the cardholder colluded with the gas station vendor, who inflated the prices paid for items and received a kickback. The approving official identified this case by reviewing the cardholder’s August 2001 transactions. The fraud went undetected for several months because the approving official had not reviewed the cardholder’s bills and supporting documentation for over 5 months. The approving official has been relieved of approving official duties and reprimanded. The investigation into the fraud was ongoing at the end of our fieldwork. In our Army-wide data mining, we identified a cardholder transaction for $630 on June 15, 2001, that was coded as being an escort service.
In response to our inquiry on this transaction, we were informed that no authorization existed for the transaction and that it was with an escort service in New Jersey. In discussions with provost marshal officials, we were informed that the cardholder had been investigated in February 2002 because of money missing from chapel funds. The provost marshal’s office, after our March 2002 inquiry about the $630 transaction, investigated it and other suspicious charges by the cardholder. The investigators could not get an invoice from the vendor. Their investigations revealed no other fraudulent, improper, abusive, or questionable transactions. They determined that for a short period, the cardholder was also serving as the billing official and that it was during this period that the fraudulent transaction with the escort service occurred. Disciplinary actions included removing the soldier from cardholder duties, reducing his rank, taking one-half month’s pay for 2 months, requiring 45 days extra duty, and ordering repayment of the funds. During June 2001 at Fort Hood, several purchases of prepaid telephone cards and pizza totaling $524 were made and certified for payment by a new approving official who did not realize that the cardholder had separated from the Army in early 2001. In attempting to respond to our request for supporting information for one of the transactions, the approving official recognized that the charges were potentially fraudulent. In the subsequent investigation, an investigator found that the purchase card account was still active in December 2001. This case remained under investigation as of January 2002. In addition to the potentially fraudulent cases identified by our work, we attempted to obtain other examples of potentially fraudulent activity in the Army purchase card program from the Army’s Criminal Investigation Command in Washington, D.C. However, data on the command’s investigations were not available.
Further, while Army investigators acknowledge that they have investigated a number of fraud cases, their database on investigations does not allow retrieval of data on investigations involving potentially fraudulent use of purchase cards. Purchase card program officials and Army investigation command officials said that they had no information on the total number of fraud investigation cases throughout the Army that had been completed or were ongoing. Based on our identification of a number of potentially fraudulent cases at the installations that we audited, we believe that the number of cases involving potentially fraudulent transactions could be significant. Without such data, the Army does not know the significance of fraud cases that have been or are being investigated and cannot take corrective actions, to the extent possible, to prevent similar potentially fraudulent cases in the future. Our work identified transactions that were improper, including split purchases and purchases from nonmandatory sources. Improper transactions are those purchases that, although approved by Army personnel and intended for government use, are not permitted by law, regulation, or DOD policy. We identified three types of improper purchases. One type was purchases that did not serve a legitimate government purpose. Another type was split purchases, in which the cardholder circumvents cardholder single purchase limits. The Federal Acquisition Regulation guidelines prohibit splitting purchase requirements into more than one transaction to avoid the need to obtain competitive bids on purchases over the $2,500 micropurchase threshold or to circumvent higher single transaction limits for payments on deliverables under requirements contracts. The third type was purchases from an improper source. Various federal laws and regulations require procurement officials to acquire certain products from designated sources, such as vendors under the Javits-Wagner-O'Day Act (JWOD).
The program created by this act is a mandatory source of supply for all federal entities. It generates jobs and training for Americans who are blind or have other severe disabilities by requiring federal agencies to purchase supplies and services from nonprofit agencies, such as the National Industries for the Blind and the National Institute for the Severely Handicapped. We found several instances of purchases, such as clothing, in which cardholders purchased goods that were not authorized by law or regulations. The Federal Acquisition Regulation provides that the governmentwide commercial purchase card may be used only for purchases that are otherwise authorized by law or regulations. Therefore, a procurement using the purchase card is lawful only if it would be lawful using conventional procurement methods. Under 31 U.S.C. 1301(a), “[A]ppropriations shall only be applied to the objects for which the appropriations were made….” In the absence of specific statutory authority, appropriated funds may only be used to purchase items for official purposes and may not be used to acquire items for personal benefit. The improper transactions, as shown in table 9, were identified as part of our review of fiscal year 2001 transactions and related activity. We identified most of them as part of our Army-wide data mining of transactions with questionable vendors, although several were identified as part of our work at the five audited installations. The following examples illustrate the types of improper transactions included in table 9. We identified purchases of clothing by the Soldier, Biological and Chemical Command – Natick that should not have been purchased with appropriated funds. According to 5 U.S.C.
7903, agencies are authorized to purchase protective clothing for employee use if the agency can show that (1) the item is special and not part of the ordinary furnishings that an employee is expected to supply, (2) the item is essential for the safe and successful accomplishment of the agency’s mission, not solely for the employee’s protection, and (3) the employee is engaged in hazardous duty. Further, according to a Comptroller General decision dated March 6, 1984, clothing purchased pursuant to this statute is property of the U.S. government and may only be used for official government business. Thus, except in rare circumstances in which a purchase meets these stringent requirements, clothing is usually considered a personal item for which appropriated funds should not be used. In one transaction, a cardholder had purchased 10 L.L. Bean Gore-Tex parkas at a total cost of about $2,400 for employees who worked outside in cold weather. These parkas were not specialty items, and they were not restricted to official use. The employees were allowed to take the parkas home and wear them in off-duty hours. In another example of clothing for personal use from our Army-wide data mining, several charges for amounts from $330 to $770 were identified at Macy’s and Hecht’s. We were informed that these were for purchases of civilian clothes for enlisted personnel who are serving as assistants to general officers. The Director, Purchase Card Unit, Defense Contracting Command Washington, told us that this appears to be a fairly widespread practice that is clearly improper and is believed to violate fiscal law. As part of our data mining of Army-wide purchase card transactions, we identified a questionable transaction for which a subsequent investigation determined that a cardholder had purchased a Bose radio for $523 to use in his office.
The radio was clearly for his personal use in his office; therefore, it should not have been purchased with the Army purchase card. The employee was required to reimburse the U.S. Treasury for the cost of the radio. A broader review of the purchases made by this cardholder’s unit revealed other problems similar to those we identified in our work, such as lapses in property accountability, failure to purchase from mandatory sources, purchases at excessive cost, and missing records. As with our other work, these problems indicate that approving officials were not adequately reviewing cardholder transactions. In our Army-wide data mining, we identified charges by two cardholders under one approving official of about $7,600 at the Opryland Hotel in Nashville, Tennessee, in November 2000. In response to our inquiry, we were told that these charges were unexpected and resulted from the Chief Information Officer’s Management Conference at the hotel in August 2000. The charges were unexpected because registration fees were to cover all charges. After a large bill of over $20,000 was eventually reduced to about $7,600, the supervisor instructed the cardholders to pay the additional charges with their purchase cards. However, to keep each charge under the $2,500 limit and avoid obvious split purchases, the bill was broken into segments and divided between two cardholders. The amounts paid were $2,500 for long-distance phone calls, $934 for phone line hookup, $2,500 for meeting room rental, and $1,715 for audiovisual services. However, because separate invoices for the charges do not exist, the officials can neither support the correct amount of any charges nor support that the charges are for purposes permitted by law, regulation, or policy. Fort Hood paid improper and excessive cell phone charges because no one was monitoring them.
The information management division’s approving official certified the installation’s consolidated monthly charges for payment for over 1,100 cell phones and over $50,000 in monthly time charges without reviewing the usage. While local procedures require the units using the cell phones to verify their own monthly usage, the procedures do not address how and when this is to be done. We found that some units had not routinely verified the charges. Others, who said they usually did verify their charges, could not do so for the period November 2001 through March 2002 because a change in the phone company’s billing processes did not allow the units to have access to their monthly charges. Without reviewing the charges, the Army has no assurance that charges are proper and not excessive. We reviewed current usage and identified excessive monthly time charges and charges for phones that had no monthly usage. For example, one cell phone user, who had a $79.95 per month plan that allowed 650 minutes of airtime, used 3,400, 2,696, and 1,915 minutes during a 3-month period and incurred time charges of $1,040, $795, and $523. Fort Hood officials told us that improper and excessive charges occur because units do not have the appropriate monthly plan. In the above example, the unit could have reduced its costs to $550, $374, and $200 (a 52 percent savings) with an appropriate plan. Fort Hood officials also told us that excessive costs occur because of personal use. They said that when they identify charges for unauthorized personal use, they require employees to reimburse the government for these improper charges. However, without monitoring, use of uneconomical plans and unauthorized personal use would not be identified.
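The plan-cost arithmetic behind the Fort Hood example can be sketched in a few lines. The $0.35-per-minute overage rate below is an assumption chosen to be roughly consistent with the charges cited; the report itself gives only the $79.95, 650-minute plan, the three months of usage, and the resulting charges.

```python
def monthly_cost(fee, included_minutes, overage_rate, minutes_used):
    """Monthly fee plus per-minute charges for airtime over the allowance."""
    overage = max(0, minutes_used - included_minutes)
    return fee + overage * overage_rate

# The plan cited in the report; the overage rate is an assumption that
# roughly reproduces the cited charges of $1,040, $795, and $523.
usage = [3400, 2696, 1915]
actual = [monthly_cost(79.95, 650, 0.35, m) for m in usage]

# The report states an appropriately sized plan would have cost $550, $374, and $200.
reduced = [550, 374, 200]
savings = 1 - sum(reduced) / sum(actual)
print(f"{savings:.0%}")  # about the 52 percent savings the report cites
```

This kind of comparison is what routine monitoring of monthly usage against plan allowances would have surfaced.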
Another category of improper transaction is a split purchase, which occurs when a cardholder splits a transaction into more than one segment to avoid the requirement to obtain competitive bids for purchases over the $2,500 micropurchase threshold or to avoid other established credit limits. The Federal Acquisition Regulation prohibits this practice. Once items exceed the $2,500 threshold, they are to be purchased through a contract in accordance with simplified acquisition procedures, which are more stringent than those for micropurchases. Our analysis of data on purchases at the five installations we audited and our data mining efforts identified numerous occurrences of potential split purchases. In addition, internal auditors at four of the installations identified split purchases as a continuing problem. In some of these instances, the cardholder’s purchases exceeded the $2,500 limit, and the cardholder “split” the purchase into two or more transactions of $2,500 or less. For example, in our Army-wide data mining, we identified a series of split purchases at Fort Stewart, Georgia. An approving official had two cardholders spend $16,000 over a series of days to buy numerous pieces of executive office furniture for the official’s office, which was located on the mezzanine of a warehouse. These purchases included elegant desks, chairs, and a conference table. We also identified numerous cases where the Army is making repetitive micropurchases to meet requirements that in total greatly exceed the micropurchase limit. While some repetitive purchases might not clearly be split purchases, the Army is not taking advantage of a mechanism designed to foster lower prices for repetitive acquisitions of similar items over an extended period.
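Both patterns described above, same-day split purchases and repetitive micropurchases from a single vendor, lend themselves to simple automated screens of transaction data. The sketch below is illustrative only: the transaction fields and the $25,000 annual threshold for flagging blanket purchase agreement candidates are assumptions, not criteria from the audit.

```python
from collections import defaultdict

MICRO_LIMIT = 2500.00  # micropurchase threshold

def flag_potential_splits(transactions):
    """Flag same-cardholder, same-vendor, same-day purchases that each stay
    at or under the micropurchase limit but together exceed it.
    Each transaction is a (cardholder, vendor, date, amount) tuple."""
    groups = defaultdict(list)
    for cardholder, vendor, date, amount in transactions:
        groups[(cardholder, vendor, date)].append(amount)
    return [key for key, amounts in groups.items()
            if len(amounts) > 1
            and all(a <= MICRO_LIMIT for a in amounts)
            and sum(amounts) > MICRO_LIMIT]

def flag_bpa_candidates(transactions, annual_threshold=25000.00):
    """Flag vendors whose charges are individually micropurchases but whose
    annual total suggests pursuing a blanket purchase agreement.
    The annual_threshold is an assumed screening value."""
    totals = defaultdict(float)
    for _, vendor, _, amount in transactions:
        if amount <= MICRO_LIMIT:
            totals[vendor] += amount
    return {v: t for v, t in totals.items() if t > annual_threshold}

# Hypothetical transactions echoing the patterns in this section.
txns = [
    ("cardholder_a", "FurnitureCo", "2000-11-06", 2450.00),
    ("cardholder_a", "FurnitureCo", "2000-11-06", 2400.00),
    ("cardholder_b", "DoorVendor", "2001-02-01", 2300.00),
] + [("cardholder_b", "DoorVendor", f"2001-03-{d:02d}", 1400.00) for d in range(1, 21)]

print(flag_potential_splits(txns))  # [('cardholder_a', 'FurnitureCo', '2000-11-06')]
print(flag_bpa_candidates(txns))    # {'DoorVendor': 30300.0}
```

A production screen would also need to look across nearby days and across cardholders sharing an approving official, since, as the Fort Stewart and Opryland examples show, splits are often spread over several days or between cardholders.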
Section 13.303-1 of the Federal Acquisition Regulation provides for blanket purchase agreements as a “simplified method of filling anticipated repetitive needs for supplies or services.” Use of a blanket purchase agreement, rather than repetitive, individual micropurchases, could lower per-unit prices for the goods or services acquired. Below we discuss four situations in which blanket purchase agreements should have been used. At the Soldier, Biological and Chemical Command – Natick, the public works department routinely used the same vendors 35 times to provide and install carpeting, 25 times to provide heating and air conditioning services, and 39 times to provide graphic display services. Although each of the transactions for these vendors was under the micropurchase limit, the total purchases for fiscal year 2001 were about $38,000, $44,000, and $77,000, respectively. However, the installation did not have a blanket purchase agreement with the vendors. In these instances, the public works department officials had not recognized that they needed such agreements, and they agreed to pursue them. We noted that the installation had blanket purchase agreements for other similar circumstances that the internal auditor identified. At the Texas Army National Guard, the Occupational Health Office used purchase cards to pay for routine medical examinations and to buy ergonomic chairs and safety glasses for employees. While the individual cost of each purchase was much less than the micropurchase limit, the total annual cost significantly exceeded the limit. For instance, the office paid over $80,000 for about 705 examinations at a dozen clinics and hospitals throughout the state in the first 10 months of fiscal year 2001. The guard also used purchase cards to pay for meals provided to troops while they attended mandatory weekend drills or training.
While the individual cost for any single guard unit’s training meal would rarely exceed the single-purchase $2,500 limit, the total recurring cost of the meals is one of the guard’s largest annual expenses—over $500,000 in the first 10 months of fiscal year 2001. At Fort Benning, the Dismounted Battlespace Battle Lab, a combat training unit, routinely purchased doors that were destroyed during training exercises instructing troops in how to enter a building that may contain an enemy. The battle lab spent $111,721 in 84 transactions with one vendor to buy doors during a 10-month period in fiscal year 2001, but the unit did not have a blanket purchase agreement. In this case, battle lab officials had refused attempts by the Fort Benning contracting division and purchase card program coordinator to execute an agreement. We found that the battle lab also needed a contract for the numerous computer modems it purchased. Further, the battle lab cardholder’s purchasing pattern was to split purchases to avoid the $2,500 micropurchase limit. We saw numerous instances in which the cardholder made more than one purchase near the limit for the same item over a short period. In a data-mining example, the Army Personnel Command made repetitive buys of interment flag cases from the same vendor. Data show that three purchases, two for $2,250 and one for $1,800, were made on the same day and that in total the command purchased 438 cases for $65,700 in calendar year 2001. The command has agreed that future purchases will be made on a yearly basis under a competitive contract. Another type of improper purchase occurs when cardholders do not buy from a mandatory procurement source. Various federal laws and regulations require government cardholders to acquire certain products from designated sources.
For example, the program created by JWOD generates jobs and training for Americans who are blind or have other severe disabilities by requiring federal agencies to purchase supplies and services furnished by nonprofit agencies, such as the National Industries for the Blind and the National Institute for the Severely Handicapped. Under the Federal Acquisition Regulation, Part 8.7, JWOD is a mandatory source of supply for all entities of the government. Unlike the Buy American Act and other rules that have been waived by recent procurement reform measures, JWOD’s mandatory status remains in effect for all purchases, including those under the micropurchase threshold. Most JWOD items are of small value, such as office supplies, cleaning products, or medical/surgical supplies, that nearly always fall into the micropurchase category. While procurement source was not the primary focus of our work, we noted that cardholders frequently did not purchase from required sources when they should have. For example, we noted numerous purchases of office supplies or other JWOD-supplied products from local vendors when these or substantially similar products were available from the General Services Administration or one of its contractors’ catalogs or Web sites. We also noted that some cardholders did not know their responsibilities or the requirements, despite the fact that these requirements are a primary emphasis during cardholder training programs. For example, some said that they had not heard of JWOD or of either of the designated nonprofit agencies from which cardholders should buy. As further evidence of cardholders’ noncompliance with this mandatory source requirement, the Director of Sales for the National Industries for the Blind told us about large decreases in sales of JWOD products at Fort Hood and other Army installations over the past 2 years because cardholders were purchasing from commercial firms rather than buying the mandatory products.
The following two examples involving Franklin Covey illustrate the situations we found. In our data mining work, we identified a cardholder at Tooele Army Depot who made 10 purchases for a total of about $11,900 from Franklin Covey, with most of the purchases in August 2001. These purchases were primarily for inserts to day planners, an item that is available from the JWOD catalog. In response to our questions as to why the mandatory source was not used, we were advised that (1) in the past JWOD planners were not used by the self-service store’s customers because they did not include pages with dates for each day and (2) under an interpretation that is now recognized to be in error, the purchases were made from another source under the premise that planners from JWOD did not meet customer needs. We were informed that future purchases of planners would be made in one purchase through JWOD. In another case, a unit spent $3,100 over an 18-month period to purchase day planners from Franklin Covey. One item cost $199 and another $250. In contrast, cardholders can buy JWOD day planners for about $40. In fiscal year 2001, the Army made more than 4,700 purchases costing about $792,000 from Franklin Covey. A review of individual purchases, which we did not perform, would be required to determine which purchases were for items that should have been bought from a mandatory source. However, we believe it is likely that many of these purchases could have been for JWOD products. We identified numerous examples of abusive or questionable transactions at each of the five installations we audited. We defined abusive transactions as those that were authorized, but the items purchased were at an excessive cost (e.g., “gold plated”) or for a questionable government need, or both. When abuse occurs, no law or regulation is violated. Rather, abuse occurs when the conduct of a government organization, program, activity, or function falls short of societal expectations of prudent behavior.
Often, improper purchases such as those discussed in the previous section are also abusive. For example, the executive furniture purchases at Fort Stewart, discussed earlier as improper split purchases, were also abusive. We believe that this type of furniture was not in keeping with the office environment and not justified by the official’s position or grade level. Another example is the excessive cell phone charges at Fort Hood. Questionable transactions are those that appear to be improper or abusive but for which there is insufficient documentation to conclude either. For questionable items, we concluded that cardholders purchased items for which there was no reasonable or documented justification. Questionable purchases often do not easily fit within generic governmentwide guidelines on purchases that are acceptable for the purchase card program. They tend to raise questions about their reasonableness. Many, such as gym-quality exercise equipment, are common Army—and DOD—purchases because the Army must provide more than merely a work environment for its soldiers. However, others, like the fine china purchased for the culinary arts team competition discussed below, clearly raise questions about whether they are appropriate purchases. Precisely because these types of purchases tend to raise questions and subject the Army to criticism, they require a higher level of prepurchase review and documentation than other purchases. These types of purchases raise questions that go beyond the confines of the purchase card program. When we examined purchases that raised these types of questions, we usually did not find evidence of prepurchase justification. When asked whether purchases were acceptable, improper, or abusive, program coordinators, approving officials, and cardholders often provided an after-the-fact rationale for the purchases.
We believe that these types of questionable purchases require scrutiny before the purchase, not after. Table 10 identifies examples of these types of purchases. To understand more fully the nature of potentially questionable purchases, we selected six of the examples above to explain in more detail below. Palm Pilots for Pentagon officials. In February 2001, two purchases for a total of 80 Palm Pilots at a total cost of $30,000 were made for the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics. Two questions about this purchase are whether a valid need had been identified for the purchase and whether the urgency of the purchase justified buying from a vendor that could deliver immediately but was charging $1,540 more than the lowest competitor. No documentation was available to show how the office had determined that 80 Palm Pilots were a valid government requirement. An e-mail related to the purchase suggested that there was a need “to get enough goodies for everyone.” The documentation also suggested that the items were being ordered for inventory and would be issued to personnel when requested. This does not indicate a predetermined requirement and does not appear to support the office’s determination that the requirement was urgent. Based on the determination of urgency, the price paid was $1,540 more than the lowest competitor’s price so that delivery could be immediate. Culinary arts. At Fort Hood and during our Army-wide data mining effort, we noted several purchases for various culinary arts events. Among the purchases were fine china and crystal from Royal Doulton and Lenox. Other purchases were for accessories such as a rotating lighted ice-carving pedestal. Although participation in culinary arts team events is an approved Army activity, the transactions we examined and inquired about did not have documentation of the need for the specific items purchased.
Although the transactions we examined totaled about $3,800, we believe that the total cost of such purchases Army-wide is far more. We were told that purchases of culinary arts accessories are common throughout the Army. One reason, we were told, is that most installations have culinary arts teams that attend competitions involving the use of expensive accessories and fine crystal and china. Sunglasses for the Golden Knights parachute team. In February 2001, a cardholder purchased 30 pairs of sunglasses from Sunglass Hut at about $100 each for a net cost of $2,450 (some glasses were returned for credit from a prior purchase) for the Golden Knights. In response to our inquiry about this purchase, we were told that it was not preapproved and that sunglasses were authorized in the common table of allowances when they are needed for training. However, because goggles are worn during parachute jumps, not sunglasses, we believe these purchases were for personal-use items and thus of questionable government need. The approving official for the transaction believed that the purchase was appropriate. According to the official, the parachute team has 85 members and the purchase was for new members. Tree for Earth Day. The Environmental, Safety and Health Office at the Soldier, Biological and Chemical Command – Natick bought a $2,250 tree to plant in celebration of Earth Day. Although this transaction did not have documented approval prior to purchase or a documented justification of its need, we were told that the tree was purchased for the commanding general to plant among a grove of other trees between two installation buildings during an Earth Day celebration. While planting a tree for Earth Day may be an acceptable expenditure of government funds, we believe the expenditure of over $2,200 for a tree is excessive. Cigars. In April 2001 a cardholder at Schofield Barracks in Hawaii purchased three boxes of Hula Girl Cigars for $300.
According to information provided in response to our inquiry about the purchase, the cardholder bought the cigars for gifts to VIPs to be presented by the Commanding General, 25th Infantry Division, Schofield Barracks, during deployment on an exercise in Thailand. The purchaser was an acting protocol officer during a changeover in officers and did not have an approving official reviewing the purchases. No documentation was available from the Army to demonstrate that this purchase was a valid government need. The current Chief of Protocol said that no other cigars had been purchased. Wine. A cardholder purchased two cases of wine on September 20, 2001, from the Naked Mountain Vineyard. After we questioned this purchase, the Army concluded that the cardholder had used the wrong card to purchase the wine, but it had corrected the error to put the purchase in the correct accounting classification. An Army official assured us that the purchase was appropriately authorized by “competent authority in the course of execution of a highly classified, compartmented program.” We were provided no evidence that this purchase was a valid government need. We support the use of a well-controlled purchase card program. It is a valuable tool for streamlining the government’s acquisition processes. However, the Army program is not well controlled. The Army’s weak control environment was the root cause of the problems we saw with purchase card transactions, including the potentially fraudulent, improper, and abusive or questionable purchases. The Army has not provided the aggressive leadership needed to build and maintain an internal control infrastructure that encourages a strong control environment that provides accountability. Such an environment is an important counterbalance to the increased risk of potentially fraudulent and wasteful spending that results from the rapidly expanding use of the purchase card. 
The Army now spends billions of dollars through a purchase card program for which internal control is not adequate and for which appropriate management oversight does not exist. The Army needs to ensure that installation-level program coordinators, the primary program management officials, have the tools to develop local control systems and oversight activities. Strengthening the control environment will require a renewed focus on, and commitment to, building a robust purchase card infrastructure. The installations and major commands we audited have been responsive to our findings, and they have begun to make changes at their levels. However, the major changes to the Army purchase card program infrastructure that are essential to encouraging and enabling improvements in the overall control environment await action at the Army and DOD management levels. To strengthen the overall control environment and improve internal control for the Army’s purchase card program, we recommend that the Secretary of the Army direct the Deputy Assistant Secretary of the Army (Procurement) and other Army officials as appropriate to improve the overall Army purchase card infrastructure by taking the following actions. Address key control environment issues in Army-wide standard operating procedures. At a minimum, the following key issues should be included in the procedure: controls over the issuance and assessment of ongoing need for cards; cancellation of cards when a cardholder leaves the Army, is reassigned, or no longer has a valid need for the card; span of control of the approving official; and appropriate cardholder spending limits. Help ensure that program coordinators and approving officials have the needed authority, including grade level, to serve as the first line of defense against purchase card fraud, waste, and abuse by issuing a policy directive that specifically addresses their positions, roles, and job descriptions. 
Policies should also be established that hold these officials accountable for their purchase card program duties through performance expectations and evaluations. Assess the adequacy of human capital resources devoted to the purchase card program, especially for oversight activities, at each management level, and provide needed resources. Develop and implement a program oversight system for program coordinators that includes standard activities and analytical tools to be used in evaluating program results. Develop performance measures and goals to assess the adequacy of internal control activities and the oversight program. Require reviews of existing cardholders and their monthly spending limits to help ensure that only those individuals with valid continuing purchasing requirements possess cards and that the monthly spending limits are appropriate for the expected purchasing activity. These reviews should result in canceling unneeded cards Army-wide and especially at Fort Hood where we found a significant problem. Direct the implementation of specific internal control activities for the purchase card program in an Army-wide standard operating procedure. 
While a wide range of diverse activities can contribute to a system that provides reasonable assurances that purchases are correct and proper, at a minimum, the following activities should be included in the promulgated procedure: advance approval of purchases, including blanket approval for routine, low dollar purchases; independent receiving and acceptance of goods and services; independent review by an approving official of the cardholder’s monthly statements and supporting documentation; approving official reconciling the charges on the monthly statement with invoices and other supporting documentation and forwarding the reconciled statement to the designated disbursing office for payment as required by governmentwide and DOD regulations; and cardholders obtaining and retaining invoices that support their purchases and provide the basis for reconciling cardholder statements. Develop and implement procedures and checklists for approving officials to use in the monthly review of cardholders’ transactions. These procedures and checklists should specify the type and extent of review that is expected and the required review documentation. Reiterate records retention policy for purchase card transaction files and require that compliance with record retention policy be assessed during the program coordinator’s annual review of each approving official. Require the development and implementation of coordination and reporting procedures to help ensure that accountable property bought with the purchase card is brought under appropriate control. Require additional prior documented justification and approval of those planned purchases that are “questionable”—that fall outside the normal procurements of the cardholder in terms of either dollar amount or type of purchase. Analyze the procurements of continuing requirements through micropurchases and require the use of appropriate contracting processes to help ensure that such purchases are acquired at best prices. 
Develop an Army-wide database on known fraud cases that can be used to identify potential deficiencies in existing internal control and to develop and implement additional control activities, if warranted or justified. Develop and implement an Army-wide data mining, analysis, and investigation function to supplement other oversight activities. This function should include providing oversight results and alerts to major command and installations when warranted. We also recommend that the Under Secretary of Defense (Comptroller) direct the Charge Card Task Force to assess the above recommendations, and to the extent applicable, incorporate them into its recommendations to improve purchase card policies and procedures throughout DOD. In written comments on a draft of this report, which are reprinted in appendix III, DOD concurred with our recommendations. Although concurring with our recommendation for an Army-wide standard operating procedure directing the implementation of specific internal control activities, DOD took exception to broad application of advance approval of purchases and independent receiving and acceptance of goods and services in an Army-wide standard operating procedure. DOD said that broad application of those activities would add costs to the process without a comparable reduction in risk. However, DOD recognized the applicability of these activities in some circumstances and commented that the Army standard operating procedure will (1) include a list of items requiring advance approval and (2) require advance approval for a category of items that fall outside the “common sense” rule. We continue to believe that both advance approval and independent receiving are important internal control activities and have applicability to the Army purchase card program, including many micropurchases. 
We recognize that not all purchases require specific advance approval and some small dollar and other purchases may not lend themselves to documented independent receiving. Therefore, the Army-wide standard operating procedure should (1) discuss the criteria for determining when these control activities are applicable and (2) articulate guidelines for implementing them. As agreed with your offices, unless you announce its contents earlier, we will not distribute this report until 30 days from its date. At that time, we will send copies to interested congressional committees; the Secretary of Defense; the Under Secretary of Defense for Acquisition, Technology, and Logistics; the Under Secretary of Defense (Comptroller); the Secretary of the Army; the Assistant Secretary of the Army for Acquisition Logistics and Technology; the Deputy Assistant Secretary of the Army (Policy and Procurement); the Director of the Army Contracting Agency; the Director of the Defense Finance and Accounting Service; and the Director of the Office of Management and Budget. We will make copies available to others upon request. Please contact Gregory D. Kutz at (202) 512-9505 or kutzg@gao.gov, Ronald D. Malfi at (202) 512-7420 or malfir@gao.gov, or David Childress at childressj@gao.gov if you or your staffs have any questions concerning this report. Major contributors to this report are acknowledged in appendix IV. We audited the adequacy of the Army’s internal control over authorization, purchasing, and payment of fiscal year 2001 purchase card transactions. The Army’s purchase card program is the largest of the services, with the most cardholders, transactions, and dollars spent. We are also performing audits of the other services and will report the results of those audits separately. 
For the Army, we performed work in the major commands that have the largest purchase card programs, accounting in fiscal year 2001 for about 66 percent of total Army purchases and about 62 percent of total Army transactions. We conducted detailed work at the following major commands and installations. At the Army and major command levels we evaluated the policies and procedures used to guide the purchase card program, and we evaluated the activities they engage in to oversee the program. At the installation level, we used a case study approach to evaluate the local purchase card program, and our work there consisted of three major segments. We evaluated the overall control environment, including the adequacy of the Army’s policies and procedures. We evaluated the implementation of key internal control activities at the installations. Finally, we identified evidence of potentially fraudulent, improper, or abusive or questionable transactions at each audited installation and conducted limited follow-up. To assess the control environment, we examined the installations’ policies and procedures and oversight activities. To assess their adequacy, we used as our primary criteria applicable laws and regulations; our Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1, November 1999); and our Internal Control Standards: Internal Control Management and Evaluation Tool (GAO-01-1008G, August 2001). To assess the management control environment, we applied the fundamental concepts and standards in our internal control standards to the practices followed by management. To test the implementation of specific control activities at the five installations we audited, we selected a stratified random probability sample of 150 purchase card transactions from the population of transactions paid from October 1, 2000, through July 31, 2001, for each of the installations.
With these statistically valid probability samples, each transaction in the five installations’ populations had a nonzero probability of being included, and that probability could be computed for any transaction. Within each installation we stratified the population of transactions by the dollar value of the transaction and by whether the transaction was likely to be for a purchase of computer-related equipment. Each sample transaction in an installation was subsequently weighted in the analysis to account statistically for all the transactions in the population of that installation, including those that were not selected. For each transaction sampled, we tested whether key internal control activities had been performed. For each control activity tested, we projected an estimate of the percent of transactions for which the control activity was not performed, for each installation. Because we followed a probability procedure based on random selections of transactions, our sample for each installation is only one of a large number of samples that we might have drawn. Since each sample could have produced different estimates, we express our confidence in the precision of our particular samples’ results (that is, the sampling error) as 95 percent confidence intervals. These are intervals that would contain the actual population value for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true (unknown) values in the study populations. Although we projected the results of our samples to the populations of transactions at the respective installations, the results cannot be projected to the population of Army transactions or installations. For the sampled transactions that were for accountable items, we tested whether they had been recorded in the installation’s property book records and whether the installation could demonstrate the item’s existence. 
We did not project the results of this test because some transactions contained so many accountable items—as many as 500—that we elected to perform a nonstatistical analysis of the degree to which these items were recorded in property books. In addition to our review of a statistical sample of transactions at the five audited installations, we also identified other selected transactions at the five locations and throughout the Army’s fiscal year 2001 purchase card transactions to determine if indications existed of potentially fraudulent, improper, and abusive or questionable transactions. Our data mining included identifying transactions with vendors that were more likely to sell unauthorized or personal items. For a small number of these transactions at each of the five installations and from the Army-wide database, we requested limited documentation, usually the supporting invoice, that could provide additional indications as to whether the transactions were potentially fraudulent, improper, and abusive or questionable. If the additional documentation indicated that the transactions were proper and valid, we did not further pursue documentation on those transactions. If the additional documentation was not provided or if it indicated further issues related to the transactions, we obtained and reviewed additional documentation or information about these transactions. While we identified some potentially fraudulent, improper, and abusive or questionable transactions, our work was not designed to identify, and we cannot determine, the extent of potentially fraudulent, improper, or abusive transactions. Because of the large number of transactions that met these criteria, we did not look at all potential abuses of the purchase card.
For those potentially fraudulent transactions that had been or were being investigated at the five audited installations, we discussed the cases with the investigators and/or obtained records and reports on the investigations. We also interviewed purchase card officials and Army criminal investigators to identify other Army purchase card fraud cases that had been or were being investigated. We did not audit the Defense Finance and Accounting Service’s purchase card payment process. We also did not audit electronic data processing controls used in processing purchase card transactions. The installations received paper monthly bills containing the charges for their purchases and used manual processes for much of the period we audited, which reduced the importance of electronic data processing controls. We briefed DOD managers, including officials in DOD’s Purchase Card Joint Program Management Office, major command purchase card program coordinators, and purchase card program officials at the installations we audited on the details of our review, including our objectives, scope, and methodology and our findings. Written comments on a draft of this report were received from the Acting Director of the Army Contracting Agency and have been reprinted in appendix III. We conducted our audit work from June 2001 through April 2002 in accordance with generally accepted government auditing standards, and we performed our investigative work in accordance with standards prescribed by the President’s Council on Integrity and Efficiency, as adapted for GAO’s work. The Army’s purchase card program is part of the Governmentwide Commercial Purchase Card Program, which was established to streamline federal agency acquisition processes by providing a low-cost, efficient vehicle for obtaining goods and services directly from vendors. 
It was intended to shorten the time between need and acquisition while providing management with monthly reports and a thorough audit trail of all purchases. Under a General Services Administration blanket contract, the Army has contracted with U.S. Bank for its purchase card services. DOD reported that it used purchase cards for about 10.7 million transactions, at a cost of over $6.1 billion, during fiscal year 2001. The Army’s reported purchase card activity totaled about 4.4 million transactions, valued at $2.4 billion, during fiscal year 2001. This represented about 40 percent of DOD’s activity for fiscal year 2001. The Army’s purchase card transactions were made with Visa cards issued to over 109,000 civilian and military personnel. DOD has mandated the use of the purchase card for all purchases at or below $2,500, and it has authorized the use of the card to pay for larger purchases. DOD has experienced significant growth in the program since its inception and now estimates that approximately 95 percent of its micropurchase transactions in fiscal year 2001 were made by purchase card. The purchase card can be used for both micropurchases and payment of other purchases. Although most cardholders have limits of $2,500, some have limits of $25,000 or higher. The Federal Acquisition Regulation, Part 13, “Simplified Acquisition Procedures,” establishes criteria for using purchase cards to place orders and make payments. DOD and the Army have supplements to this regulation that contain sections on simplified acquisition procedures. U.S. Treasury regulations govern purchase card payment certification, processing, and disbursement. DOD’s Purchase Card Joint Program Management Office, which is in the Office of the Assistant Secretary of the Army for Acquisition Logistics and Technology, has issued departmentwide guidance related to the use of purchase cards. However, each service has its own policies and procedures governing the purchase card program. 
Within the Army, the overall management responsibility for the purchase card program is under the cognizance of the agency program coordinator within the Purchase Card Joint Program Management Office. However, the role of this agency program coordinator and the office is limited, and most management responsibility lies with the contracting offices at the major commands and installations. At the installation, the program coordinator is responsible for administering and overseeing the purchase card program within his or her designated span of control and serving as the communication link between the Army unit and the purchase card-issuing bank. The other key personnel in the purchase card program are the approving officials and the cardholders. They are responsible for implementing internal controls to ensure that transactions are appropriate. Figure 2 illustrates the general design of the purchase card processes for the Army. The overall process begins with the cardholder ordering or purchasing a good or service. It ends with payment of the bill by the Defense Finance and Accounting Service. A purchase cardholder is the Army military service member or civilian employee who has been issued a purchase card that bears the cardholder’s name and the assigned account number. Before the card is issued, the cardholder is to receive training on purchase card policies and activities. Each cardholder has an established daily and monthly credit limit and is designated to make purchases at selected types of vendors. The cardholder is expected to safeguard the purchase card as if it were cash. Purchase cardholders are delegated limited contracting officer-ordering responsibilities, but they do not negotiate or manage contracts. Cardholders use purchase cards to order goods and services for their units as well as their customers.
Cardholders may pick up items ordered directly from the vendor or request that items be shipped directly to receiving locations or end users. The approving official is responsible for providing reasonable assurance that all purchases made by the cardholders within his or her cognizance were appropriate and that the charges are accurate. The approving official is supposed to resolve all questionable purchases with the cardholder before certifying the bill for payment. In the event an unauthorized purchase is detected, the approving official is supposed to notify the program coordinator and other appropriate personnel within the command in accordance with the command procedures. After reviewing the monthly statement, the approving official is to certify the monthly invoice and send it to the Defense Finance and Accounting Service for payment. The purchase card payment process begins with receipt of the monthly purchase card billing statements from the bank. Section 933 of the National Defense Authorization Act for Fiscal Year 2000, Public Law 106-65, requires DOD to issue regulations that ensure that purchase cardholders and each official with authority to authorize expenditures charged to the purchase card reconcile charges with receipts and other supporting documentation. Army memos and regulations provide that upon receipt of the individual cardholder statement, the cardholder is to reconcile the transactions appearing on the statement by verifying their accuracy against receipts and other supporting documentation and notify the approving official in writing of any discrepancies in the statement. Before the credit card bill is paid, the approving official is responsible for (1) providing reasonable assurance that all purchases made by the cardholders within his or her cognizance are appropriate and that the charges are accurate and (2) certifying the monthly billing statement in a timely manner for payment by the Defense Finance and Accounting Service.
The approving official must review and certify for payment the monthly billing statement, which is a summary invoice of all transactions of the cardholders under the approving official’s purview. Upon receipt of the certified monthly purchase card summary statement, a Defense Finance and Accounting Service vendor payment clerk is to (1) review the statement and supporting documents to confirm that the prompt-payment certification form has been properly completed and (2) subject it to automated and manual validations. The Defense Finance and Accounting Service effectively serves as a payment processing service and relies on the approving official certification of the monthly payment as support to make the payment. The Defense Finance and Accounting Service vendor payment system then batches all of the certified purchase card payments for that day and generates a tape for a single payment to U.S. Bank by electronic funds transfer. Staff making key contributions to this report were Wendy Ahmed, William B. Bates, Bertram J. Berlin, James D. Berry, Jr., Johnny R. Bowen, Francine M. DelVecchio, Ronald M. Haun, James P. Haynes, Kenneth M. Hill, Fred Jimenez, Mitchell B. Karpman, Richard A. Larsen, Christie M. Mackie, Judy K. Pagano, John J. Ryan, and Sidney H. Schwartz.
The Army's purchase card program--the largest within the Defense Department--offers significant benefits, but weak internal controls have left the Army vulnerable to fraudulent, improper, and abusive purchases. The Army has yet to issue servicewide regulations or operating procedures, instead relying on ad hoc memoranda and other informal guidance. The Army also does a poor job of overseeing the purchase card program. The Army lacks the infrastructure--guidance and human capital--needed for effective program oversight. GAO identified several improper transactions involving clothing, food, and other items. GAO also identified improper purchases in which cardholders made a large number of purchases of similar items to circumvent the mandated limit of $2,500 for a single purchase.
It is perfectly legal for U.S. taxpayers to hold money offshore. It is illegal, however, for a taxpayer to fail to disclose substantial offshore holdings, to fail to report income earned in the United States and “hidden” through offshore arrangements, or to fail to report income earned offshore to IRS on the taxpayer’s tax return. If U.S. taxpayers own an offshore business such as a foreign corporation, they are required to disclose that holding to IRS on their tax return. When applied to abusive transactions, IRS generally uses the term “offshore” to mean a country or jurisdiction that offers financial secrecy laws in an effort to attract investment from outside its borders. When referring to a financial institution, “offshore” refers to a financial institution that primarily offers its services to persons domiciled outside the jurisdiction of the country in which the financial institution is organized. Abusive offshore schemes are often accomplished through the use of limited liability companies (LLC), limited liability partnerships (LLP), international business corporations (IBC), and trusts, as well as foreign financial accounts, debit or credit cards, and other similar instruments. According to IRS, the schemes can be complex, often involving multiple layers and multiple transactions used to conceal the true nature and ownership of the assets or income that the taxpayer is attempting to hide from IRS. IRS has multiple programs and techniques for selecting potentially noncompliant tax returns for examination. One source is a computer model designed to predict which returns, if audited, would be most likely to result in additional taxes owed. Other sources that prompt an examination include referrals from inside or outside IRS, information from third parties, and indications of fraud or noncompliance from other audits. Once IRS has identified a return for an examination, the classification process begins.
Classification is the process of determining whether a return should be selected for examination, what issues should be examined, and how the examination should be conducted. IRS guidance on classification states that classification should be conducted by an experienced examiner. Examination is the accumulation of evidence for evaluating the accuracy of the taxpayer’s tax return. Examiners gather facts to correctly determine a taxpayer’s tax liability. Evidence can include the taxpayer’s testimony and books and records as well as the examiner’s own observations and documents from third parties. Methods for accumulation of evidence include analytical tests, documentation, inquiry, inspection, observation, and testing. IRS procedures call for examiners to pursue an examination to the point where a reasonable determination of correct tax liability can be made. In turn, examiners prepare audit reports, which should contain all information necessary to ensure a clear understanding of the adjustment, if any, and document how the tax liability was computed. These reports serve as the basis for assessment actions. An assessment records the taxpayer’s liability due. IRS examinations are generally of one of three types—correspondence, office, or field. The simplest examinations usually cover one to two tax issues handled by a lower-graded examiner through correspondence. More complex examinations are done by meeting with taxpayers or their representatives in IRS offices. The most complex examinations are done through revenue agent field visits to taxpayer locations. Only about 16 percent of all IRS examinations from 2002 through 2005 were conducted through field examinations, but 98 percent of offshore examinations were of this type. About three-fourths of nonoffshore examinations are handled through correspondence. 
IRS does not classify every return that is filed, nor does it examine every case file that is classified, even if IRS determines that examining the tax return would likely yield an assessment of additional taxes owed. Figure 1 provides a notional representation of the process of taking the over 130 million individual income tax returns that were filed in fiscal year 2004 through the steps that lead to audits of a much smaller number of those returns. In most cases, the law gives IRS 3 years from the date a taxpayer files a tax return to complete an examination and make an assessment of any additional tax. For example, if a taxpayer filed a tax return on April 15, 2000, IRS had until April 15, 2003, to finish any examination of that return and make an assessment of additional taxes owed by the taxpayer. This statute of limitations for assessments is in effect for all examinations with exceptions allowing longer periods for certain taxpayer actions or omissions such as fraud or substantial understatement of gross income (in excess of 25 percent of the amount of gross income stated on the return). Taxpayers may also waive the 3-year assessment limitation through written consent. In general, it takes longer for IRS to identify and examine tax returns involving abusive offshore transactions than IRS needs in nonoffshore cases because of the added complexity of examining offshore transactions. Where IRS is able to complete examinations involving abusive offshore transactions, they generally result in larger assessments than other types of examinations. IRS has policies in place to avoid violating the statute of limitations, and IRS enforcement personnel told us that these policies, in conjunction with the longer time needed to complete offshore examinations, mean some cases are never opened in the first place while others are not fully worked because the time allowed under the current statute is running out. 
As a result, they said, overall assessments for offshore cases are lower than they would be if IRS had more time to work these cases. IRS officials told us that cases involving offshore tax evasion present special, time-consuming challenges that other types of cases do not. Tax evasion, both domestic and offshore, often involves schemes with many layers of deception. IRS officials told us that for domestic tax evasion, revenue agents are able to issue summonses to domestic financial institutions to uncover the layers of deception the taxpayer created to hide the source and existence of the funds. In offshore cases, IRS generally does not have summons power over offshore financial institutions and is often unable to determine the owner of an offshore account or business or the source of the funds. Even in cases where IRS is able to determine information about offshore funds, an IRS manager told us that this process of discovery is much more time consuming than for nonoffshore cases. Unlike much nonoffshore tax evasion, most possible offshore tax evasion cases are not discovered through IRS’s computerized analysis of tax returns, but rather through investigations of promoters of offshore schemes. Officials told us that several divisions of IRS forward leads on the promoters of offshore schemes they discover to revenue agents, who develop the cases in order to discover the extent of the promoter’s use of offshore schemes. This process takes far longer than computer analysis-based methods of identifying potential noncompliance. After developing information that a promoter of offshore schemes illegally sold schemes to help taxpayers avoid their tax liability, IRS can refer that information to the Department of Justice, which can then file a complaint in the United States District Court requesting the court to issue an injunction against the promoter. In some cases, the injunction will compel the promoter to disclose the clients who purchased the scheme.
IRS officials told us that it can take years to get a client list from a promoter and, even with a client list, there is still much work that IRS needs to do before the clients of the offshore schemes can be audited. For example, IRS officials told us that they may only get limited information about the clients of offshore promoters, and often that information is limited to a name and perhaps the city and state where the client lives, so considerable time may be spent finding the individuals listed by the promoter. Time spent developing information on a return before putting it into the queue for examination shortens the time available to close the examination before the 3-year civil statute of limitations expires. Table 1 compares the median number of days spent in development for offshore and nonoffshore examinations from 2002 to 2005. As shown in the table, the median offshore case took 184 more calendar days than the median nonoffshore case to move from filing to examination. Comparing just field examinations, which constituted over 98 percent of offshore examinations in fiscal years 2002 through 2005, the difference in median development time was 96 days. Some examinations lead to additional examinations of the same taxpayer’s returns, such as when a revenue agent identifies noncompliance on one return and then reviews prior year returns looking for the same problem, or when a taxpayer files a new return while an examination is underway. To avoid overstating development time, this comparison includes only the number of days between the start of the examination and the filing date of the last return filed before the examination began. Once offshore cases are developed and moved into examination, the examinations take longer than nonoffshore cases. Considering all types of examinations together, the median offshore examination took 90 more days than the median nonoffshore examination. 
Considering field examinations alone, the median offshore field examination was 70 days longer than the median nonoffshore field examination, as shown in table 2. IRS officials told us that this is due to examination complexity and the difficulty of identifying and obtaining information from foreign sources. The total time that elapses between a return being filed and IRS’s closing of the examination of that return is referred to as total cycle time and provides another type of comparison between offshore and nonoffshore cases. As shown in table 3, the median offshore examination took almost 500 more calendar days overall to close than the median nonoffshore examination, a 126 percent difference. The median offshore case took 82 percent of the statute time versus 36 percent for nonoffshore cases. Considering just field examinations, the median cycle times for offshore and nonoffshore examinations were closer in length, but the median offshore examination was still 194 days longer, a difference of 28 percent. About half of all offshore examinations resulted in a recommended assessment of additional taxes due compared to approximately 70 percent of nonoffshore examinations. While less frequent, assessments from all types of offshore examinations—correspondence, office and field—had a median that was nearly 3 times larger than from nonoffshore examinations. Considering just field examinations, recommended assessments from offshore examinations also had a median that was much larger than nonoffshore examinations, though by a smaller margin, as shown in table 4. While yielding larger assessments, the greater amount of time spent on offshore examinations means that their yield per hour of direct examination time is lower. 
Considering all types of examinations together, including both those that resulted in an assessment and those that did not, offshore examinations yielded less per hour of direct examination time than nonoffshore examinations because the number of hours spent on those examinations is, on average, nearly 4 times as great. From 2002 to 2005, IRS examiners spent an average of 46 hours on all types of offshore examinations, compared to an average of only 12 hours for all types of nonoffshore examinations. Considering only field examinations, average hours per examination were 47 for offshore examinations versus 62 for nonoffshore examinations, and the difference in dollars per hour of direct examination time is greater. IRS has strict policies to prevent examinations from going past the statute of limitations because if an assessment is not made within 3 years, the statute of limitations bars IRS from making any assessment at all. Such instances mean the loss of revenue to IRS and inefficient use of IRS examination resources. IRS policies specify that statute expiration dates for all tax returns be properly determined, that all records be annotated with these dates, and that the cases be closely monitored to prevent accidentally running out of time. Revenue agents and managers told us that IRS strongly emphasizes the importance of keeping track of these dates and avoiding allowing an examination to go past the statute date. While the 3-year statute of limitations applies in most cases, some exceptions exist under current law. For example, an assessment may be made after the 3-year point if the tax return is false or fraudulent or if there is a sufficiently large omission of gross income. Taxpayers may also agree to waive their statute rights.
In the rare cases where IRS personnel allow an examination to go past the statute without meeting one of the current exceptions to the statute (a “barred statute”), the responsible agent and his or her manager must prepare a Barred Statute Report and face possible disciplinary action because of the examination time spent with no possibility of making an assessment. IRS data for fiscal years 2005 and 2006 showed 39 barred statutes associated with examinations where a manager made an initial determination to recommend a disciplinary action. As shown in table 6, most of these barred statutes ultimately resulted in some type of disciplinary action. IRS has created guidance for continuing offshore examinations past the 3-year point. This guidance permits agents to request permission to carry on the examination past the 3-year point based on their judgment that, given additional time, they will be able to ultimately prove that the examination meets one of the following three conditions:
1. The return is false or fraudulent. IRS defines false or fraudulent as the preparation and filing of false income tax returns by claiming inflated personal or business expenses, false deductions, unallowable credits, or excessive exemptions.
2. There is a sufficiently large omission of gross income (in excess of 25 percent of the amount of gross income stated on the return) under IRC 6501(e), in which case the tax may be assessed at any time within 6 years after the return is filed.
3. The taxpayer failed to notify the Secretary of the Treasury of certain foreign transfers under IRC 6501(c)(8), in which case the statute of limitations is 3 years from the date IRS receives the required information.
A conclusion to continue an examination beyond the statute must be approved in writing by IRS managers, based on the revenue agent’s documentation of the rationale and calculations to support this conclusion.
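The limitation periods at issue here can be illustrated with a small date calculation. The helper below is hypothetical and for illustration only: it mirrors the normal 3-year period, the 6-year period for omissions of gross income exceeding 25 percent, and the foreign-transfer rule described above, and it omits the fraud exception, under which tax may be assessed at any time.

```python
from datetime import date

def assessment_deadline(filed, omitted_income_pct=0.0, foreign_info_received=None):
    """Hypothetical sketch of the assessment periods described above.

    Normally IRS has 3 years from filing; 6 years if more than 25 percent
    of gross income was omitted (IRC 6501(e)); and 3 years from the date
    IRS receives required foreign-transfer information (IRC 6501(c)(8)).
    The fraud exception (no time limit) is omitted from this sketch.
    """
    if foreign_info_received is not None:
        return foreign_info_received.replace(year=foreign_info_received.year + 3)
    years = 6 if omitted_income_pct > 0.25 else 3
    return filed.replace(year=filed.year + years)

# A return filed April 15, 2000, can normally be assessed until April 15, 2003.
print(assessment_deadline(date(2000, 4, 15)))        # 2003-04-15
# With more than 25 percent of gross income omitted, IRS has 6 years.
print(assessment_deadline(date(2000, 4, 15), 0.40))  # 2006-04-15
```

Real statute tracking involves additional exceptions, waivers, and tolling rules; this sketch only reflects the periods described in this section.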
In addition, IRS must have made a timely and proper request to the taxpayer to obtain a consent agreement to extend the statute. The taxpayer’s refusal to extend the statute or lack of response must be documented. If this guidance is followed, no disciplinary action will be taken against the IRS managers and agents if the examination ultimately does not prove to meet one of the three conditions for making an assessment after 3 years. The IRS guidance allowing some examinations to go past the normal statute period based on the revenue agent’s judgment that an assessment will be possible after the 3-year point recognizes the limited time available to agents to finalize case-specific facts when the 3-year statute is about to expire. The IRS guidance also notes that the Credit Card Summons project examinations are generally likely to involve unreported income or fraud as well as failure to file information returns reporting foreign transfers. The guidance also states that other offshore examinations share many of the same challenges as Credit Card Summons project examinations, including complex examinations and the need to secure documents located outside the United States. IRS managers told us that this procedure for continuing examinations beyond the statute is cumbersome and time-consuming, and some agents are reluctant to use it because of concerns about barred statutes. Revenue agents told us that this reluctance stems from the culture of IRS examiners, in which agents are instructed from the time they are hired never to let an examination go past the statute of limitations for any reason. Despite subsequent assurances in IRS guidance, however, revenue agents told us that this ingrained reluctance to let the statute of limitations expire remains paramount.
All of the examinations allowed to extend past the statute date under this guidance represent a gamble on the part of IRS that the examination will ultimately meet one of the exceptions to the statute and an assessment will be allowed under the law. IRS records show that 1,942 offshore examinations were taken past the 3-year statute period from fiscal years 2002 through 2005. IRS ultimately made assessments on 63 percent of these examinations and these assessments were significantly higher than assessments from all other types of examinations, with a median assessment of about $17,500 versus about $5,800 from offshore examinations that were closed within the 3-year statute of limitations and $2,900 from all nonoffshore examinations closed within 3 years. IRS databases do not allow systematic analysis of the approximately 700 examinations that did not result in an assessment, so we do not know if these were accurate returns or if the discovered tax evasion just did not rise to the level of fraud or substantial understatement of income. For those examinations that closed with an assessment, longer examinations did not change the median assessment amount significantly for nonoffshore examinations. On the other hand, offshore examinations produced much larger median assessments than both shorter offshore examinations and all nonoffshore examinations when the examinations themselves took 3 years or more, as shown in figure 2. A similar relationship is found for field examinations alone, as shown in figure 3. Similarly, our analysis of assessment dollars generated per hour of examination time (including examinations both with and without assessments) showed that the yield increased markedly for offshore examinations that take more than 3 years. 
While average assessment dollars per hour of direct offshore examination time are about half of the average for nonoffshore examinations, the reverse is the case for examinations that go over 3 years: $6,458 per hour for offshore examinations compared to $3,432 per hour for nonoffshore examinations. The comparison is nearly the same for field examinations alone: $6,465 per hour for offshore field examinations and $3,454 per hour for nonoffshore field examinations. Revenue agents and managers told us that some developed case files are not opened for examination because insufficient time remains under the statute to make the examination worthwhile. They said that managers and agents have leeway in deciding which examinations to work because there are usually more developed case files waiting for agents than there are agents to work them. IRS wants agents to work examinations with a good likelihood of leading to meaningful assessments; managers told us they look for examinations that have both apparent noncompliance and sufficient time remaining within the statute to fully develop the apparent issues. Revenue agents and IRS managers told us that, in order to avoid violating the statute, they will often choose to examine case files with more time remaining under the 3-year statute of limitations over case files with less time remaining but with more likely or more substantial potential assessments. As a result, they explained, not all case files in the unassigned inventory of case files developed for examination are selected for examination, and many case files are “surveyed,” or closed without examination. Two IRS policies could contribute to closing a developed offshore case without an examination. One of these policies requires sorting the unassigned inventory to identify the areas most in need of examination. This policy includes statute year and statute date among the attributes used in sorting unassigned inventory. 
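The dollars-per-hour yield figures cited above are simple ratios: total assessment dollars divided by total direct examination hours, with examinations closed without an assessment contributing hours but no dollars. A minimal sketch of that computation, using hypothetical figures and a record layout of our own devising rather than actual IRS data:

```python
# Illustrative sketch (hypothetical figures): computing a yield-per-hour
# metric like the one in this report. Each record is one closed
# examination; exams closed without an assessment still add hours to the
# denominator but contribute $0 to the numerator.
def yield_per_hour(exams):
    """Return assessment dollars generated per direct examination hour."""
    total_dollars = sum(e["assessment"] for e in exams)
    total_hours = sum(e["hours"] for e in exams)
    return total_dollars / total_hours

# Hypothetical long-running offshore exams: one large assessment and one
# closed with no assessment.
offshore = [
    {"assessment": 900_000, "hours": 100},
    {"assessment": 0, "hours": 50},
]
print(round(yield_per_hour(offshore)))  # dollars per hour across both exams
```

Including the zero-assessment examinations in the denominator is what makes the metric a measure of overall yield on examination time rather than of successful examinations alone.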
A second policy requires that an examiner not begin an examination or requisition any return for audit without management approval if fewer than 12 months remain on the statutory period for assessment. As described earlier, offshore examinations typically require more time to develop than nonoffshore examinations, and as a result, offshore examinations in the queue for examination would typically be nearer the end of the assessment period than nonoffshore examinations. IRS managers explained that this attribute of offshore examinations can lead to leaving offshore cases in the queue until the statute period ends and then closing the case without an examination. Agents and managers also said that they often choose to end an ongoing examination nearing the end of the 3-year assessment period without making a complete assessment rather than risk taking the examination past the statute period, losing revenue, and facing disciplinary action. IRS agents and managers told us that they face difficult choices as an examination nears the end of the 3-year assessment period and the examination is incomplete. On the one hand, the examination can be discontinued. This choice is the safest for individual IRS agents and managers because it avoids the possibility of a Barred Statute Report and disciplinary actions. However, this choice also results in an assessment that does not accurately reflect the extent of a taxpayer’s compliance or noncompliance with tax laws because the examination is incomplete. Continuing the examination can result in an accurate assessment, but only if the examination demonstrates one or more of the exceptions to the statute described earlier. If the examination does not ultimately demonstrate fraud or another basis for an exception, IRS managers and agents wasted IRS resources because they are barred from making an assessment. 
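The 12-month policy described above reduces to a date comparison between today and the statute date on the case file. A minimal sketch under assumed inputs (the function name and the same-calendar-day approximation of "12 months" are ours, not IRS's):

```python
# Sketch (assumed interface) of the policy that an examiner may not open
# an examination without management approval if fewer than 12 months
# remain on the statutory period for assessment.
from datetime import date

def needs_management_approval(statute_date: date, today: date) -> bool:
    """True if fewer than 12 months remain before the statute date."""
    # Approximate "12 months" as the same calendar day one year out.
    one_year_out = today.replace(year=today.year + 1)
    return statute_date < one_year_out

# A case file whose statute expires in 8 months needs approval to open;
# one with about 2 years remaining does not.
print(needs_management_approval(date(2007, 3, 1), date(2006, 7, 1)))   # True
print(needs_management_approval(date(2008, 7, 15), date(2006, 7, 1)))  # False
```

Because offshore case files tend to sit in the queue longer, more of them would trip this check than comparable nonoffshore files, which is consistent with the managers' account of offshore cases being surveyed unexamined.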
Revenue agents told us that they believed that in some cases there is “money being left on the table” in the form of unexamined issues that could have led to assessments if there had been sufficient time to examine them. Even where there is sufficient time to work an examination, only a few years in which a taxpayer was using a particular scheme may be open to examination, and the early years of a scheme may be past their statute date before the examination even begins. For example, if IRS is examining a taxpayer’s 2005 tax return and discovers a significant understatement in the income that the taxpayer reported, the agent can examine some of the taxpayer’s previous returns; but unless the revenue agent and manager suspect fraud, in which case no statute of limitations applies, IRS must abide by the 3-year statute of limitations on assessments and cannot examine some prior years in which the taxpayer held money offshore illegally. A senior IRS official told us that this is a particularly significant problem because it is often in the first years of an offshore scheme that the taxpayer moves the most money offshore and the most egregious tax evasion takes place, so IRS is missing out on significant assessments by not being able to look back at previous tax returns. IRS revenue agents are not able to accurately estimate potential assessments for case files or tax years that go unexamined. Similarly, in cases where an examination is started and subsequently closed with some issues left unexamined because of the statute of limitations, it is not possible to estimate the likely assessment from the unexamined issues. As mentioned earlier, however, we found that 1,942 offshore examinations were allowed, either by IRS decision or by a voluntary statute extension signed by the taxpayer under examination, to exceed the 3-year statute of limitations. Of those, more than 700 were closed without an additional tax assessment. 
IRS officials told us that many of the offshore examinations that go past the 3-year statute of limitations are very difficult to work due to complex financial arrangements and that even with significantly more time, some particularly complex and well-hidden offshore schemes would remain very difficult to uncover. IRS data did not show the reasons that the 700 offshore examinations that went past the 3-year statute of limitations were closed without an assessment. Some offshore examinations exhibit compliance problems similar to those where Congress granted a change or exception to the statute in the past. Offshore examinations take longer than nonoffshore examinations for IRS to develop and examine for reasons such as technical complexity and the difficulty of obtaining information from foreign sources, and as a result, IRS may not complete assessments of all taxes owed. These problems are similar to problems giving rise to other changes and exceptions to the statute at both the federal and state levels over the years. These changes and exceptions provide precedent for changing the statute for offshore examinations. Offshore examinations present IRS with various enforcement problems. As discussed above, offshore examinations take longer to develop and examine. IRS officials told us that this is due to the examinations’ complexity and difficulty in identifying and obtaining information from foreign sources. Agents and managers also said that they often choose to end an ongoing examination nearing the end of the 3-year assessment period without making a complete assessment rather than risk taking the examination past the statute period, losing revenue, and facing disciplinary actions. Further, agents and managers explained that some taxpayers or their representatives employ dilatory, uncooperative tactics when dealing with IRS. 
In addition, we previously testified that the use of offshore schemes can also pose a threat to the integrity and fairness of our tax system by adversely affecting voluntary compliance if honest taxpayers believe that significant numbers of individuals are not paying their fair share of the tax burden. We reviewed 12 IRS offshore case files and found examples of (1) technical complexity, (2) difficulty in identifying and obtaining information from foreign sources, and (3) taxpayers or their representatives employing dilatory, uncooperative tactics when dealing with IRS. We also found a wide variety of offshore examinations, ranging from very simple cases to much more complex ones that had been under examination for years. In order to obtain illustrative examples of offshore examinations, we reviewed examinations that took a shorter than average number of days to complete, about an average number of days, and a longer than average number of days. We reviewed case files in two locations, and our reviews included both completed examinations and examinations still in progress. These examinations included some that resulted in no change to the tax the taxpayer owed. The two examinations described below include one that took a relatively low number of days and one that took a longer than average number of days. In the first examination, the taxpayer was identified as holding an offshore credit card in a country considered to be a tax haven. The taxpayer maintained that he did not have an offshore credit card. IRS used a summons to obtain records of a domestic rental car transaction that would identify the holder of the offshore credit card. While the name shown on the rental car records was similar to the taxpayer’s name, it was not the taxpayer’s name. After reviewing the rental car records, the revenue agent concluded that the taxpayer was not the holder of the offshore credit card. 
The examination had no other issues and resulted in no change in the amount of tax owed by the taxpayer. In conducting this examination, the revenue agent sent 4 pieces of correspondence to the taxpayer, conducted 1 interview with the taxpayer, notified the taxpayer of third-party contact, and used 1 summons to obtain domestic rental car records; the summons was returned 33 days after it was issued. In the second examination, the taxpayer had a number of businesses in the United States and in other countries, including at least one business in a tax haven country. It appeared that some of the taxpayer’s businesses paid consulting fees to other businesses the taxpayer owned, and consulting fees were paid into an offshore account in a tax haven country through which the taxpayer received funds via a credit card. IRS found it difficult to determine how much money was in the taxpayer’s offshore tax haven business and how the money got there. The money in that business, IRS told us, is the linchpin of the entire examination, which was still underway at the time of our review. During the 4 years that the examination had been underway, IRS opened examinations on the taxpayer’s spouse and on other businesses in other tax years. IRS has not been able to find where some of the money is going, although officials are confident that more is being hidden because the taxpayer had other businesses that made payments to the business in the offshore tax haven country. Over the 4 years of this examination, there have been at least 5 powers of attorney, 20 summonses, 39 contacts with the taxpayer’s power of attorney, 23 document requests, 5 missed appointments by the taxpayer or the taxpayer’s representative, 1 statute extension, 2 interview requests denied, 5 meetings with the taxpayer’s representative, 4 postponed appointments, 4 third-party contacts, and 2 occasions on which the taxpayer refused to supply information. 
The scheme began, as far as IRS can tell, in the late 1990s, but examinations of some early years of the taxpayer’s scheme were statutorily barred. This means that, when the examination eventually closes, IRS will not be able to assess any additional taxes for at least some tax years in which IRS agents found the taxpayer was holding money offshore, unless they determine that fraud was committed. Enforcement problems exhibited in the 12 cases we reviewed are similar to enforcement problems justifying changes and exceptions to the statute at both the federal and state levels over the years. For example, the statute was recently changed at both the federal and state levels to address specific compliance problems, such as dilatory tactics on the part of taxpayers and the use of technically complex transactions. The following details on legislative actions illustrate instances where changes and exceptions to the statute were granted at both the federal and state levels because of enforcement problems similar to those exhibited by offshore examinations, such as (1) time constraints on IRS; (2) taxpayers delaying examinations through dilatory, uncooperative tactics; and (3) failure of taxpayers to provide required information. The Revenue Act of 1934 provided the current 3-year statute. In making the change in 1934 from 2 to 3 years, the Senate Report noted that experience showed that the 2-year period was “too short in a substantial number of large cases, resulting oftentimes in hastily prepared determinations, with the result that additional burdens are thrown upon taxpayers in contesting ill-advised assessments. In other cases, revenue is lost by reason of the fact that sufficient time is not allowed for disclosure of all the facts.” As discussed above, Congress has also provided exceptions to this 3-year assessment period. For example, the exception for filing a false or fraudulent return dates back to the Revenue Act of 1916. 
Where this exception applies, the assessment can be made at any time. Similarly, the exception for significant omissions of gross income dates back to the Revenue Act of 1934. Where this exception applies, the tax may be assessed at any time within 6 years after the return is filed. According to the legislative history for the 1934 Act, this provision was added to enlarge the scope of the existing exception allowed for false or fraudulent returns while limiting the exception where a taxpayer may have made an honest mistake and it would be unfair to keep the statute open indefinitely. The exception to the statute of limitations for failure to report certain foreign transactions dates back to the Taxpayer Relief Act of 1997. This exception was included along with certain other changes designed to simplify the formation and operation of international joint ventures. More recently, Congress changed the statute to provide IRS with additional time to make assessments in the case of unreported listed transactions. With the American Jobs Creation Act of 2004, Congress extended the statute for unreported listed transactions to 1 year after the earlier of (1) the date the information required to be reported is provided or (2) the date a material advisor meets the requirements for providing a list of investors in the listed transaction. Listed transactions are complex transactions that manipulate parts of the tax code or regulations and are typically buried among “legitimate” transactions reported on tax returns. Because the transactions are often composed of many pieces located in several parts of a complex tax return, they are essentially hidden from plain sight, which contributes to the difficulty of determining the scope of the abusive shelter problem. 
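The assessment windows discussed above follow a simple pattern: 3 years from filing by default, 6 years for a significant omission of gross income, and no limit at all for a false or fraudulent return. The sketch below illustrates the general rules only; the actual Internal Revenue Code provisions carry further conditions not modeled here, and the function name is our own:

```python
# Simplified sketch of the general assessment-window rules described in
# this report (not the full statutory conditions).
from datetime import date

def assessment_deadline(filed: date, substantial_omission=False, fraud=False):
    """Return the last date for assessment, or None if unlimited."""
    if fraud:
        return None  # a false or fraudulent return may be assessed any time
    years = 6 if substantial_omission else 3
    return filed.replace(year=filed.year + years)

filed = date(2003, 4, 15)
print(assessment_deadline(filed))                             # 2006-04-15
print(assessment_deadline(filed, substantial_omission=True))  # 2009-04-15
print(assessment_deadline(filed, fraud=True))                 # None
```

The gamble described earlier in this report is visible in this structure: an examination carried past the default 3-year deadline yields an allowable assessment only if one of the longer windows turns out to apply.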
Often lacking economic substance or a business purpose other than generating tax benefits, abusive shelters are promoted by some tax professionals, often in confidence, for significant fees, sometimes with the participation of tax-indifferent parties, such as foreign or tax-exempt entities. They may involve unnecessary steps and flow-through entities, such as partnerships, which make detection of these transactions more difficult. The transactions are marketed to wealthy individuals, large corporations, and small business taxpayers. Section 6111 of the Internal Revenue Code requires the promoter or other tax shelter organizer to report such transactions to IRS. Further, Department of the Treasury regulations require promoters to maintain lists of investors who have entered into the transactions and investors to disclose the transactions into which they have entered. In a March 2006 report, for example, the Treasury Inspector General for Tax Administration (TIGTA) described a type of listed transaction called Son of Boss (Bond and Option Sales Strategies). According to TIGTA, this transaction used flow-through entities, such as partnerships, and various financial products to add steps and complexity to transactions that had little or no relationship to the investor’s business or the asset sale creating the sheltered gain. TIGTA further explained that the losses generated from the transactions were often reported among “legitimate” items in several parts of the tax return. TIGTA concluded that, taken together, these characteristics, especially the use of flow-through entities, made it very difficult for IRS to detect the Son of Boss abusive tax shelter through its traditional process of screening returns individually for questionable items. 
TIGTA noted that examinations of abusive tax shelters can take significant amounts of time even for the most experienced examiners because such shelters often involve complex, technical transactions that take on different variations and require examining multiple flow-through entities to make a proper tax determination. At the time of our review, IRS representatives stated that sufficient time had not elapsed to determine to what extent, if any, the 1-year extension for unreported listed transactions improved examination effectiveness. An IRS analyst explained, however, that the 1-year extension resulted in increased disclosures of previously undisclosed listed transactions. This analyst stated that 35 taxpayers made 74 separate disclosures about previously unreported listed transactions and that 8 of these 74 disclosures were duplicates. At the state level, California recently extended its statute from 4 to 8 years for taxpayers who invest in an abusive tax shelter (ATS) transaction. Such transactions include IRS listed transactions and other schemes of particular importance to California. According to the California Legislative Analyst’s Office (LAO), the key feature of these transactions is that they have no true economic purpose but exist solely for reasons of tax avoidance. Among their characteristics is the use of (1) pass-through entities such as partnerships, (2) third-party facilitators, and (3) offshore accounts or facilitators. The LAO further explained that ATS transactions can be quite difficult to identify and often even harder to understand, even for trained tax auditors. As with IRS, California experienced increased disclosure as a result of extending its assessment period from 4 to 8 years for taxpayers involved in ATS transactions. A manager at the California Franchise Tax Board (FTB) stated that the newly enacted 8-year statute had not been applied because most tax shelter examinations are closed within the normal 4-year period or by requesting voluntary waivers. 
It should be noted that California’s normal 4-year assessment period is 1 year longer than the federal 3-year assessment period. The FTB manager also cited two sources of examinations in which the normal 4-year statute had expired but taxpayers were willing to work to resolve their tax shelter issues. These sources were the Self Compliance Letters and the California Tax Shelter Resolution Initiative. The California FTB used a self compliance letter to solicit amended returns from taxpayers for at least 1 year in which the 4-year statute had expired. This letter cited the 8-year statute. At the time of our review, 13 taxpayers had filed amended returns, which reported tax and interest of about $2.3 million. Additional penalties may apply to these 13 taxpayers. Another 48 taxpayers agreed to file amended returns with estimated taxes and penalties of about $7 million. Under the California Resolution Initiative, the FTB was accepting applications and drafting closing agreements with another 181 taxpayers who had at least 1 tax year for which the 4-year statute had either expired or was about to expire. The justification for extending the statute for unreported listed transactions at the federal level and for ATS transactions in California generally involved qualitative factors. A House of Representatives Report accompanying the American Jobs Creation Act of 2004 states that “some taxpayers and their advisors have been employing dilatory tactics and failing to cooperate with IRS in an attempt to avoid liability because of the expiration of the statute of limitations. The Committee accordingly believes that it is appropriate to extend the statute of limitations for unreported listed transactions.” While not enacted, Senate bill 476 (CARE Act of 2003) included a provision similar to the provision of the American Jobs Creation Act of 2004 that extended the statute for unreported listed transactions. 
A Senate Report accompanying Senate bill 476 states that “…extending the statute of limitations if a taxpayer required to disclose a listed transaction fails to do so will afford IRS additional time to discover the transaction if the taxpayer does not disclose it.” Similarly, the California LAO stated that the time extension for ATS transactions will allow the FTB to “more fully develop cases that represent ATS activity and result in a greater sustainment rate at the appeal level.” In addition to affording more time for IRS to discover undisclosed transactions, the Senate report accompanying Senate bill 476 also stated that “extending the statute of limitations if a taxpayer required to disclose a listed transaction fails to do so will encourage taxpayers to provide the required disclosure….” In analyzing the legislation that extended the California assessment period from 4 to 8 years, the California FTB noted that “some taxpayers will continue to engage in tax avoidance transactions until the risks and costs of engaging in the transactions are significantly increased.” More generally, tax evasion by some taxpayers can affect the perceptions of other compliant taxpayers about the fairness and equity of our tax system. In its report accompanying Senate bill 476, the Senate Committee on Finance stated that the committee “is aware that individuals and corporations are increasingly using sophisticated transactions to avoid or evade Federal income tax. Such a phenomenon could pose a serious threat to the efficacy of the tax system because of both the potential loss of revenue and the potential threat to the integrity of the self-assessment system.” Similarly, the California LAO concluded that tax avoidance “by some taxpayers shifts the relative tax burden towards taxpayers already in compliance. This principle of fairness has ramifications for the tax system itself. 
A perception that the tax system is not equitable could result in noncompliance and tax avoidance by an increasing proportion of taxpayers.” The Supreme Court found that statutes of limitations find their justification in necessity and convenience. According to a Supreme Court opinion, statutes of limitations are practical and pragmatic devices to spare the court from litigation of stale claims, and the citizen from being put to his defense after memories have faded, witnesses have died or disappeared, and evidence has been lost. The opinion goes on to say that statutes of limitations are by definition arbitrary. Historically, the assessment statute of limitations has varied in length. For example, the Revenue Act of 1919 set the statute of limitations for tax assessments at 5 years. The statute was changed to 2 years in 1932. The current 3-year statute stems from the Revenue Act of 1934. As described above, Congress granted changes and exceptions to the statute over the years to address various types of enforcement problems. Given the similarities between the enforcement problems exhibited by offshore examinations and the enforcement problems giving rise to past changes and exceptions to the statute, precedent exists for changing the statute for offshore examinations. Changing the statute for offshore examinations would necessitate weighing advantages and disadvantages. If Congress wishes to change the statute for examinations where offshore compliance is the major issue, certain design options, such as limiting any examination and possible assessment to those issues attributable to offshore transactions or only suspending the statute while IRS is waiting for taxpayer responses to IRS data requests, might mitigate some of the disadvantages of the statute extension. Changing the statute for examinations in which offshore transactions are a major enforcement problem will require weighing both advantages and disadvantages. 
In addition to advantages, such as fairness or deterrence, mentioned earlier as justification for extending the statute for unreported listed transactions and ATS transactions, interested parties from various organizations that represent taxpayers or work with tax issues mentioned other advantages and disadvantages for an exception to the statute for offshore examinations. For example, they mentioned the ability of IRS to look back at several tax years once an offshore scheme is identified as an advantage of such an exception. On the other hand, they mentioned that such an exception would further complicate the tax code by adding another provision that would most likely include complicated criteria addressing offshore transactions. Table 7 summarizes their views on such an exception in general. In commenting on an exception to the statute for offshore examinations, these interested parties also pointed out advantages and disadvantages for various design options that could be used to implement such an exception. These options relate to (1) the scope of an exception and (2) the way in which IRS is afforded additional time to address the enforcement problems presented by offshore examinations. Scope refers to (1) which taxpayers will be subject to the exception and (2) the extent to which the exception allows IRS to examine a tax return. The way in which IRS is afforded additional time refers to (1) an extension to the statute, such as for an additional 3 years from the filing date of a tax return or (2) a suspension of the statute pending resolution of a compliance problem, such as slow taxpayer response to IRS records requests. A suspension is triggered by a specified event or action. Table 8 presents the views of these interested parties on the advantages and disadvantages of these design options. 
If Congress wishes to change the statute for examinations in which offshore noncompliance is a problem, several of the design options mentioned by interested parties might mitigate some of the disadvantages of a statute exception for such examinations. To help clarify their suggestions, we also developed some hypothetical examples to illustrate their points. Specific suggestions that we heard included the following:

Making an exception apply to all taxpayers having offshore accounts/entities may mitigate concerns about taxpayer uncertainty and lack of closure.

Limiting any examination and possible assessment only to those issues attributable to offshore transactions might mitigate concerns about unfairly exposing taxpayers to open-ended IRS examinations or “fishing expeditions” that could result in assessments for issues unrelated to offshore transactions. For example, an examination triggered by a taxpayer possessing an offshore credit card could enable IRS to examine depreciation expense for the plant and equipment used in the taxpayer’s domestic business, which the taxpayer might perceive as unfair.

Suspending the statute until a specific issue is resolved, such as taxpayers not responding promptly to IRS requests for records, might mitigate concerns about an across-the-board extension of the 3-year assessment period.

Specifying a length of time for an initial extension, such as 1 year, and requiring a court or review board’s approval for any subsequent extensions might also mitigate taxpayer concerns about potential IRS abuse of an exception to the statute for offshore examinations. This option might allay concerns about unwarranted application by IRS of a case-by-case exception to the statute.

Establishing a materiality test might mitigate concerns that IRS would focus on taxpayers having insignificant issues. This test could be, for example, (1) any amount greater than a percentage of a specific amount shown on a tax return, such as 20 percent of total assets for taxpayers operating a business, or (2) any amount greater than an absolute dollar amount, such as any amount greater than $10,000. This option might allay concerns about including all taxpayers, particularly those having legitimate offshore transactions that are not substantial in value.

Limiting the exception to a case-by-case approach might mitigate concerns about taxpayers being unfairly subjected to an extended assessment period when they have legitimate offshore transactions. For example, an exception to the statute could be limited to taxpayers identified on client lists of known promoters of offshore schemes. This option might allay concerns about including all taxpayers, particularly those having legitimate reasons for offshore transactions.

Maintaining symmetry between the statute for assessments and the statute for refunds by matching any exception to the statute for assessments with the same exception to the statute for refunds might mitigate taxpayer concerns about the unfairness or one-sidedness of an exception to the statute for assessments. If the statute were suspended until taxpayers respond to an IRS request for records, for example, the statute for refunds should also be suspended until the taxpayers respond to the request.

Assuring access to IRS appeals procedures and to the Tax Court might mitigate taxpayers’ concerns about the potential for IRS abuse as well as provide due process should they decide to challenge IRS’s use of such an exception to the statute. For example, procedures requiring TIGTA to investigate any taxpayer allegations of denial of due process could be mandated. 
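A materiality test of the kind suggested by these interested parties can be sketched concretely. In the sketch below, the 20 percent and $10,000 thresholds are the illustrative figures from the text, not statutory criteria, and combining both forms of the test in one function is our own simplification (the text presents them as alternative designs):

```python
# Sketch of a suggested materiality test for an offshore statute
# exception. Thresholds are the report's illustrative figures; treating
# either threshold as sufficient is an assumption for illustration.
def is_material(offshore_amount, total_assets=None,
                pct_threshold=0.20, dollar_threshold=10_000):
    """True if the offshore amount exceeds either suggested threshold."""
    if total_assets is not None and offshore_amount > pct_threshold * total_assets:
        return True
    return offshore_amount > dollar_threshold

print(is_material(15_000))                       # True: over $10,000
print(is_material(8_000, total_assets=30_000))   # True: over 20% of assets
print(is_material(5_000, total_assets=100_000))  # False: under both tests
```

Either form of the test would screen out taxpayers whose legitimate offshore transactions are not substantial in value, which is the concern the suggestion is meant to address.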
As with all forms of tax evasion, it is important that IRS pursue offshore tax evasion because it adds to the tax gap, increases the tax burden on honest taxpayers, and poses a threat to the integrity and fairness of our tax system by adversely affecting voluntary compliance when honest taxpayers come to believe that other people are getting away with not paying their fair share. Offshore tax evasion is special, though, in that the examinations that IRS pursues typically take much longer to develop and examine because of the inherent difficulty in identifying and obtaining information from foreign sources, the often dilatory and uncooperative tactics on the part of taxpayers and their representatives, and the technical complexity of the examinations. Nevertheless, the statute of limitations that applies to offshore examinations is the same as that applying to all returns. As a result, some suspected tax evasion that IRS identifies goes unexamined when revenue agents and managers choose not to start work on offshore examinations because too little time remains under the statute or choose to cut work off early in order to avoid a barred statute. There are exceptions that permit IRS to continue examinations past the 3-year point and still make assessments, but in many offshore examinations IRS has only 3 years to complete its work. Furthermore, taking an examination past the 3-year point in anticipation of finding fraud or one of the other exceptions permitted under the statute represents a gamble by IRS that the investment of additional examination resources will ultimately result in an assessment being allowed under the law. Past Congresses have recognized the need for statute exceptions in the face of similar compliance and enforcement obstacles. In the case of the statute exception for unreported listed transactions, Congress delegated to IRS the responsibility for defining the specific circumstances triggering the exception. 
A statute exception for offshore examinations that balances the additional layers of difficulty for IRS in detecting and examining offshore cases with fairness to taxpayers involved in legitimate offshore financial activity would strengthen IRS's efforts to combat offshore tax evasion. Additional time to complete examinations would give IRS greater flexibility in choosing which examinations to open and when to close them. This would likely lead to fewer examinations in which revenue agents abandon the pursuit of apparent noncompliance simply because they are running out of time. In order to provide IRS with additional flexibility in combating offshore tax evasion schemes, Congress should make an exception to the 3-year civil statute of limitations assessment period for taxpayers involved in offshore financial activity. Similar to Congress's approach to unreported listed transactions, Congress may wish to establish a process wherein IRS would identify the types of offshore activity to which a statute exception would apply. We received e-mail and oral comments from IRS's SB/SE division and the IRS General Counsel's office about a draft of this report. The officials making comments noted that a longer statute for offshore examinations makes sense and should enhance compliance. They also noted that the offshore-to-nonoffshore comparisons in the draft of this report were typically made for all types of examinations rather than for field examinations only. They observed that field examinations are by far the most common type of examination used for offshore tax evasion cases and suggested that a comparison of just field examinations would also be useful to the reader. We agreed and changed our discussion of offshore-to-nonoffshore examinations to include comparisons of all types of examinations collectively as well as of field examinations alone.
Also in their comments, IRS officials clarified other technical and legal issues, which we incorporated in this report where appropriate. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies to the Chairman and Ranking Member, House Committee on Ways and Means; the Secretary of the Treasury; the Commissioner of Internal Revenue; and other interested parties. Copies will be made available to others upon request. This report will also be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-9110 or brostekm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II. The objectives of this report were to (1) compare the length of and recommended assessments yielded by offshore and nonoffshore examinations and determine the effect of the 3-year statute of limitations on recommended offshore assessments, (2) determine whether enforcement problems posed by offshore examinations are similar to those where Congress has previously granted an exception to the statute, and (3) identify possible advantages and disadvantages of an exception to the statute for offshore examinations. To compare the length of and recommended assessments yielded by offshore and nonoffshore examination cases and determine the effect of the statute of limitations on offshore assessments, we examined the Internal Revenue Service (IRS) Audit Information Management System Reference (AIMS) database, which holds all of IRS's data about completed examinations. The database covered a variety of taxpayers: individuals, businesses, and corporations, including large corporations.
We analyzed fiscal years 2002 through 2005, the most recent years for which IRS had data at the time of our evaluation. We grouped all examinations maintained in the AIMS database by whether they were offshore examinations (as determined by the project code under which all examinations are categorized) or not. We found both offshore and nonoffshore examinations among all of the types of taxpayers in AIMS, with the exception of excise tax examinations, which were found only in the nonoffshore subset. We used the AIMS data to analyze the number of days cases spent in both development and examination and the recommended assessments from both offshore and nonoffshore examinations. We further subdivided the data to compare only field examinations, because these were the most common type of offshore examination. To assess the reliability of the AIMS data, we reviewed AIMS documentation and conducted electronic testing of key variables. Based on this work, we determined that the AIMS data were sufficiently reliable for our purposes. We spoke with 17 IRS revenue agents and managers with expertise in the offshore area about their experience in conducting and closing offshore examinations. We also examined 12 offshore examination case files to gain an understanding of the circumstances that IRS revenue agents face in dealing with noncompliant taxpayers. We spoke with IRS representatives to gain an understanding of how cases are identified for examination and to determine the process by which an offshore case is developed and examined. In addition, we reviewed various IRS documents related to the statute of limitations on assessments, including exceptions to the statute. To determine whether enforcement problems posed by offshore cases are similar to those where Congress granted an exception to the statute in the past, we identified enforcement problems posed by offshore examinations.
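The grouping and comparison described above can be sketched in a few lines. Everything here is hypothetical: the project codes, field names, and dollar amounts are invented for illustration and are not drawn from AIMS.

```python
# Minimal sketch of grouping examinations by project code and comparing
# median days and median recommended assessments. All codes, fields, and
# values below are invented; the actual analysis used IRS's AIMS database.
from statistics import median

OFFSHORE_CODES = {"0198", "0231"}  # hypothetical offshore project codes

exams = [
    {"project_code": "0198", "days": 540, "assessment": 60_000},
    {"project_code": "0231", "days": 480, "assessment": 45_000},
    {"project_code": "0500", "days": 150, "assessment": 20_000},
    {"project_code": "0611", "days": 120, "assessment": 15_000},
]

def summarize(records):
    """Split records into offshore/nonoffshore and compute medians."""
    offshore = [r for r in records if r["project_code"] in OFFSHORE_CODES]
    other = [r for r in records if r["project_code"] not in OFFSHORE_CODES]
    return {
        "offshore_median_days": median(r["days"] for r in offshore),
        "other_median_days": median(r["days"] for r in other),
        "offshore_median_assessment": median(r["assessment"] for r in offshore),
        "other_median_assessment": median(r["assessment"] for r in other),
    }
```

On these toy records, the offshore group shows both longer median durations and larger median assessments, the same kind of contrast the report's analysis draws from the real data.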
To do so, we examined IRS's AIMS database and case files and spoke with IRS representatives. We also identified enforcement problems where Congress granted an exception to the statute in the past. To do so, we researched the history of the federal statute of limitations for assessments. We also reviewed legislation proposed between 2003 and 2006 that included references to either offshore tax evasion or the statute of limitations. This included the American Jobs Creation Act of 2004 and other legislative proposals related to the statute. In addition, we reviewed reports prepared by the Treasury Inspector General for Tax Administration and California state agencies related to tax avoidance issues and the statute. We supplemented these reviews with discussions with representatives of the California Franchise Tax Board. To identify advantages and disadvantages of granting an exception to the statute for offshore examinations, we interviewed representatives of various organizations to obtain views on mandating an exception to the statute for offshore examinations. Such an exception would afford IRS more time to develop and examine offshore cases. These organizations included the American Association of Attorney-Certified Public Accountants, American Bar Association, American Institute of Certified Public Accountants, National Association of Enrolled Agents, National Association of Tax Professionals, National Society of Accountants, and National Society of Tax Professionals. We also interviewed representatives of various organizations within the Department of the Treasury to obtain their views. These organizations included the IRS Small Business/Self Employed division, the Taxpayer Advocate Service, the IRS Office of Chief Counsel, and the Department of the Treasury Office of Tax Policy.
In addition to the contact named above, David Lewis, Assistant Director; Perry Datwyler; Evan Gilman; Shirley Jones; John Mingus; and Jeff Schmerling made key contributions to this report.
Much offshore financial activity is not illegal, but numerous illegal offshore schemes have been devised to hide or disguise the true ownership of income streams and assets. IRS studies show lengthy development times for some offshore cases, which suggests that time or the lack thereof could be an impediment to effectively addressing offshore schemes. GAO was asked to (1) compare offshore and nonoffshore examination cases and determine whether the 3-year statute of limitations reduces offshore assessments, (2) compare enforcement problems posed by offshore cases to those where Congress has previously granted an exception to the statute, and (3) identify possible advantages and disadvantages of an exception to the statute for offshore cases. To address these objectives, GAO analyzed IRS data, reviewed examination files and other documents, and interviewed IRS officials and others in the tax practitioner and policy communities. Examinations involving offshore tax evasion take much more time to develop and complete than other examinations for reasons such as technical complexity and the difficulty of obtaining information from foreign sources. When examinations are completed, the resulting median assessment from an offshore examination is almost three times larger than from other types of examinations. However, due to the 3-year statute, the additional time needed to complete an offshore examination means that IRS sometimes has to prematurely end offshore examinations and sometimes chooses not to open one at all, despite evidence of likely noncompliance. Although data were not available to measure the effect of the statute on assessments, IRS agents and managers told GAO that overall assessments for offshore cases are lower than they would be if IRS had more time to work these cases. Some offshore examinations exhibit enforcement problems similar to those where Congress has granted a statute change or exception in the past. 
For example, Congress changed the statute for certain abusive tax shelters that involved technical complexity and dilatory tactics on the part of taxpayers. Through discussions with IRS officials and others in the tax practitioner and policy communities, GAO identified advantages and disadvantages of such an exception. Advantages included increased flexibility for IRS to direct enforcement resources to egregious cases of noncompliance and a possible deterrent to future noncompliance. Disadvantages included increased uncertainty and lack of closure for taxpayers. Commenters also discussed design options to mitigate some of the disadvantages of a statute extension, such as making an exception apply to all taxpayers having offshore accounts or entities, thereby mitigating taxpayer uncertainty and lack of closure.
USPTO helps promote industrial and technological progress in the United States and strengthen the national economy by administering the laws relating to patents and trademarks. A critical part of its mission is examining patent applications and issuing patents. A patent is a property right granted by the U.S. government that secures for an inventor, generally for 20 years from the date of initial application in the United States, the exclusive right to make, use, offer for sale, or sell the invention in exchange for disclosing it. The number of patent filings to USPTO continues to grow and, by 2009, the agency is projecting receipt of over 450,000 patent applications annually. Patent processing essentially involves three phases: pre-examination, examination, and post-examination. The process begins when an applicant files a patent application and pays a filing fee. During the pre-examination phase, patent office staff document receipt of the application and process the application fee, scan and convert the paper documents to electronic format, and conduct an initial review of the application and classify it by subject matter. During the subsequent examination phase, the application is assigned to a patent examiner with expertise in the subject area who searches existing U.S. and foreign patents, journals, and other literature and, as necessary, contacts the applicant to resolve questions and obtain additional information to determine whether the proposed invention can be patented. Examiners document their determinations on the applications in formal correspondence, referred to as office actions. Applicants may abandon their applications at any time during this process. If the examiner determines that a patent is warranted, a supervisor reviews and approves it and the applicant is informed of the outcome. The application then enters the post-examination phase and, upon payment of an "issue fee," a patent is granted and published.
Historically, the time from the date that a patent application is filed to the date that the patent is either granted or the application is abandoned has been called "patent pendency." Because of long-standing concerns about the increasing volume and complexity of patent applications, USPTO has been undertaking projects to automate its patent process for about the past two decades. In 1983, the agency began one of its most substantial projects, the Automated Patent System (APS), with the intent of automating all aspects of the patent process. APS was to be deployed in 1990 and, when completed, consist of five integrated subsystems that would (1) fully automate incoming patent applications; (2) allow examiners to electronically search the text of granted U.S. patents and access selected abstracts of foreign patents; (3) scan and allow examiners to retrieve, display, and print images of U.S. patents; (4) help examiners classify patents; and (5) support on-demand printing of copies of patents. In reporting on APS more than 10 years following its inception, we noted that USPTO had deployed and was operating and maintaining certain parts of the system, supporting text search, limited document imaging, order-entry and patent printing, and classification activities. However, our report raised concerns about the agency's ability to adequately plan and manage this major project, pointing out that its processes for exercising effective management control over APS were weak. Ultimately, USPTO never fully developed and deployed APS to achieve the integrated, end-to-end patent processing system that it envisioned. The agency reported spending approximately $1 billion on this initiative from 1983 through 2002. In addition, in 1998, the agency implemented an Internet-based electronic filing system at a reported cost of $10 million, enabling applicants to submit their applications online.
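Patent pendency, as defined at the start of this section, is simply the elapsed time from an application's filing date to the date it is granted or abandoned. A minimal sketch, with invented dates:

```python
# Toy illustration of the "patent pendency" metric defined above:
# elapsed days from an application's filing date to the date it is
# either granted or abandoned. The dates used here are invented.
from datetime import date

def pendency_days(filed: date, disposed: date) -> int:
    """Days from filing to grant or abandonment."""
    return (disposed - filed).days
```

For an application filed March 1, 2001, and granted September 1, 2003, this yields a pendency of 914 days, roughly two and a half years.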
Further, through 2002, the agency continued to enhance its capabilities that enabled examiners to search patent images and text, and upgraded its patent application classification and tracking systems. To help the agency address the challenges of reviewing an increased volume of more complex patent applications and of reducing the length of time it takes to process them, Congress passed a law requiring USPTO to improve patent quality, implement electronic government, and reduce pendency. In response to the law, in June 2002, the agency embarked on an aggressive 5-year modernization plan outlined in its 21st Century Strategic Plan, which was updated to include stakeholder input and re-released in February 2003. The strategic plan outlines 38 initiatives related to the patent organization that focus on three crosscutting strategic themes: capability, productivity, and agility. The capability theme focuses on efforts to enhance patent quality through workforce and process improvements; the productivity theme focuses on efforts to decrease the pendency of patent applications; and the agility theme focuses on initiatives to electronically process patent applications. To fully fund the initiatives in its strategic plan, the agency requested authority from Congress to increase the user fees it collects from applicants and to spend all of these fees on patent processing. Legislation enacted in December 2004 increased the fees available to USPTO; however, the increases are only effective for fiscal years 2005 and 2006. As was its intent with APS, USPTO has continued to pursue a paperless, end-to-end, automated patent process. In 2001, the agency initiated its Tools for Electronic Application Management (TEAM) automation project, aiming to deliver an end-to-end capability to process patent applications electronically by fiscal year 2006.
Under the TEAM concept, the agency had planned to integrate its existing electronic filing system and the classification and search capabilities from the earlier APS project with new document management and workflow capabilities, and with image- and text-based processing of patent applications to achieve a sophisticated means of handling documents and tracking patent applications throughout the examination process. By implementing image- and text-based capabilities, the agency had anticipated that patent examiners would be able to view and process applications online, as well as manipulate and annotate text within a patent application, thus eliminating manual functions and improving processing accuracy, reliability, and productivity, as well as the quality of the patents that are granted. With the issuance of its 21st Century Strategic Plan, however, USPTO altered its approach to accomplishing patent automation. The strategic plan, among other things, identified the agency's high-level information technology goals for fully automating the patent process as part of the 5-year modernization effort. It incorporated automation concepts from the TEAM project, but announced an accelerated goal of delivering an operational system to electronically process patent applications by October 1, 2004, earlier than had been scheduled under TEAM. In carrying out its patent automation plans, USPTO has delivered a number of important processing capabilities through the various information systems that it has implemented. For example, an automated search capability, available since 1986, has eliminated the need for patent examiners to manually search for prior art in paper files, and the classification and fee accounting capabilities have facilitated assigning applications to the correct subject areas and managing collections of applicable fees.
In addition, the electronic filing system that has existed since 1998 has enabled applicants to file their applications with the agency via the Internet. Using the Internet, patent applicants also can review the status of their applications online and the public can electronically access and search existing published patents. Further, an imaging system implemented in August 2004, called the Image File Wrapper, has given USPTO the capability to scan patent applications and related documents, which can then be stored in a database and retrieved and reviewed online. The agency's progress in implementing its automated patent functions is illustrated in figure 1. Nonetheless, even with the progress that has been made, these automated functions collectively have not provided the fully integrated, electronic patent processing capability articulated in the agency's automation plans. Two of the key systems that it is relying on to further enhance its capabilities, the electronic filing system and the Image File Wrapper, have not yielded the processing improvements that the agency has deemed essential to successfully operate in a fully integrated, electronic environment. Specifically, in implementing its electronic filing system, USPTO had projected significant increases in processing efficiencies and quality by providing patent applicants the capability to file online, thus alleviating the need for them to send paper applications to the agency or for patent office staff to manually key application data into the various processing systems. However, even after enhancements in 2002 and 2004, the system did not produce the level of usage among patent filers that the agency had anticipated.
For example, although USPTO’s preliminary justification for acquiring the electronic filing system had projected an estimated usage rate of 30 percent in fiscal year 2004, patent officials reported that, as of April 2005, fewer than 2 percent of all patent applications were being submitted to the agency via this system. As a result, anticipated processing efficiencies and quality improvements through eliminating the manual re-keying of application data have not been realized. In September 2004, USPTO convened a forum of senior officials representing the largest U.S. corporate and patent law firm filers to identify causes of patent applicants’ dissatisfaction with the electronic filing system and determine how to increase the number of patents being filed electronically. According to the report resulting from this forum, the majority of participants viewed the system as cumbersome, time-consuming, costly, inherently risky, and lacking a business case to justify its usage. Among the barriers to system usage that the participants specifically identified were (1) users’ lack of a perceived benefit from filing applications electronically, (2) liability concerns associated with filers’ unsuccessful use of the system or unsuccessful transmission of patent applications to USPTO, and (3) significant disruptions to filers’ normal office/corporate processes and workflow caused by factors such as difficulty in using the automated tools and the inability to download necessary software through firewalls. Several concerns raised during the forum mirrored those that USPTO had earlier identified in a 1997 analysis of a prototype for electronic filing. However, at the time of our review, the agency had not completed plans to show how it would address the concerns regarding use of the electronic filing system. The agency’s Image File Wrapper also had not resulted in critical patent processing improvements.
The system includes image technology for storage and maintenance of records associated with patent applications and provides the capability to scan each page of a submitted paper application and convert the pages into electronic images. Patent examiners in a majority of the focus groups that we conducted commented that the system had provided them with the ability to easily access patent applications and related information. In addition, patent officials stated that the system had enabled multiple users to simultaneously access patent applications. Nonetheless, patent officials acknowledged that the system had experienced performance and usability problems. Specifically, in speaking about the system’s performance, the officials and agency documentation stated that, after its implementation, the Image File Wrapper had been unavailable for extended periods of time or had experienced slow response times, resulting in decreased productivity. To lessen the impact of this problem, patent officials said they had developed a backup tool to store images of an examiner’s most recent applications, which can be accessed when the Image File Wrapper is not available. Further, in commenting on this matter, the USPTO director stated that the system’s performance had begun to show improvement. Regarding the usability of the system, patent officials and focus group results indicated that the Image File Wrapper did not fully meet processing needs. For example, the officials stated that, as an image-based system, the Image File Wrapper did not fully enable patent examiners to electronically search, manipulate, or track and log changes to application text, which were key processing features emphasized in the agency’s automation plans. 
The examiners also commented that a limited capability to convert images to text, which was intended to assist them in copying and reusing information contained in patent files, was error-prone, contributing to their need to download and print the applications for review. Further, because the office’s legacy systems were not integrated with the Image File Wrapper, examiners were required to manually print correspondence from these systems, which then had to be scanned into the Image File Wrapper in order to be included as part of an applicant’s electronic file. Patent and Office of Chief Information Officer (OCIO) officials largely attributed the system’s performance and usability problems to the agency’s use of software that it acquired from the European Patent Office. The officials explained that, to meet the accelerated date for delivering an operational system as outlined in its strategic plan, the agency had decided in 2002 to acquire and use a document-imaging system owned by the European Patent Office, called ePhoenix, rather than develop the integrated patent processing system that had been described in its automation plans. According to the officials, the director, at that time, had considered ePhoenix to be the most appropriate solution for further implementing USPTO’s electronic patent processing capabilities given (1) pressures from Congress and from customers and stakeholders to implement an electronic patent processing system more quickly than originally planned and (2) the agency’s impending move to its new facility in Alexandria, Virginia, which did not include provisions for transferring and storing paper patent applications. However, they indicated that the original design of the ePhoenix system had not been compatible with USPTO’s technical platform for electronic patent processing. 
Specifically, they stated that the European Patent Office had designed the system to support only the printing of files for subsequent manual reviews, rather than for electronic review and processing. In addition, they stated that the system had not been designed for integration with other legacy systems or to incorporate additional capabilities, such as text processing, with the existing imaging capability. Further, an official of the European Patent Office noted that ePhoenix had supported their office’s much smaller volume of patent applications. Thus, with USPTO’s patent application workload being approximately twice as large as that of its European counterpart, the agency placed greater stress on the system than it was originally designed to accommodate. OCIO officials told us that, although they had tested certain aspects of the system’s capability, many of the problems encountered in using the system were not revealed until after the system was deployed and operational. Patent and OCIO officials acknowledged that the agency had purchased ePhoenix although senior officials were aware that the original design of the system had not been compatible with USPTO’s technological platform for electronic patent processing. They stated that, despite knowing about the problems and risks associated with using the software, the agency had nonetheless proceeded with this initiative because senior officials, including the former USPTO director, had stressed their preference for using ePhoenix in order to expedite the implementation of a system. Patent and OCIO officials acknowledged that management judgment, rather than a rigorous analysis of costs, benefits, and alternatives, had driven the agency’s decision to use this system. 
To a significant extent, USPTO’s difficulty in realizing intended improvements through its electronic filing system and Image File Wrapper can be attributed to the fact that the agency took an ad hoc approach to planning and managing its implementation of these systems, driven in part by its accelerated schedule for implementing an automated patent processing capability. The Clinger-Cohen Act of 1996, as well as information technology best practices and our prior reviews, emphasize the need for agencies to undertake information technology projects based on well-established business cases that articulate agreed-upon business and technical requirements; effectively analyze project alternatives, costs, and benefits; include measures for tracking projects through their life cycle against cost, schedule, benefit, and performance targets; and ultimately, provide the basis for credible and informed decision making and project management. Yet, patent officials did not rely on established business cases to guide their implementation of these key automation initiatives. The absence of sound project planning and management for these initiatives has left the agency without critical capabilities, such as text processing, and consequently, has impeded its successful transition to an integrated and paperless patent processing environment. The Under Secretary of Commerce for Intellectual Property, who serves as the director of USPTO, stated at the conclusion of our review that he recognized and intended to implement measures to address the weaknesses in the agency’s planning and management of its automated patent systems. USPTO’s ineffective planning for and management of its patent automation projects, in large measure, can be attributed to enterprise-level, systemic weaknesses in the agency’s information technology investment management processes.
A key requirement of the Clinger-Cohen Act is that agencies have established processes, such as capital planning and investment control, to help ensure that information technology projects are implemented at acceptable costs and within reasonable and expected time frames, and contribute to tangible, observable improvements in mission performance. Such processes guide the selection, management, and evaluation of information technology investments by aiding management in considering whether to undertake a particular investment in information systems and providing a means to obtain necessary information regarding the progress of an investment in terms of cost, capability of the system to meet specified requirements, timeliness, and quality. Further, our Enterprise Architecture Framework emphasizes that information technology projects should show evidence of compliance with the organization’s enterprise architecture, which serves as a blueprint for systematically and completely defining an organization’s current (baseline) operational and technology environment and as a roadmap toward the desired (target) state. Effective implementation of an enterprise architecture can help an agency by informing, guiding, and constraining the decisions being made for the agency, and subsequently decrease the risk of buying and building systems that are duplicative, incompatible, and unnecessarily costly to maintain and interface. At the time of our study, USPTO had begun instituting certain essential information technology investment management mechanisms, such as a framework for its enterprise architecture and components of a capital planning and investment control process. However, it had not yet established the necessary linkages between its enterprise architecture and its capital planning and investment control process to ensure that its automation projects would comply with the architecture or fully instituted enforcement mechanisms for investment management.
For example, USPTO drafted a capital planning and investment control guide in June 2004 and issued an agency administrative order on its integrated investment decision practices in February 2005. However, according to senior officials, many of the processes and procedures in the guide had not been completed and fully implemented and it was unclear how the agency administrative order was being applied to investments. In addition, while the agency had completed the framework for its enterprise architecture, it had not aligned its business processes and information technology in accordance with the architecture. According to OCIO officials, the architecture review board responsible for enforcing compliance with the architecture was not yet in place; thus, current architecture reviews were of an advisory nature and were not required for system implementation. Our analysis of architecture review documents that system officials provided for the electronic filing system and the Image File Wrapper confirmed that the agency had not rigorously assessed either of these systems’ compliance with the enterprise architecture. Adding to these conditions, a study commissioned by the agency in 2004 found that its Office of Chief Information Officer was not organized to help the agency accomplish the goals in its automation strategy and that its investment management processes did not ensure appropriate reviews of automation initiatives. USPTO has an explicit responsibility to ensure that the automation initiatives that it is counting on to enhance its overall patent process are consistent with the agency’s priorities and needs and are supported by the necessary planning and management to successfully accomplish this. At the conclusion of our review, the agency’s director and its chief information officer acknowledged the need to strengthen the agency’s investment management processes and practices and to effectively apply them to USPTO’s patent automation initiatives. 
Since 2000, USPTO has also taken steps intended to help attract and retain a qualified patent examination workforce. The agency has enhanced its recruiting efforts and has used many human capital flexibilities to attract and retain qualified patent examiners. However, during the past 5 years, its recruiting efforts and use of benefits have not been consistently sustained, and officials and examiners at all levels in the agency told us that the economy has more of an impact on the agency's ability to attract and retain examiners than any actions taken by the agency. Consequently, how USPTO's actions will affect its long-term ability to maintain a highly qualified workforce is unclear. While the agency has been able to meet its hiring goals, attrition has recently increased. USPTO's recent recruiting efforts have incorporated several measures that we and others identified as necessary to attract a qualified workforce. First, in 2003, to help select qualified applicants, the agency identified the knowledge, skills, and abilities that examiners need to effectively fulfill their responsibilities. Second, in 2004, its permanent recruiting team, composed of senior and line managers, participated in various recruiting events, such as job fairs, conferences sponsored by professional societies, and visits to the 10 schools that the agency targeted based on the diversity of their student populations and the strength of their engineering and science programs. Finally, for 2005, USPTO developed a formal recruiting plan that, among other things, identified hiring goals for each technology center and described the agency's efforts to establish ongoing partnerships with the 10 target schools. In addition, the agency trained its recruiters in effective interviewing techniques to help them better describe the production system and incorporated references to the production-oriented work environment in its recruitment literature.
USPTO has also used many of the human capital benefits available under federal personnel regulations to attract and retain qualified patent examiners. Among other benefits, it has offered recruitment bonuses ranging from $600 to over $10,000; a special pay rate for patent examiners that is 10 percent above federal salaries for comparable jobs; non-competitive promotion to the full performance level; and flexible working schedules, including the ability to schedule hours off during midday. According to many of the supervisors and examiners who participated in our focus groups, these benefits were a key reason they were attracted to the agency and are a reason they continue to stay. The benefits that examiners most frequently cited as important were the flexible working schedules and competitive salaries. However, it is too soon to determine the long-term effect of the agency’s efforts, in part because neither its recruiting efforts nor the human capital benefits have been consistently sustained due to budgetary constraints. For example, in 2002 the agency suspended reimbursements to examiners for law school tuition because of funding limitations, although it resumed the reimbursements in 2004 when funding became available. Examiners in our focus groups expressed dissatisfaction with the inconsistent availability of these benefits, in some cases saying that the suspension of benefits, such as law school tuition reimbursement, provided them an incentive to leave the agency. More recently, in March 2005, USPTO proposed to eliminate or modify other benefits, such as the ability of examiners to earn credit hours and to set their own work schedules. Another, and possibly the most important, factor adding to the uncertainty of USPTO’s recruiting efforts is the unknown potential impact of the economy, which, according to agency officials and examiners, has a greater effect on recruitment and retention than any actions the agency may take. 
Both agency officials and examiners told us that when the economy picks up, more examiners tend to leave the agency and fewer qualified candidates are attracted to it. On the other hand, when there is a downturn in the economy, the agency’s ability to attract and retain qualified examiners increases because of perceived job security and competitive pay. When discussing their reasons for joining USPTO, many examiners in our focus groups cited job security and the lack of other employment opportunities, making comments such as, “I had been laid off from my prior job, and this was the only job offer I got at the time.” This relationship between the economy and USPTO’s hiring and retention success is part of the reason why the agency has met its hiring goals for the last several years. However, the agency has recently experienced a rise in attrition rates. In particular, a high level of attrition among younger, less experienced examiners could affect its efforts to maintain a highly qualified patent examination workforce. Attrition of examiners with 3 years or less experience is a significant loss for the agency because considerable time and dollar resources are invested to help new examiners become proficient during their first few years. While USPTO has undertaken a number of important and necessary actions to attract and retain qualified patent examiners, it continues to face three long-standing human capital challenges which, if not addressed, could also undermine its recent efforts. First, although organizations with effective human capital models have strategies to communicate with employees and involve them in decision making, the lack of good communication and collaboration has been a long-standing problem at USPTO. We found that the agency does not have a formal communication strategy and does not actively seek input from examiners on key management decisions. 
Most of the emphasis has been on enhanced communication among managers, but not between managers and other levels of the organization, such as patent examiners. Patent examiners and supervisory patent examiners in our focus groups frequently stated that communication with agency management was poor and that managers provided them with inadequate or no information, creating an atmosphere of distrust of management. The examiners also said that management was out of touch with them and their concerns and that communication with managers tended to be one-way and hierarchical, with little opportunity for feedback. Management officials told us that informal feedback can always be provided by anyone in the organization—for example, through an e-mail to anyone in management. The lack of communication between management and examiners is exacerbated by the contentious working relationship between management and union officials and by the complexity of the rules about what level of communication can occur between managers and examiners without involving the union. Some managers alluded to this contentious relationship as one of the reasons why they had limited communication with patent examiners, who are represented by the union even if they decide not to join it. Specifically, these managers believed they could not solicit the input of employees directly without engaging the union. Another official, however, told us that nothing prevents the agency from holding "town hall" type meetings to discuss potential changes in policies and procedures, as long as the agency does not promise examiners a benefit that affects their working conditions. Union officials agreed that USPTO can invite comments from examiners on a plan or proposal; however, if the proposal concerns a negotiating issue, the agency must consult the examiners' union, which is their exclusive representative with regard to working conditions.
Second, human capital models suggest that agencies should periodically assess their monetary awards systems to ensure that they help attract and retain qualified staff. However, patent examiners' awards are based largely on the number of applications they process, and the assumptions on which application processing quotas are based have not been updated since 1976. Patent examiners and management have differing opinions on whether these assumptions need to be updated. Examiners in our focus groups told us that, in the last several decades, the tasks associated with and the complexity of processing applications have greatly increased while the time allowed has not. As a result, many of the examiners and supervisory patent examiners in our focus groups and respondents to previous agency surveys reported that examiners do not have enough time to conduct high-quality reviews of patent applications. The examiners noted that these inadequate time frames create a stressful work environment and are cited in the agency's exit surveys as a primary reason that examiners leave the agency. In contrast, USPTO managers stated that the time estimates used in establishing production quotas do not need to be adjusted because the efficiencies gained through actions such as the greater use of technology have offset the time needed to address the greater complexity of the applications and the increase in the number of claims. Moreover, they said that for an individual examiner, reviews of applications that take more time than the estimated average are generally offset by other reviews that take less time. Finally, counter to current workforce models, USPTO does not require ongoing technical education for patent examiners, which could negatively affect the quality of its patent examination workforce.
Instead, the agency requires newly hired examiners to take extensive training only during their first year of employment; all subsequent required training is focused on developing legal expertise. Almost all patent examiners are required to take a range of ongoing training in legal matters, including patent law. In contrast, patent examiners are not required to undertake any ongoing training to maintain expertise in their area of technology, even though the agency acknowledges that such training is important, especially for electrical and electronic engineers. In 2001 the agency stated, “Engineers who fail to keep up with the rapid changes in technology, regardless of degree, risk technological obsolescence.” However, agency officials told us that examiners automatically maintain currency with their technical fields by just doing their job. Patent examiners and supervisory patent examiners disagreed, stating that the literature they review in applications is outdated, particularly in rapidly evolving technologies. The agency does offer some voluntary in-house training, such as technology fairs and industry days at which scientists and others are invited to present lectures to patent examiners that will help keep them current on the technical aspects of their work. In addition, the agency offers voluntary external training and, for a small number of examiners, pays conference or workshop registration fees. Agency officials could provide no data on the extent to which examiners have taken advantage of such training opportunities. 
Our work found that, in carrying out its strategic plan to become a more productive and responsive organization, USPTO has made greater progress in implementing its initiatives to make the patent organization more capable by improving the quality of examiners' skills and work processes than it has in implementing its productivity and agility initiatives aimed at decreasing the length of time to process a patent application and improving electronic processing. Specifically, of the activities planned for completion by December 2004, the agency has fully or partially implemented all 23 of the initiatives related to its capability theme to improve the skills of employees, enhance quality assurance, and alter the patent process through legislative and rule changes. In contrast, it has partially implemented only 1 of the 4 initiatives related to the productivity theme to restructure fees and expand examination options for patent applicants and has fully or partially implemented 7 of the 11 initiatives related to the agility theme to increase electronic processing of patent applications and to reduce examiners' responsibilities for literature searches. Table 1 provides our assessment of each of the strategic plan initiatives. Agency officials cited the need for additional funding as the main reason that some initiatives have not been implemented. With the passage of legislation in December 2004 to restructure and increase the fees available to USPTO, the agency is reevaluating the feasibility of many initiatives that it had deferred or suspended. In summary, through its attempts to implement an integrated, paperless patent process over the past two decades, USPTO has delivered a number of important automated capabilities. Nonetheless, after spending over a billion dollars on its efforts, the agency is still not effectively positioned to process patent applications in a fully automated environment.
Moreover, when and how it will actually achieve this capability is uncertain. Largely as a result of ineffective planning and management of its automated capabilities, system performance and usability problems have limited the effectiveness of key systems that the agency has implemented to support critical patent processes. Although USPTO's director and its chief information officer have recognized the need to improve the agency's planning and management of its automation initiatives, weaknesses in the key information technology management processes needed to guide the agency's investments in patent automation, such as incomplete capital planning and investment controls, could impede their ability to do so. Thus, the agency risks continuing to implement information technology that does not support its needs and that threatens its overall goal of achieving a fully electronic capability for processing its growing patent application workload. Further, to improve its ability to attract and retain the highly educated and qualified patent examiners it needs, USPTO has taken steps recognized by experts as characteristic of highly effective organizations. However, without an effective communication strategy and a collaborative culture that includes all layers of the organization, the agency's efforts could be undermined. The absence of effective communication and collaboration has created distrust and a significant divide between management and examiners on important issues such as the appropriateness of the production model and the need for technical training. Unless the agency begins to develop an open, transparent, and collaborative work environment, its efforts to hire and retain examiners may be adversely affected in the long run.
Overall, while USPTO has progressed in implementing strategic plan initiatives aimed at improving its organizational capability, the agency attributes its limited implementation of other initiatives intended to reduce pendency and improve electronic patent application processing primarily to the need for additional funding. Given the weaknesses in USPTO’s information technology investment management processes, we recommended that the agency, before proceeding with any new patent automation initiatives, (1) reassess and, where necessary, revise its approach for implementing and achieving effective use of information systems supporting a fully automated patent process; (2) establish disciplined processes for planning and managing the development of patent systems based on well-established business cases; and (3) fully institute and enforce information technology investment management processes and practices to ensure that its automation initiatives support the agency’s mission and are aligned with its enterprise architecture. Further, in light of its need for a more transparent and collaborative work environment, we recommended that the agency develop formal strategies to (1) improve communication between management and patent examiners and between management and union officials and (2) foster greater collaboration among all levels of the organization to resolve key issues, such as the assumptions underlying the quota system and the need for required technical training. USPTO generally agreed with our findings, conclusions, and recommendations regarding its patent automation initiatives and acknowledged the need for improvements in its management processes by, for example, developing architectural linkages to the planning process and implementing a capital planning and investment control guide. 
Nonetheless, the agency stated that it only partially agreed with several material aspects of our assessment, including our recommendation that it reassess its approach to automating its patent process. Further, the agency generally agreed with our findings, conclusions, and recommendations regarding workforce collaboration and stated that it would develop a communication plan and labor-management strategy and would educate and inform employees about progress on initiatives, successes, and lessons learned. In addition, USPTO indicated that it would develop a more formalized technical training program for patent examiners to ensure that their skills remain fresh and ready to address state-of-the-art technology. Mr. Chairman, this concludes our statement. We would be pleased to respond to any questions that you or other Members of the Committee may have at this time. For further information, please contact Anu K. Mittal at (202) 512-3841 or Linda D. Koontz at (202) 512-6240. They can also be reached by e-mail at mittala@gao.gov and koontzl@gao.gov, respectively. Other individuals making significant contributions to this testimony were Valerie C. Melvin, Assistant Director; Cheryl Williams, Assistant Director; Mary J. Dorsey, Vijay D'Souza, Nancy Glover, Vondalee R. Hunt, and Alison D. O'Neill. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The United States Patent and Trademark Office (USPTO) is responsible for issuing patents that protect new ideas and investments in innovation and creativity. However, the volume and complexity of patent applications to the agency have increased significantly in recent years, lengthening the time needed to process patents and raising concerns about the validity of the patents that are issued. Annual applications have grown from about 185,000 to over 350,000 in the last 10 years and are projected to exceed 450,000 by 2009. Coupled with this growth is a backlog of about 750,000 applications. Further complicating matters, the agency has faced difficulty in attracting and retaining qualified staff to process patent applications. USPTO has long recognized the need to automate its patent processing and, over the past two decades, has been engaged in various automation projects. More recently, in its strategic plan, the agency articulated its approach for accelerating the use of automation and improving workforce quality. In two reports issued in June 2005, GAO discussed progress and problems that the agency faces as it develops its electronic patent process, its actions to attain a highly qualified patent examination workforce, and the progress of the agency's strategic plan initiatives. At Congress's request, this testimony summarizes the results of these GAO reports. As part of its strategy to achieve an electronic patent process, USPTO had planned to deliver an operational patent system by October 2004. It has delivered important capabilities, for example, allowing patent applicants to electronically file and view the status of their applications and the public to search published patents. Nonetheless, after spending over $1 billion on its efforts from 1983 through 2004, the agency has not yet developed the fully integrated, electronic patent process articulated in its automation plans, and when and how it will achieve this process is uncertain. 
Key systems that the agency is relying on to help reach this goal--an electronic application filing system and a document imaging system--have not provided capabilities that are essential to operating in a fully electronic environment. Contributing to this situation is the agency's ineffective planning for and management of its patent automation initiatives, due in large measure to enterprise-level, systemic weaknesses in its information technology investment management processes. Although the agency has begun instituting essential investment management mechanisms, such as its enterprise architecture framework, it has not yet finalized its capital planning and investment control process or established the necessary linkages between that process and its architecture to guide the development and implementation of its information technology. The Under Secretary of Commerce for Intellectual Property and the agency's chief information officer have acknowledged the need for improvement. USPTO has taken steps to attract and retain a highly qualified patent examination workforce by, for example, enhancing its recruiting efforts and using many of the human capital benefits available under federal personnel regulations. However, it is too soon to determine the long-term success of the agency's efforts because they have been in place only a short time and, owing to budgetary constraints, have not been consistently sustained. Long-term uncertainty about the agency's hiring and retention success is also due to the unknown impact of the economy. In the past, the agency had more difficulty recruiting and retaining staff when the economy was doing well. Further, USPTO faces three long-standing challenges that could undermine its efforts: the lack of an effective strategy to communicate and collaborate with examiners, outdated assumptions in the production quotas that it uses to reward examiners, and the lack of required ongoing technical training for examiners.
Patent examiners said the lack of a collaborative work environment has lowered morale and created an atmosphere of distrust between management and patent examiners. Overall, USPTO has made more progress in implementing its strategic plan initiatives aimed at increasing its patent processing capability through workforce and process improvements than in its initiatives to decrease patent pendency and improve electronic processing. It has fully or partially implemented all 23 capability initiatives, but only 8 of 15 initiatives to reduce patent pendency and improve electronic processing. The agency cited a lack of funding as the primary reason for not implementing all initiatives.